Science.gov

Sample records for 3d video coding

  1. Video coding and transmission standards for 3D television — a survey

    NASA Astrophysics Data System (ADS)

    Buchowicz, A.

    2013-03-01

    The emerging 3D television systems require effective techniques for transmission and storage of data representing a 3-D scene. The 3-D scene representations based on multiple video sequences or multiple views plus depth maps are especially important, since they can be processed with existing video technologies. A review of the video coding and transmission techniques is presented in this paper.

  2. Standards-based approaches to 3D and multiview video coding

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.

    2009-08-01

    The extension of video applications to enable 3D perception, which typically is considered to include a stereo viewing experience, is emerging as a mass market phenomenon, as is evident from the recent prevalence of major 3D cinema title releases. For high quality 3D video to become a commonplace user experience beyond limited cinema distribution, adoption of an interoperable coded 3D digital video format will be needed. Stereo-view video can also be studied as a special case of the more general technologies of multiview and "free-viewpoint" video systems. The history of standardization work on this topic is actually richer than people may typically realize. The ISO/IEC Moving Picture Experts Group (MPEG), in particular, has been developing interoperability standards to specify various such coding schemes since the advent of digital video as we know it. More recently, the ITU-T Video Coding Experts Group (VCEG) has been involved as well in the Joint Video Team (JVT) work on development of 3D features for H.264/14496-10 Advanced Video Coding, including Multiview Video Coding (MVC) extensions. This paper surveys the prior, ongoing, and anticipated future standardization efforts on this subject to provide an overview and historical perspective on feasible approaches to 3D and multiview video coding.

  3. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  4. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools.

  5. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a promising glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to understand in depth the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which exploits the inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in the lower layers on the performance of this error concealment algorithm is also presented.

  6. The future of 3D and video coding in mobile and the internet

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2013-09-01

    The current Internet success has already changed our social and economic world and is still continuing to revolutionize information exchange. The exponential increase in the amount and types of data currently exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status and trends in the design of solutions and research activities in the future Internet from the point of view of managing the growth of bandwidth requirements and the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the expected arrival, in the near future, of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling the new media architectures as well as the new applications and services. The common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.

  7. 3-D model-based frame interpolation for distributed video coding of static scenes.

    PubMed

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.

  8. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a multi-view distributed video coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike-moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike-moment domain is applied. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated through rate-distortion performance for different inter-view and temporal estimation quality conditions.

  9. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time even if 3D standards are today under definition. The support for multiple 3D formats will be important for bringing 3D into home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video or not, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies the statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame sequential 3D format. In the frame sequential 3D format, the feature points are oscillating from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
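
    For illustration only (a hedged sketch, not the paper's actual detector), the core idea for formats that pack a stereo pair side by side in one frame can be prototyped by matching features between the two frame halves and checking the statistics of their positional differences; the function name and thresholds below are assumptions.

        import cv2
        import numpy as np

        # Hypothetical helper: does a frame look like a side-by-side packed stereo pair?
        def looks_side_by_side(frame_bgr, min_matches=30):
            h, w = frame_bgr.shape[:2]
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            left, right = gray[:, :w // 2], gray[:, w // 2:]
            orb = cv2.ORB_create(nfeatures=1000)
            kl, dl = orb.detectAndCompute(left, None)
            kr, dr = orb.detectAndCompute(right, None)
            if dl is None or dr is None:
                return False
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)
            if len(matches) < min_matches:
                return False
            # Positional differences between corresponding features in the two halves.
            dx = np.array([kr[m.trainIdx].pt[0] - kl[m.queryIdx].pt[0] for m in matches])
            dy = np.array([kr[m.trainIdx].pt[1] - kl[m.queryIdx].pt[1] for m in matches])
            # A stereo pair shows near-zero vertical offsets and small horizontal disparities.
            return np.median(np.abs(dy)) < 2.0 and np.median(np.abs(dx)) < 0.1 * w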

  10. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  11. The Emerging MVC Standard for 3D Video Services

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Wang, Ye-Kui; Ugur, Kemal; Hannuksela, Miska M.; Lainema, Jani; Gabbouj, Moncef

    2008-12-01

    Multiview video has gained wide interest recently. The huge amount of data that must be processed by multiview applications is a heavy burden for both transmission and decoding. The Joint Video Team has recently devoted part of its effort to extending the widely deployed H.264/AVC standard to handle multiview video coding (MVC). The MVC extension of H.264/AVC includes a number of new techniques for improved coding efficiency, reduced decoding complexity, and new functionalities for multiview operations. MVC takes advantage of some of the interfaces and transport mechanisms introduced for the scalable video coding (SVC) extension of H.264/AVC, but the system-level integration of MVC is conceptually more challenging, as the decoder output may contain more than one view and can consist of any combination of the views at any temporal level. The generation of all the output views also requires careful consideration and control of the available decoder resources. In this paper, multiview applications and solutions to support generic multiview as well as 3D services are introduced. The proposed solutions, which have been adopted into the draft MVC specification, cover a wide range of requirements for 3D video related to the interface, transport of MVC bitstreams, and MVC decoder resource management. The features that have been introduced in MVC to support these solutions include marking of reference pictures, support for efficient view switching, structuring of the bitstream, signalling of view scalability supplemental enhancement information (SEI), and parallel decoding SEI.

  12. 3-D video techniques in endoscopic surgery.

    PubMed

    Becker, H; Melzer, A; Schurr, M O; Buess, G

    1993-02-01

    Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligatures of larger vessels, which are difficult to perform without an impression of space. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany). PMID:8050009

  13. DYNA3D Code Practices and Developments

    SciTech Connect

    Lin, L.; Zywicz, E.; Raboin, P.

    2000-04-21

    DYNA3D is an explicit, finite element code developed to solve high rate dynamic simulations for problems of interest to the engineering mechanics community. The DYNA3D code has been under continuous development since 1976[1] by the Methods Development Group in the Mechanical Engineering Department of Lawrence Livermore National Laboratory. The pace of code development activities has substantially increased in the past five years, growing from one to between four and six code developers. This has necessitated the use of software tools such as CVS (Concurrent Versions System) to help manage multiple version updates. While on-line documentation with an Adobe PDF manual helps to communicate software developments, periodically a summary document describing recent changes and improvements in DYNA3D software is needed. The first part of this report describes issues surrounding software versions and source control. The remainder of this report details the major capability improvements since the last publicly released version of DYNA3D in 1996. Not included here are the many hundreds of bug corrections and minor enhancements, nor the development in DYNA3D between the manual release in 1993[2] and the public code release in 1996.

  14. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  15. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The applications range from space shuttle docking, planetary entry, descent and landing, and surveillance to autonomous and manned ground vehicle navigation and 3D imaging through particle obscurants.

  16. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
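
    As a hedged sketch of the general point-cloud-to-point-cloud alignment idea (not the registration algorithm proposed in the paper), one iteration of an ICP-style procedure pairs nearest neighbours between the video-derived cloud and the sensor cloud and then solves for the rigid transform in closed form.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(src, dst):
            """One ICP-style iteration: match each source point to its nearest
            destination point, then estimate the best rigid transform (Kabsch)."""
            _, idx = cKDTree(dst).query(src)        # nearest neighbours in the sensor cloud
            matched = dst[idx]
            mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # avoid a reflection solution
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_d - R @ mu_s
            return src @ R.T + t, R, t

        # Iterate a few times, e.g.:
        # video_cloud, R, t = icp_step(video_cloud, sensor_cloud)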

  17. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still-photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test-bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  18. Development of MPEG standards for 3D and free viewpoint video

    NASA Astrophysics Data System (ADS)

    Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony

    2005-11-01

    An overview of 3D and free viewpoint video is given in this paper with special focus on related standardization activities in MPEG. Free viewpoint video allows the user to freely navigate within real world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free viewpoint video systems, highlighting standards-conforming realization using MPEG-4. Then the principles of 3D video are introduced, providing the user with a 3D depth impression of the observed scene. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free viewpoint video, is available or will be available in the near future, and that there is a clear demand from industry and users for such applications. 3DTV at home and free viewpoint video on DVD will be available soon, and will create huge new markets.

  19. Super deep 3D images from a 3D omnifocus video camera.

    PubMed

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  20. 3D Multigroup Sn Neutron Transport Code

    2001-02-14

    ATTILA is a 3D multigroup transport code with arbitrary-order anisotropic scattering. The transport equation is solved in first-order form using a tri-linear discontinuous spatial differencing on an arbitrary tetrahedral mesh. The overall solution technique is source iteration with DSA acceleration of the scattering source. Anisotropic boundary and internal sources may be entered in the form of spherical harmonics moments. Alpha and k eigenvalue problems are allowed, as well as fixed source problems. Forward and adjoint solutions are available. Reflective, vacuum, and source boundary conditions are available. ATTILA can perform charged particle transport calculations using continuous slowing down (CSD) terms. ATTILA can also be used to perform infrared steady-state calculations for radiative transfer purposes.

  1. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The method can be applied to arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high speed. It is an automatic conversion scheme and hence produces the 3D video output directly, without human intervention; with the above-mentioned features, it is a good choice for efficient monoscopic-to-stereoscopic video conversion.
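
    A minimal, hypothetical sketch of the underlying idea: assign depth from vertical image position (a simplification of the paper's use of the bottom-most pixels of foreground objects) and synthesize a second view by shifting pixels in proportion to their nearness.

        import numpy as np

        def depth_from_vertical_position(height, width):
            """Assume pixels lower in the frame are nearer (0.0) and higher ones farther (1.0)."""
            rows = np.linspace(1.0, 0.0, height)        # top row = far, bottom row = near
            return np.tile(rows[:, None], (1, width))

        def synthesize_right_view(left, depth, max_disparity=16):
            """Very simple depth-image-based rendering: near pixels get larger horizontal shifts."""
            h, w = depth.shape
            right = np.zeros_like(left)
            shift = ((1.0 - depth) * max_disparity).astype(int)
            for y in range(h):
                for x in range(w):
                    xs = x - shift[y, x]
                    if 0 <= xs < w:
                        right[y, xs] = left[y, x]
            return right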

  2. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and understanding it is therefore important for the creation of 3D stereoscopic content. Most studies on visual attention have focused on still images or 2D video. Only a very few studies have investigated eye movement patterns in 3D stereoscopic moving sequences, and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment that we conducted using an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that the extent of areas of interest is not necessarily wider in 3D. We found a very strong content dependency in the difference of density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and that fixation durations were overall lower when observers viewed the 3D stereoscopic version.

  3. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  4. Stereoscopic 3D video games and their effects on engagement

    NASA Astrophysics Data System (ADS)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D has on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  5. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly and the trend of this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into point, line, polygon and solid geometry according to camcorder parameters such as focal length and angle of view. For the video segment and the video frame, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods in detail using the structured query language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
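
    A hedged illustration of the spatial-query idea (not the authors' schema): given a camera position, azimuth and angle of view, a frame's horizontal field of view can be approximated as a 2D polygon in the spirit of VFFovCone and tested against query geometry; the helper below and its parameters are assumptions.

        import math
        from shapely.geometry import Point, Polygon

        def fov_cone(cam_xy, azimuth_deg, angle_of_view_deg, max_range):
            """Approximate a frame's horizontal field of view as a triangle."""
            cx, cy = cam_xy
            half = math.radians(angle_of_view_deg) / 2.0
            az = math.radians(azimuth_deg)
            left = (cx + max_range * math.sin(az - half), cy + max_range * math.cos(az - half))
            right = (cx + max_range * math.sin(az + half), cy + max_range * math.cos(az + half))
            return Polygon([cam_xy, left, right])

        # "Which frames see this landmark?" -- a spatial-relation query in the spirit of the model.
        landmark = Point(120.0, 35.0)
        cone = fov_cone((119.9, 34.9), azimuth_deg=45.0, angle_of_view_deg=60.0, max_range=0.5)
        print(cone.intersects(landmark))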

  6. Cylindrical 3D video display observable from all directions

    NASA Astrophysics Data System (ADS)

    Endo, Tomohiro; Kajiki, Yoshihiro; Honda, Toshio; Sato, Makoto

    2000-05-01

    We propose a 3D video display technique that allows multiple viewers to observe 3D images from 360 degrees of arc horizontally without 3D glasses. This technique uses a cylindrical parallax barrier and a 1D light source array. We have developed an experimental display using this technique and have demonstrated 3D images observable from 360 degrees of arc horizontally without 3D glasses. Since this technique is based on the parallax panoramagram, the parallax number and resolution are limited by diffraction at the parallax barrier. To avoid these limits, we improved the technique by revolving the parallax barrier. We have been developing a new experimental display using this improved technique. The display is capable of showing cylindrical 3D video images within a diameter of 100 mm and a height of 128 mm. Images are described with a resolution of 1254 pixels circularly and 128 pixels vertically, and refreshed at 30 Hz. Each pixel has a viewing angle of 60 degrees that is divided into 70 views; therefore the angular parallax interval of each pixel is less than 1 degree. In such a case, observers barely perceive the parallax as discrete. The pixels are arranged on a cylindrical surface, so the produced 3D images can be observed from all directions.

  7. Use scenarios: mobile 3D television and video

    NASA Astrophysics Data System (ADS)

    Strohmeier, Dominik; Weitzel, Mandy; Jumisko-Pyykkö, Satu

    2009-02-01

    The focus of 3D television and video has been on technical development, while hardly any attention has been paid to user expectations and needs for related applications. The objective of this study is to examine user requirements for mobile 3D television and video in depth. We conducted two qualitative studies, focus groups and probe studies, to improve the understanding of the user perspective. Eight focus groups were carried out with altogether 46 participants, focusing on use-scenario development. The data collection of the probe study was done over a period of 4 weeks in the field with nine participants to reveal intrinsic user needs and expectations. Both studies were conducted and analyzed independently so that they did not influence each other. The results of both studies provide novel aspects of users, system and content, and context of use. In the paper, we present personas as first archetype users of mobile 3D television and video. Putting these personas into contexts, we summarize the results of our studies and previous related work in the form of use scenarios to guide the user-centered development of 3D television and video.

  8. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  9. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.

  10. Efficient streaming of stereoscopic depth-based 3D videos

    NASA Astrophysics Data System (ADS)

    Temel, Dogancan; Aabed, Mohammed; Solh, Mashhour; AlRegib, Ghassan

    2013-02-01

    In this paper, we propose a method to extract depth from motion, texture and intensity. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the colored reference video, using texture, motion, luminance and chrominance content, to extract the depth map. The processing of each channel in the YCbCr color space is conducted separately. We tested this approach on different video sequences with different monocular properties. The results of our simulations show that the extracted depth maps generate a 3D video with quality close to the video rendered using the ground-truth depth map. We report objective results using 3VQM and subjective analysis via comparison of rendered images. Furthermore, we analyze the savings in bitrate as a consequence of eliminating the need for two video codecs, one for the reference color video and one for the depth map. In this case, only the depth cues are sent as side information alongside the color video.
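
    A loose, hypothetical sketch of combining per-channel cues in the YCbCr space into a relative depth map; the specific cues (frame-difference motion, Laplacian texture, luminance) and the weights are assumptions, not the method evaluated in the paper.

        import cv2
        import numpy as np

        def rough_depth_cues(prev_bgr, curr_bgr, w_motion=0.5, w_texture=0.3, w_luma=0.2):
            """Heuristic: more motion, more texture and brighter pixels are treated as nearer."""
            ycc_prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
            ycc_curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
            y = ycc_curr[:, :, 0]
            motion = np.abs(ycc_curr - ycc_prev).mean(axis=2)   # per-pixel frame difference
            texture = np.abs(cv2.Laplacian(y, cv2.CV_32F))      # local detail on the luma channel
            norm = lambda m: (m - m.min()) / (m.ptp() + 1e-6)
            return w_motion * norm(motion) + w_texture * norm(texture) + w_luma * norm(y)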

  11. Quality assessment of adaptive 3D video streaming

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Gutiérrez, Jesús; García, Narciso

    2013-03-01

    The streaming of 3D video content is currently a reality that expands the user experience. However, because of the variable bandwidth of the networks used to deliver multimedia content, a smooth and high-quality playback experience cannot always be guaranteed. Using segments in multiple video qualities, HTTP adaptive streaming (HAS) of video content is a relevant advancement with respect to classic progressive download streaming. It largely resolves these issues by offering significant advantages in terms of both user-perceived Quality of Experience (QoE) and resource utilization for content and network service providers. In this paper we discuss the impact on the end user of possible HAS client behaviors while adapting to the network capacity. This has been done through an experiment testing the end-user response to quality variations during the adaptation procedure. The evaluation has been carried out through a subjective test of the end-user response to various possible client behaviors for increasing, decreasing, and oscillating quality in 3D video. In addition, some typical HAS impairments during adaptation have been simulated and their effects on end-user perception assessed. The experimental conclusions provide good insight into the user's response to different adaptation scenarios and to the visual impairments causing visual discomfort, and can be used to develop adaptive streaming algorithms that improve the end-user experience.
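
    For illustration only (not the client behaviors evaluated in the paper), a minimal HAS rate-adaptation rule combining measured throughput and buffer level might look like the sketch below; the bitrate ladder, names and thresholds are assumptions.

        BITRATES_KBPS = [750, 1500, 3000, 6000]     # hypothetical 3D representations in a manifest

        def pick_representation(throughput_kbps, buffer_s, current_idx,
                                safety=0.8, low_buffer_s=5.0):
            """Choose the next segment's quality, stepping up at most one level per decision
            to limit the abrupt quality oscillations studied in the paper."""
            if buffer_s < low_buffer_s:             # protect against stalling
                return max(current_idx - 1, 0)
            affordable = [i for i, r in enumerate(BITRATES_KBPS) if r <= safety * throughput_kbps]
            target = affordable[-1] if affordable else 0
            if target > current_idx:
                return current_idx + 1              # ramp up gradually
            return target

        # Example: pick_representation(throughput_kbps=4000, buffer_s=12.0, current_idx=1) -> 2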

  12. Multiple 2D video/3D medical image registration algorithm

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    2000-06-01

    In this paper we propose a novel method to register two or more video images to a 3D surface model. The potential applications of such a registration method include image-guided surgery, high-precision radiotherapy, robotics and computer vision. Registration is performed by optimizing a similarity measure with respect to the pose parameters. The similarity measure is based on 'photo-consistency' and computes, for each surface point, how consistent the corresponding video image information in each view is with a lighting model. We took four video views of a volunteer's face and used an independent method to reconstruct a surface that was intrinsically registered to the four views. In addition, we extracted a skin surface from the volunteer's MR scan. The surfaces were misregistered from a gold-standard pose and our algorithm was used to register both types of surfaces to the video images. For the reconstructed surface, the mean 3D error was 1.53 mm. For the MR surface, the standard deviation of the pose parameters after registration ranged from 0.12 to 0.70 mm and degrees. The performance of the algorithm is accurate, precise and robust.
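
    To make the photo-consistency idea concrete, here is a crude, variance-based stand-in (assuming pinhole 3x4 camera matrices), not the similarity measure or optimizer used in the paper: project each surface point into every view and score how consistent the sampled colours are.

        import numpy as np

        def photo_consistency_cost(points, images, projections):
            """Lower is better: colour variance across the views at each visible surface point."""
            costs = []
            for X in points:                                   # X: 3D surface point
                Xh = np.append(np.asarray(X, float), 1.0)
                samples = []
                for img, P in zip(images, projections):        # P: assumed 3x4 camera matrix
                    u, v, s = P @ Xh
                    if s <= 0:                                 # point behind this camera
                        continue
                    ui, vi = int(round(u / s)), int(round(v / s))
                    h, w = img.shape[:2]
                    if 0 <= vi < h and 0 <= ui < w:
                        samples.append(np.asarray(img[vi, ui], float))
                if len(samples) >= 2:                          # visible in at least two views
                    costs.append(np.var(np.stack(samples), axis=0).mean())
            return float(np.mean(costs)) if costs else np.inf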

  13. Video retargeting for stereoscopic content under 3D viewing constraints

    NASA Astrophysics Data System (ADS)

    Chamaret, C.; Boisson, G.; Chevance, C.

    2012-03-01

    The imminent deployment of new devices such as TVs, tablets and smartphones supporting stereoscopic display creates a need for retargeting content. New devices bring their own aspect ratios and potentially small screen sizes. Aspect ratio conversion becomes mandatory, and an automatic solution is of high value, especially if it maximizes visual comfort. Some issues inherent to the 3D domain are considered in this paper: no vertical disparity, and no object with negative disparity (outward perception) on the border of the cropping window. A visual attention model is applied to each view and provides saliency maps of the most attractive pixels. Dedicated 3D retargeting correlates the 2D attention maps of the two views, as well as additional computed information, to determine the best cropping window. Specific constraints induced by the 3D experience influence the retargeted window through a computed map indicating objects that should not be cropped. Compared with the original 2.35:1 content, whose black stripes provide a limited 3D experience on a TV screen, the automatic cropping and exploitation of the full screen give a more immersive experience. The proposed system is fully automatic and ensures good final quality without missing parts that are fundamental to the global understanding of the scene. Eye-tracking data recorded on the stereoscopic content were compared against the retargeted window to verify that the most attractive areas lie inside the final video.
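
    A hedged sketch of the cropping-window search only (the visual attention model and the 3D disparity constraints are outside this snippet): slide a window of the target size over the combined saliency map and keep the position capturing the most salience. All names are illustrative.

        import numpy as np

        def best_crop_window(saliency, target_w, target_h, step=8):
            """Return (x, y) of the target_w x target_h window that maximizes enclosed saliency."""
            integral = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # summed-area table
            h, w = saliency.shape
            best, best_xy = -1.0, (0, 0)
            for y in range(0, h - target_h + 1, step):
                for x in range(0, w - target_w + 1, step):
                    s = (integral[y + target_h, x + target_w] - integral[y, x + target_w]
                         - integral[y + target_h, x] + integral[y, x])
                    if s > best:
                        best, best_xy = s, (x, y)
            return best_xy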

  14. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
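
    A rough, hypothetical sketch of the edge-sharpness comparison only (disparity estimation and the occlusion handling described in the paper are omitted): warp the right view's gradients to the left view using a disparity map and count edge pixels whose sharpness differs strongly between views.

        import cv2
        import numpy as np

        def edge_sharpness_mismatch(left_gray, right_gray, disparity, edge_thresh=100.0):
            """Fraction of strong left-view edge pixels whose gradient magnitude differs by >2x
            from the corresponding right-view location (a cue for conversion artifacts)."""
            gl = cv2.magnitude(cv2.Sobel(left_gray, cv2.CV_32F, 1, 0),
                               cv2.Sobel(left_gray, cv2.CV_32F, 0, 1))
            gr = cv2.magnitude(cv2.Sobel(right_gray, cv2.CV_32F, 1, 0),
                               cv2.Sobel(right_gray, cv2.CV_32F, 0, 1))
            h, w = left_gray.shape
            xs = np.clip(np.arange(w)[None, :] - disparity.astype(int), 0, w - 1)
            gr_warped = gr[np.arange(h)[:, None], xs]          # right gradients at left coordinates
            edges = gl > edge_thresh
            if not edges.any():
                return 0.0
            ratio = (gl[edges] + 1e-3) / (gr_warped[edges] + 1e-3)
            return float(np.mean((ratio > 2.0) | (ratio < 0.5)))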

  15. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
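
    For context (an illustrative calculation, not taken from the paper), Amdahl's law relates the reported wall-clock speedup on four processors to the fraction p of the run that is multitasked:

        S(p, N) = \frac{1}{(1 - p) + p/N}, \qquad
        S(p, 4) > 3 \;\Longrightarrow\; (1 - p) + \frac{p}{4} < \frac{1}{3}
        \;\Longrightarrow\; p > \frac{8}{9} \approx 0.89,

    so a speedup above three implies that roughly 89% or more of the work runs in parallel.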

  16. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    SciTech Connect

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-07-01

    The present situation of thermal-hydraulics codes and 3D neutronics codes is briefly described and general considerations for the coupling of these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.

  17. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  18. Statistical bias in 3-D reconstruction from a monocular video.

    PubMed

    Roy-Chowdhury, Amit K; Chellappa, Rama

    2005-08-01

    The present state-of-the-art in computing the error statistics in three-dimensional (3-D) reconstruction from video concentrates on estimating the error covariance. A different source of error which has not received much attention is the fact that the reconstruction estimates are often significantly statistically biased. In this paper, we derive a precise expression for the bias in the depth estimate, based on the continuous (differentiable) version of structure from motion (SfM). Many SfM algorithms, or certain portions of them, can be posed in a linear least-squares (LS) framework Ax = b. Examples include initialization procedures for bundle adjustment or algorithms that alternately estimate depth and camera motion. It is a well-known fact that the LS estimate is biased if the system matrix A is noisy. In SfM, the matrix A contains point correspondences, which are always difficult to obtain precisely; thus, it is expected that the structure and motion estimates in such a formulation of the problem would be biased. Existing results on the minimum achievable variance of the SfM estimator are extended by deriving a generalized Cramer-Rao lower bound. A detailed analysis of the effect of various camera motion parameters on the bias is presented. We conclude by presenting the effect of bias compensation on reconstructing 3-D face models from rendered images. PMID:16121454
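
    To make the bias mechanism concrete, a standard errors-in-variables argument (sketched here in general form, not the paper's SfM-specific derivation): with a noisy system matrix A = A_0 + \Delta A and noise-free data b = A_0 x_0, the least-squares estimate

        \hat{x} = (A^{\top} A)^{-1} A^{\top} b

    satisfies, to first order with E[\Delta A] = 0 and E[\Delta A^{\top} \Delta A] = m \sigma^2 I (m rows),

        E[\hat{x}] \approx (A_0^{\top} A_0 + m \sigma^2 I)^{-1} A_0^{\top} A_0 \, x_0 \neq x_0,

    i.e., the estimate is attenuated even though the correspondence noise has zero mean.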

  19. Real-time 3D video conference on generic hardware

    NASA Astrophysics Data System (ADS)

    Desurmont, X.; Bruyelle, J. L.; Ruiz, D.; Meessen, J.; Macq, B.

    2007-02-01

    Nowadays, video conferencing is becoming more and more advantageous because of the economic and ecological costs of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous teleimmersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image processing part should run in real time. We propose to analyze the whole system. It can be split into different services such as central processing unit (CPU), graphical rendering, direct memory access (DMA), and communications through the network. Most of the processing is done by the CPU resource. It is composed of the 3D reconstruction and the detection and tracking of faces from the video stream. However, the processing needs to be parallelized in several threads that have as few dependencies as possible. In this paper, we present these issues and the way we deal with them.

  20. Three-dimensional subband coding of video.

    PubMed

    Podilchuk, C I; Jayant, N S; Farvardin, N

    1995-01-01

    We describe and show the results of video coding based on a three-dimensional (3-D) spatio-temporal subband decomposition. The results include a 1-Mbps coder based on a new adaptive differential pulse code modulation scheme (ADPCM) and adaptive bit allocation. This rate is useful for video storage on CD-ROM. Coding results are also shown for a 384-kbps rate that are based on ADPCM for the lowest frequency band and a new form of vector quantization (geometric vector quantization (GVQ)) for the data in the higher frequency bands. GVQ takes advantage of the inherent structure and sparseness of the data in the higher bands. Results are also shown for a 128-kbps coder that is based on an unbalanced tree-structured vector quantizer (UTSVQ) for the lowest frequency band and GVQ for the higher frequency bands. The results are competitive with traditional video coding techniques and provide the motivation for investigating the 3-D subband framework for different coding schemes and various applications. PMID:18289965
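
    An illustrative sketch of the spatio-temporal decomposition idea using the simplest (Haar) filters rather than the paper's filter bank: a temporal split of a frame pair into low/high bands, followed by one spatial level on each temporal band. Frame dimensions are assumed to be even.

        import numpy as np

        def haar_2d(band):
            """One level of 2D Haar analysis: returns (LL, LH, HL, HH) subbands."""
            a = (band[:, 0::2] + band[:, 1::2]) / 2.0      # horizontal low
            d = (band[:, 0::2] - band[:, 1::2]) / 2.0      # horizontal high
            ll, lh = (a[0::2, :] + a[1::2, :]) / 2.0, (a[0::2, :] - a[1::2, :]) / 2.0
            hl, hh = (d[0::2, :] + d[1::2, :]) / 2.0, (d[0::2, :] - d[1::2, :]) / 2.0
            return ll, lh, hl, hh

        def spatiotemporal_subbands(frame_a, frame_b):
            """Temporal Haar on a frame pair, then spatial Haar on each temporal band."""
            t_low = (frame_a + frame_b) / 2.0
            t_high = (frame_a - frame_b) / 2.0
            return haar_2d(t_low), haar_2d(t_high)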

  1. Image quality of up-converted 2D video from frame-compatible 3D video

    NASA Astrophysics Data System (ADS)

    Speranza, Filippo; Tam, Wa James; Vázquez, Carlos; Renaud, Ronald; Blanchfield, Phil

    2011-03-01

    In the stereoscopic frame-compatible format, the separate high-definition left and high-definition right views are reduced in resolution and packed to fit within the same video frame as a conventional two-dimensional high-definition signal. This format has been suggested for 3DTV since it does not require additional transmission bandwidth and entails only small changes to the existing broadcasting infrastructure. In some instances, the frame-compatible format might be used to deliver both 2D and 3D services, e.g., for over-the-air television services. In those cases, the video quality of the 2D service is bound to decrease, since the 2D signal will have to be generated by up-converting one of the two views. In this study, we investigated such loss by measuring the perceptual image quality of 1080i and 720p up-converted video as compared to that of full-resolution original 2D video. The video was encoded with either an MPEG-2 or an H.264/AVC codec at different bit rates and presented for viewing with either no polarized glasses (2D viewing mode) or with polarized glasses (3D viewing mode). The results confirmed a loss of video quality in the up-converted 2D video material. The loss due to the sampling processes inherent to the frame-compatible format was rather small for both 1080i and 720p video formats; the loss became more substantial with encoding, particularly MPEG-2 encoding. The 3D viewing mode provided higher quality ratings, possibly because the visibility of the degradations was reduced.
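
    A minimal sketch of the up-conversion for the side-by-side packing case (the packing geometry and the interpolation filter are assumptions, not those used in the study):

        import cv2

        def upconvert_side_by_side(frame, use_left=True):
            """Recover a full-resolution 2D picture from a side-by-side frame-compatible frame
            by taking one half-width view and rescaling it back to full width."""
            h, w = frame.shape[:2]
            view = frame[:, : w // 2] if use_left else frame[:, w // 2 :]
            return cv2.resize(view, (w, h), interpolation=cv2.INTER_LANCZOS4)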

  2. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  3. RELAP5-3D Code Validation for RBMK Phenomena

    SciTech Connect

    Fisher, James Ebberly

    1999-09-01

    The RELAP5-3D thermal-hydraulic code was assessed against Japanese Safety Experiment Loop (SEL) and Heat Transfer Loop (HTL) tests. These tests were chosen because the phenomena present are applicable to analyses of Russian RBMK reactor designs. The assessment cases included parallel channel flow fluctuation tests at reduced and normal water levels, a channel inlet pipe rupture test, and a high power, density wave oscillation test. The results showed that RELAP5-3D has the capability to adequately represent these RBMK-related phenomena.

  4. RELAP5-3D code validation for RBMK phenomena

    SciTech Connect

    Fisher, J.E.

    1999-09-01

    The RELAP5-3D thermal-hydraulic code was assessed against Japanese Safety Experiment Loop (SEL) and Heat Transfer Loop (HTL) tests. These tests were chosen because the phenomena present are applicable to analyses of Russian RBMK reactor designs. The assessment cases included parallel channel flow fluctuation tests at reduced and normal water levels, a channel inlet pipe rupture test, and a high power, density wave oscillation test. The results showed that RELAP5-3D has the capability to adequately represent these RBMK-related phenomena.

  5. Video coding with dynamic background

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, the computational time in ME & MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as a reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance compared to the MRFs techniques. It also has an inherent capability for scene change detection (SCD) and adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 coding with five reference frames and the two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
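
    The sketch below is a toy stand-in for the idea of an extra background reference frame: it maintains a running-average background (in place of the article's McFIS, which is built by a more elaborate dynamic background model) and, per 16x16 block, checks whether the previous frame or the background gives the lower SAD. The background model and all parameters are assumptions for illustration.

      import numpy as np

      def update_background(bg, frame, alpha=0.05):
          """Running-average background model (a crude stand-in for McFIS)."""
          return (1.0 - alpha) * bg + alpha * frame

      def block_sad(a, b):
          return float(np.abs(a.astype(float) - b.astype(float)).sum())

      def choose_reference(current, prev_frame, background, block=16):
          """For each block, pick the reference (previous frame or background) with lower SAD."""
          h, w = current.shape
          picks = []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  cur = current[y:y+block, x:x+block]
                  sad_prev = block_sad(cur, prev_frame[y:y+block, x:x+block])
                  sad_bg = block_sad(cur, background[y:y+block, x:x+block])
                  picks.append("prev" if sad_prev <= sad_bg else "background")
          return picks

      frames = [np.random.randint(0, 256, (64, 64)) for _ in range(10)]
      bg = frames[0].astype(float)
      for f in frames[1:]:
          bg = update_background(bg, f)
      picks = choose_reference(frames[-1], frames[-2], bg)
      print(picks.count("background"), "of", len(picks), "blocks prefer the background reference")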

  6. VISRAD, 3-D Target Design and Radiation Simulation Code

    NASA Astrophysics Data System (ADS)

    Golovkina, Viktoriya; Macfarlane, Joseph; Golovkin, Igor; Kulkarni, Subodh

    2014-10-01

    The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, LMJ, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling for laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. We will discuss recent improvements to the software package and plans for future developments.

  7. VISRAD, 3-D Target Design and Radiation Simulation Code

    NASA Astrophysics Data System (ADS)

    Li, Yingjie; Macfarlane, Joseph; Golovkin, Igor

    2015-11-01

    The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, LMJ, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling for laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. We will discuss recent improvements to the software package and plans for future developments.

  8. 3D Data Assimilation using VERB Diffusion Code

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Kondrashov, D. A.; Kellerman, A. C.; Subbotin, D.

    2012-12-01

    Significant progress has been made in recent years in the application of data assimilation tools to radiation belt research. Previous studies concentrated on the analysis of radial profiles of phase space density using multi-satellite measurements and radial transport models. In this study we present an analysis of the 3D phase space density using the VERB-3D code blended with CRRES observations by means of operator-splitting Kalman filtering. Assimilating electron fluxes at various energies and pitch angles into the model allows us to utilize a vast amount of data, including information on pitch-angle distributions and radial energy spectra. 3D data assimilation of the radiation belts allows us to differentiate between various acceleration and loss mechanisms. We present a reanalysis of the radiation belts and find tell-tale signatures of various physical processes.
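
    As a toy illustration of the assimilation step, the sketch below applies a point-wise Kalman analysis that blends a model forecast of phase space density with noisy observations, assuming diagonal error covariances. The operator-splitting scheme, the VERB-3D dynamics, and the CRRES data handling are not reproduced; all numbers are invented.

      import numpy as np

      def kalman_analysis(forecast, forecast_var, obs, obs_var):
          """Diagonal Kalman update: blend model forecast with observations point-wise."""
          gain = forecast_var / (forecast_var + obs_var)
          analysis = forecast + gain * (obs - forecast)
          analysis_var = (1.0 - gain) * forecast_var
          return analysis, analysis_var

      # Toy 1-D radial grid of (log) phase space density
      truth = np.linspace(1.0, 5.0, 20)
      forecast = truth + np.random.normal(0.0, 0.5, 20)   # imperfect model state
      obs = truth + np.random.normal(0.0, 0.2, 20)        # noisy "satellite" data
      analysis, _ = kalman_analysis(forecast, 0.25, obs, 0.04)
      print("forecast error:", np.abs(forecast - truth).mean())
      print("analysis error:", np.abs(analysis - truth).mean())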

  9. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced, 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is input using an intuitive, graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress as well as implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  10. Beam Optics Analysis — An Advanced 3D Trajectory Code

    NASA Astrophysics Data System (ADS)

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-01

    Calabazas Creek Research, Inc. has completed initial development of an advanced, 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is input using an intuitive, graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress as well as implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  11. Streamlining of the RELAP5-3D Code

    SciTech Connect

    Mesina, George L; Hykes, Joshua; Guillen, Donna Post

    2007-11-01

    RELAP5-3D is widely used by the nuclear community to simulate general thermal hydraulic systems and has proven to be so versatile that the spectrum of transient two-phase problems that can be analyzed has increased substantially over time. To accommodate the many new types of problems that are analyzed by RELAP5-3D, both the physics and numerical methods of the code have been continuously improved. In the area of computational methods and mathematical techniques, many upgrades and improvements have been made to decrease code run time and increase solution accuracy. These include vectorization, parallelization, use of improved equation solvers for thermal hydraulics and neutron kinetics, and incorporation of improved library utilities. In the area of applied nuclear engineering, expanded capabilities include boron and level tracking models, a radiation/conduction enclosure model, feedwater heater and compressor components, fluids and corresponding correlations for modeling Generation IV reactor designs, and coupling to computational fluid dynamics solvers. Ongoing and proposed future developments include improvements to the two-phase pump model, conversion to FORTRAN 90, and coupling to more computer programs. This paper summarizes the general improvements made to RELAP5-3D, with an emphasis on streamlining the code infrastructure for improved maintenance and development. With all these past, present and planned developments, it is necessary to modify the code infrastructure to incorporate modifications in a consistent and maintainable manner. Modifying a complex code such as RELAP5-3D to incorporate new models, upgrade numerics, and optimize existing code becomes more difficult as the code grows larger. The difficulty of this, as well as the chance of introducing errors, is significantly reduced when the code is structured. To streamline the code into a structured program, a commercial restructuring tool, FOR_STRUCT, was applied to the RELAP5-3D source files.

  12. Towards a 3D Space Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human-rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full 3D space shielding code is presented.

  13. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  14. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new 3D charged particle program, is available and in use for the design of complex, 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and has a robust mesh generator that is fully automatic. Complex problems can be set up, and analysis initiated, in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  15. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  16. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    SciTech Connect

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available.

  17. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The requirements regarding source data, their capture and their transfer for creating 3D scenes have not been defined yet. Accuracy issues for 3D video scenes used for measurement purposes can hardly ever be found in publications. The practicability of developing, researching and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis applications for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes meeting specified metric requirements is offered. A technique and methodological background are recommended for this technology, which is used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.

  18. FARGO3D: A New GPU-oriented MHD Code

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Pablo; Masset, Frédéric S.

    2016-03-01

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet-disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.

  19. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    compare surface-current distribution due to various initial excitation directions or electric field orientations. The program can accept up to 50 planes of field data consisting of a grid of 100 by 100 field points. These planes of data are user selectable and can be viewed individually or concurrently. With these preset limits, the program requires 55 megabytes of core memory to run. These limits can be changed in the header files to accommodate the available core memory of an individual workstation. An estimate of memory required can be made as follows: approximate memory in bytes equals (number of nodes times number of surfaces times 14 variables times bytes per word, typically 4 bytes per floating point) plus (number of field planes times number of nodes per plane times 21 variables times bytes per word). This gives the approximate memory size required to store the field and surface-current data. The total memory size is approximately 400,000 bytes plus the data memory size. The animation calculations are performed in real time at any user set time step. For Silicon Graphics Workstations that have multiple processors, this program has been optimized to perform these calculations on multiple processors to increase animation rates. The optimized program uses the SGI PFA (Power FORTRAN Accelerator) library. On single processor machines, the parallelization directives are seen as comments to the program and will have no effect on compilation or execution. MOM3D and EM-ANIMATE are written in FORTRAN 77 for interactive or batch execution on SGI series computers running IRIX 3.0 or later. The RAM requirements for these programs vary with the size of the problem being solved. A minimum of 30Mb of RAM is required for execution of EM-ANIMATE; however, the code may be modified to accommodate the available memory of an individual workstation. For EM-ANIMATE, twenty-four bit, double-buffered color capability is suggested, but not required. Sample executables and sample input and
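
    The memory rule of thumb quoted above can be written as a small helper function. The argument names below are invented for illustration; the constants (14 and 21 variables per word, 4 bytes per floating-point word, roughly 400,000 bytes of program overhead) are taken directly from the abstract.

      def em_animate_memory_bytes(n_nodes, n_surfaces, n_field_planes,
                                  nodes_per_plane, bytes_per_word=4):
          """Rough memory estimate following the abstract's rule of thumb."""
          surface_data = n_nodes * n_surfaces * 14 * bytes_per_word
          field_data = n_field_planes * nodes_per_plane * 21 * bytes_per_word
          return 400_000 + surface_data + field_data  # ~400 kB program overhead + data

      # Example: 10,000 nodes on 4 surfaces plus 50 planes of 100x100 field points
      print(em_animate_memory_bytes(10_000, 4, 50, 100 * 100) / 1e6, "MB")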

  20. Improving calibration of 3-D video oculography systems.

    PubMed

    Schreiber, Kai; Haslwanter, Thomas

    2004-04-01

    Eye movement recordings with video-based techniques have become very popular, as long as they are restricted to the horizontal and vertical movements of the eye. Reliable measurement of the torsional component of eye movements, which is especially important in the diagnosis and investigation of pathologies, has remained a coveted goal. One of the main reasons is unresolved technical difficulties in the analysis of video-based images of the eye. Based on simulations, we present solutions to two of the primary problems: a robust and reliable calibration of horizontal and vertical eye movement recordings, and the extraction of suitable iris patterns for the determination of the torsional eye position component.

  1. RHALE: A 3-D MMALE code for unstructured grids

    SciTech Connect

    Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.

    1993-08-01

    This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two- and three-dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.

  2. Variational Symplectic Orbit Code in 3-D Tokamak Geometry

    NASA Astrophysics Data System (ADS)

    Ellison, Charles; Qin, Hong; Tang, William M.

    2011-10-01

    Since advanced tokamak experiments - including ITER - are long-pulse systems, it is important to develop accurate numerical methods to track plasma dynamics over an extended temporal period. When attempting to model the motion of individual particles, standard integrators (e.g., 4th order Runge-Kutta) discretize the differential equations of motion but do not possess desired properties such as energy conservation. The variational symplectic integrator instead takes a different approach, minimizing the action of the guiding center motion to determine the iteration rules. Consequently, the Lagrangian symplectic structure is conserved, and the numerical energy error is bounded by a small number for all time-steps. In previous work, the theoretical basis for this method was introduced, but the implementation was for 2-D geometry. To address realistic experimental scenarios, the variational symplectic integrator has been implemented for 3-D tokamak geometry for the first time. Sample results will be presented and compared with those from standard Runge-Kutta-based 3-D tokamak orbit codes. This work was supported by the DOE contract # DE-AC02-09CH11466 and the DOE FES Fellowship.
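
    The bounded-energy property contrasted with Runge-Kutta-style integrators can be illustrated on a much simpler system than guiding-center motion. The sketch below is not the variational integrator of this work; it merely compares a symplectic leapfrog step with forward Euler on a harmonic oscillator, where the leapfrog energy error stays bounded while the Euler energy drifts.

      import numpy as np

      def leapfrog(q, p, dt, steps):
          """Symplectic leapfrog for H = p^2/2 + q^2/2: energy error stays bounded."""
          for _ in range(steps):
              p -= 0.5 * dt * q      # half kick (force = -q)
              q += dt * p            # drift
              p -= 0.5 * dt * q      # half kick
          return q, p

      def euler(q, p, dt, steps):
          """Non-symplectic forward Euler: energy drifts secularly."""
          for _ in range(steps):
              q, p = q + dt * p, p - dt * q
          return q, p

      energy = lambda q, p: 0.5 * (q * q + p * p)
      q0, p0, dt, steps = 1.0, 0.0, 0.05, 20000
      for name, integ in (("leapfrog", leapfrog), ("euler", euler)):
          q, p = integ(q0, p0, dt, steps)
          print(name, "relative energy error:", abs(energy(q, p) - 0.5) / 0.5)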

  3. Code System to Simulate 3D Tracer Dispersion in Atmosphere.

    2002-01-25

    Version 00 SHREDI is a shielding code system which executes removal-diffusion computations for bi-dimensional shields in r-z or x-y geometries. It may also deal with monodimensional problems (infinitely high cylinders or slabs). MESYST can simulate 3D tracer dispersion in the atmosphere. Three programs are part of this system: CRE_TOPO prepares the terrain data for MESYST. NOABL calculates three-dimensional free-divergence windfields over complex terrain. PAS computes tracer concentrations and depositions on a given domain. The purpose of this work is to develop a reliable simulation tool for pollutant atmospheric dispersion, which gives a realistic approach and allows one to compute pollutant concentrations over complex terrain with good accuracy. The fractional Brownian model, which furnishes more accurate concentration values, is introduced to calculate pollutant atmospheric dispersion. The model was validated against the SIESTA international experiments.

  4. 3D Convection-pulsation Simulations with the HERACLES Code

    NASA Astrophysics Data System (ADS)

    Felix, S.; Audit, E.; Dintrans, B.

    2015-10-01

    We present 3D simulations of the coupling between surface convection and pulsations due to the κ-mechanism in classical Cepheids at the red edge of the Hertzsprung-Russell diagram's instability strip. We show that 3D convection is less powerful than 2D convection and does not quench the radiative pulsations, leading to an efficient 3D κ-mechanism. Thus, the 3D instability strip is closer to the observed one than the 1D or 2D ones were.

  5. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.

  6. Does training with 3D videos improve decision-making in team invasion sports?

    PubMed

    Hohmann, Tanja; Obelöer, Hilke; Schlapkohl, Nele; Raab, Markus

    2016-01-01

    We examined the effectiveness of video-based decision training in national youth handball teams. Extending previous research, we tested in Study 1 whether a three-dimensional (3D) video training group would outperform a two-dimensional (2D) group. In Study 2, a 3D training group was compared to a control group and a group trained with a traditional tactic board. In both studies, training duration was 6 weeks. Performance was measured in a pre- to post-retention design. The tests consisted of a decision-making task measuring quality of decisions (first and best option) and decision time (time for first and best option). The results of Study 1 showed learning effects and revealed that the 3D video group made faster first-option choices than the 2D group, but differences in the quality of options were not pronounced. The results of Study 2 revealed learning effects for both training groups compared to the control group, and faster choices in the 3D group compared to both other groups. Together, the results show that 3D video training is the most useful tool for improving choices in handball, but only in reference to decision time and not decision quality. We discuss the usefulness of a 3D video tool for training of decision-making skills outside the laboratory or gym.

  7. Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding

    PubMed Central

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area was performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850
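
    The three-step pipeline described above can be sketched schematically as below, using random data, a random matrix standing in for the ICA-learned dictionary, a least-squares projection standing in for true sparse coding, and with the SVM classification step omitted. It only shows how a normalized histogram of projection coefficients becomes the feature vector; every dimension and threshold is an assumption.

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-in "dictionary" of basis movements (the paper learns these with ICA
      # on space-time volumes of 3D joint movements; here it is random).
      n_atoms, dim = 32, 90            # e.g. 30 joints x 3 coordinates per snippet
      dictionary = rng.standard_normal((n_atoms, dim))

      def sparse_histogram(volumes, dictionary, threshold=1.0):
          """Project movement snippets onto the dictionary and histogram the
          dominant coefficients (a crude stand-in for true sparse coding)."""
          coeffs, *_ = np.linalg.lstsq(dictionary.T, volumes.T, rcond=None)
          coeffs = coeffs.T                               # (n_snippets, n_atoms)
          hist = (np.abs(coeffs) > threshold).sum(axis=0).astype(float)
          return hist / max(hist.sum(), 1.0)              # normalized histogram feature

      # One activity clip = many densely sampled snippets of joint motion.
      clip = rng.standard_normal((200, dim))
      feature = sparse_histogram(clip, dictionary)
      print(feature.shape)   # (32,) -> would be fed to an SVM in the final step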

  8. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. In practice, video imaging is much easier to apply than still image shooting in IBM techniques because the latter requires thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and helps create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
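
    Two of the ideas mentioned, scoring frame sharpness and thinning the short-baseline sequence, can be sketched as below. The variance-of-Laplacian sharpness score, the fixed stride, and the threshold are assumptions for illustration and are not the selection criterion developed in the paper.

      import numpy as np

      def laplacian_variance(gray):
          """Sharpness score: variance of a simple 4-neighbour Laplacian."""
          lap = (-4.0 * gray[1:-1, 1:-1]
                 + gray[:-2, 1:-1] + gray[2:, 1:-1]
                 + gray[1:-1, :-2] + gray[1:-1, 2:])
          return float(lap.var())

      def select_frames(frames, stride=15, blur_threshold=5.0):
          """Keep roughly one frame per `stride` to cut short-baseline redundancy,
          skipping frames whose sharpness falls below the threshold."""
          kept = []
          for i in range(0, len(frames), stride):
              if laplacian_variance(frames[i]) >= blur_threshold:
                  kept.append(i)
          return kept

      # 300 synthetic "video frames"; a real pipeline would read decoded video frames.
      frames = [np.random.rand(120, 160) * 255 for _ in range(300)]
      print(select_frames(frames))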

  9. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    SciTech Connect

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  10. 3-D localization of gamma ray sources with coded apertures for medical applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.

    2015-09-01

    Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel or pinhole collimators. Coded aperture imaging is a well-known method for gamma ray source directional identification, applied mainly in astrophysics. The increase in efficiency due to the substitution of the collimators by coded masks renders the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniform Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study for the spatial localization of two point sources using coded aperture masks with rank 7 and 19.
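
    Once each camera has decoded a direction to the source, its 3-D position can be estimated by triangulation. The sketch below computes the midpoint of the shortest segment between the two viewing rays; the camera positions and source location are made-up numbers for illustration, and the MURA decoding itself is not shown.

      import numpy as np

      def triangulate(p1, d1, p2, d2):
          """Closest point between two rays p_i + t_i * d_i (least-squares midpoint)."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          # Solve for t1, t2 minimizing |(p1 + t1 d1) - (p2 + t2 d2)|^2
          a = np.array([[d1 @ d1, -d1 @ d2],
                        [d1 @ d2, -d2 @ d2]])
          b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
          t1, t2 = np.linalg.solve(a, b)
          return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

      # Two hypothetical camera positions and the source directions they decode
      source = np.array([5.0, 2.0, 30.0])
      cam1, cam2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
      est = triangulate(cam1, source - cam1, cam2, source - cam2)
      print(est)   # ~[5, 2, 30]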

  11. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel A. Lazerson, S. Sakakibara and Y. Suzuki

    2013-03-12

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) where the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the LHD.

  12. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel Aaron Lazerson

    2012-07-27

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device where the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the Large Helical Device (LHD).

  13. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic, and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrate video surveillance data with a 3D indoor model of the building and develop a single-human moving-path tracking method. We process the surveillance videos to detect single human moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The provided experiments with a single person have verified the effectiveness and robustness of the method.

  14. A 3D-Video-Based Computerized Analysis of Social and Sexual Interactions in Rats

    PubMed Central

    Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior. PMID:24205238

  15. A 3D-video-based computerized analysis of social and sexual interactions in rats.

    PubMed

    Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior. PMID:24205238

  16. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support, to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that the acquisition of 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration, and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position, location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.

  17. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students

    PubMed Central

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    Background: There is limited evidence about the distinct effect of 3D oral health education videos over conventional 2D video projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of 3D oral health educational videos among first-year dental students. Materials and Methods: Eighty first-year dental students were enrolled and divided into two groups (test and control). The test group was shown 3D animations and the control group regular 2D video projections pertaining to periodontal anatomy, etiology, presenting conditions, preventive measures and treatment of periodontal problems. The effect of the 3D animation was evaluated using a questionnaire consisting of 10 multiple choice questions given to all participants at baseline, immediately after and 1 month after the intervention. Clinical parameters such as Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at 1 month follow-up. Results: A significant difference in the post-intervention knowledge scores was found between the groups as assessed by unpaired t-test (p<0.001) at baseline, immediately after and after 1 month. At baseline, all the clinical parameters in both groups were similar and showed a significant reduction (p<0.001) after 1 month, whereas no significant difference was noticed post-intervention between the groups. Conclusion: 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The application of 3D animation also results in better visual comprehension for students and greater health care outcomes. PMID:26759805

  18. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as the remote controllers with motion sensing technology on the Nintendo Wii [1]. Video-based human-computer interaction (HCI) techniques in particular have been applied to games, and the representative game is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI techniques have the great benefit of releasing players from the intractable game controller. Moreover, in order to communicate between humans and computers, video-based HCI is very crucial since it is intuitive, easy to get, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the level of accuracy is heavily dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip. Here, a column corresponds to a human sub-body part and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D human motion-capture data matrix does not hold pixel values, but is closer to the human level of semantics.
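
    The column-selection idea described above can be shown with a tiny example; the channel layout, part names, and sampling rate below are hypothetical and do not reflect any particular motion-capture export format.

      import numpy as np

      # Hypothetical motion-capture matrix: rows = time frames, columns = channels.
      # Assume each body part owns 3 consecutive columns (x, y, z); names are made up.
      channels = {"pelvis": slice(0, 3), "right_wrist": slice(3, 6), "left_knee": slice(6, 9)}
      n_frames = 240                       # e.g. 2 s at 120 Hz
      motion = np.random.rand(n_frames, 9)

      def extract_part(motion, channels, part):
          """Extract one sub-body part's trajectory by selecting its columns."""
          return motion[:, channels[part]]   # shape (n_frames, 3)

      wrist = extract_part(motion, channels, "right_wrist")
      print(wrist.shape)                     # (240, 3): x, y, z over time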

  19. Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.

    2015-01-01

    The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar to or better than results using the SST turbulence model.

  20. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work for improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low-complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
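
    As a toy sketch of the early-termination idea (not the paper's probability model), the snippet below stops descending the CU coding tree once an assumed split probability, shifted by QP and content change, falls below a threshold. All features, weights, and thresholds are invented for illustration.

      def predict_max_depth(qp, content_change, split_prob):
          """Toy early-termination rule for the CU coding tree.

          split_prob[d] is an assumed probability that CUs at depth d are split,
          estimated from previously coded frames; qp and content_change shift it.
          """
          max_depth = 0
          for d, p in enumerate(split_prob):
              # Larger QP -> coarser blocks; larger content change -> finer blocks.
              adjusted = p - 0.01 * (qp - 32) + 0.5 * content_change
              if adjusted < 0.5:           # unlikely to split further: stop here
                  break
              max_depth = d + 1
          return max_depth                  # depths beyond this are never evaluated

      # Example: a calm sequence at high QP terminates early; a busy one goes deeper.
      print(predict_max_depth(qp=38, content_change=0.05, split_prob=[0.8, 0.5, 0.3]))
      print(predict_max_depth(qp=27, content_change=0.40, split_prob=[0.8, 0.5, 0.3]))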

  1. Real-time 3D visualization of volumetric video motion sensor data

    SciTech Connect

    Carlson, J.; Stansfield, S.; Shawver, D.; Flachs, G.M.; Jordan, J.B.; Bao, Z.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  2. 3D filtering technique in presence of additive noise in color videos implemented on DSP

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Palacios, Alfredo

    2014-05-01

    A filtering method for color videos contaminated by additive noise is presented. The proposed framework employs three filtering stages: spatial similarity filtering, neighboring frame denoising, and spatial post-processing smoothing. The difference from other state-of-the-art filtering methods is that this approach, based on fuzzy logic, analyses basic and related gradient values between neighboring pixels within a 7 × 7 sliding window in the vicinity of a central pixel in each of the RGB channels. Following this, the similarity measures between the analogous pixels in the color bands are taken into account during the denoising. Next, two neighboring video frames are analyzed together, estimating local motions between the frames using a block matching procedure. In the final stage, the edges and smoothed areas are processed differently in the current frame during the post-processing filtering. Numerous simulation results confirm that this 3D fuzzy filter performs better than other state-of-the-art methods, such as 3D-LLMMSE, WMVCE, RFMDAF, FDARTF G, VBM3D and NLM, in terms of objective criteria (PSNR, MAE, NCD and SSIM) as well as subjective perception via the human vision system on different color videos. An efficiency analysis of the designed and other mentioned filters has been performed on the DSPs TMS320DM642 and TMS320DM648 by Texas Instruments through MATLAB and the Simulink module, showing that the novel 3D fuzzy filter can be used in real-time processing applications.
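
    A minimal stand-in for the spatial similarity stage is sketched below: each pixel is replaced by a weighted average over a 7 × 7 window whose weights fall off with the absolute difference to the central pixel, loosely mimicking a fuzzy membership on local gradients. The actual membership functions, the inter-channel similarity step, and the temporal stages of the published filter are not reproduced, and the scale parameter is an assumption.

      import numpy as np

      def similarity_filter(channel, window=7, scale=20.0):
          """Weighted average over a window; weights fall off with |difference to center|
          so that edges are preserved while flat noisy regions are smoothed."""
          pad = window // 2
          padded = np.pad(channel.astype(float), pad, mode="edge")
          out = np.empty_like(channel, dtype=float)
          h, w = channel.shape
          for y in range(h):
              for x in range(w):
                  patch = padded[y:y + window, x:x + window]
                  center = padded[y + pad, x + pad]
                  weights = np.exp(-np.abs(patch - center) / scale)  # fuzzy-like membership
                  out[y, x] = (weights * patch).sum() / weights.sum()
          return out

      noisy = np.clip(np.full((64, 64), 128.0) + np.random.normal(0, 15, (64, 64)), 0, 255)
      denoised = similarity_filter(noisy)
      print(noisy.std(), denoised.std())   # the filtered channel has lower spread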

  3. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  4. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  5. MOM3D method of moments code theory manual

    NASA Technical Reports Server (NTRS)

    Shaeffer, John F.

    1992-01-01

    MOM3D is a FORTRAN algorithm that solves Maxwell's equations as expressed via the electric field integral equation for the electromagnetic response of open or closed three dimensional surfaces modeled with triangle patches. Two joined triangles (couples) form the vector current unknowns for the surface. Boundary conditions are for perfectly conducting or resistive surfaces. The impedance matrix represents the fundamental electromagnetic interaction of the body with itself. A variety of electromagnetic analysis options are possible once the impedance matrix is computed including backscatter radar cross section (RCS), bistatic RCS, antenna pattern prediction for user specified body voltage excitation ports, RCS image projection showing RCS scattering center locations, surface currents excited on the body as induced by specified plane wave excitation, and near field computation for the electric field on or near the body.

  6. 3D unstructured-mesh radiation transport codes

    SciTech Connect

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: $S_n$ (discrete-ordinates), $P_n$ (spherical harmonics), and $SP_n$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard $S_n$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  7. 3D MPEG-2 video transmission over broadband network and broadcast channels

    NASA Astrophysics Data System (ADS)

    Gagnon, Gilles; Subramaniam, Suganthan; Vincent, Andre

    2001-06-01

    This paper explores the transmission of MPEG-2 compressed stereoscopic (3-D) video over broadband networks and digital television (DTV) broadcast channels. A system has been developed to perform 3-D (stereoscopic) MPEG-2 video encoding, transmission and decoding over broadband networks in real-time. Such a system can benefit applications where a depiction of the relative positions of objects in 3-dimensional space is critical, by providing visual cues along the sight axis. Applications such as tele-medicine, remote surveillance, tele-education, entertainment and others could benefit from such a system since it conveys an added viewing experience. For simplicity and cost efficiency, the system is kept as simple as possible while offering a certain degree of control over the encoding and decoding platforms. Data exchange is done with TCP/IP for control between the server and client and with UDP/IP for the MPEG-2 transport streams delivered to the client. Parameters such as encoding rate can be set independently for the left and right viewing channels to satisfy network bandwidth restrictions, while maintaining satisfactory quality. Using this system, transmission of stereoscopic MPEG-2 transport streams (video and audio) has been performed over a 155 Mbps ATM network shared with other video transactions between server and clients. Preliminary results have shown that the system is reasonably robust to network impairments, making it usable in relatively loaded networks. An innovative technique for broadcasting Standard Definition Television 3-D video using an ATSC compatible encoding and broadcasting system is also presented. This technique requires a simple video multiplexer before the ATSC encoding process, and a slight modification at the receiver after the ATSC decoding.
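
    As a minimal illustration of the control/data split described above (TCP/IP for control, UDP/IP for the MPEG-2 transport streams), the sketch below opens the two sockets on the client side. The host, ports, command string and packets-per-datagram value are assumptions for illustration, not values from the paper.

    ```python
    # Minimal sketch of the control/data split: control messages over TCP, MPEG-2
    # transport-stream packets over UDP.  Host, ports and command are hypothetical.
    import socket

    SERVER = "192.0.2.10"        # hypothetical server address
    CTRL_PORT, DATA_PORT = 5000, 5004
    TS_PACKET = 188              # size of one MPEG-2 transport-stream packet in bytes

    # Client side: request a stream over TCP ...
    ctrl = socket.create_connection((SERVER, CTRL_PORT))
    ctrl.sendall(b"PLAY left_rate=4000000 right_rate=4000000\n")  # hypothetical command

    # ... then receive UDP datagrams carrying TS packets (here assumed 7 per datagram).
    data = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data.bind(("", DATA_PORT))
    datagram, _addr = data.recvfrom(7 * TS_PACKET)
    print(f"received {len(datagram)} bytes ({len(datagram) // TS_PACKET} TS packets)")
    ```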

  8. 3D visualization for the MARS14 Code

    SciTech Connect

    Rzepecki, Jaroslaw P.; Kostin, Mikhail A; Mokhov, Nikolai V.

    2003-01-23

    A new three-dimensional visualization engine has been developed for the MARS14 code system. It is based on the OpenInventor graphics library and integrated with the MARS built-in two-dimensional Graphical-User Interface, MARS-GUI-SLICE. The integrated package allows thorough checking of complex geometry systems and their fragments, materials, magnetic fields, particle tracks, along with a visualization of calculated 2-D histograms. The algorithms and their optimization are described for two geometry classes along with examples in accelerator and detector applications.

  9. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. Systemic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity. PMID:24991752

  10. Coarse integral holography approach for real 3D color video displays.

    PubMed

    Chen, J S; Smithwick, Q Y J; Chu, D P

    2016-03-21

    A colour holographic display is considered the ultimate apparatus to provide the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that then are used to reproduce the optical wavefront. The main challenge at present is for the existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product high-bandwidth spatial light modulator to display dynamic holograms with a large space-bandwidth-product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, and an adequate image size and viewing angle, as well as all relevant 3D cues. Our approach is scalable and the prototype can achieve even better performance with continuing advances in hardware components. PMID:27136858

  11. Coarse integral holography approach for real 3D color video displays.

    PubMed

    Chen, J S; Smithwick, Q Y J; Chu, D P

    2016-03-21

    A colour holographic display is considered the ultimate apparatus to provide the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that then are used to reproduce the optical wavefront. The main challenge at present is for the existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product high-bandwidth spatial light modulator to display dynamic holograms with a large space-bandwidth-product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, and an adequate image size and viewing angle, as well as all relevant 3D cues. Our approach is scalable and the prototype can achieve even better performance with continuing advances in hardware components.

  12. Highly accurate video coordinate generation for automatic 3-D trajectory calculation

    NASA Astrophysics Data System (ADS)

    Macleod, A.; Morris, Julian R. W.; Lyster, M.

    1990-08-01

    Most TV-based motion analysis systems, including the original version of VICON, produce 3D coordinates by combining pre-tracked 2D trajectories from each camera. The latest version of the system, VICON-VX, performs fully automatic 3D trajectory calculation using the Geometric Self Identification (GSI) technique. This is achieved by matching unsorted 2D image coordinates from all cameras, looking for intersecting marker 'rays', and matching intersections into 3D trajectories. Effective GSI, with low false-positive intersection rates, is only possible with highly accurate 2D data, produced by stable, high-resolution coordinate generators, and incorporating appropriate compensation for lens distortions. Data capture software and hardware have been completely redesigned to achieve this accuracy, together with higher throughput rates and better resistance to errors. In addition, a new ADC facility has been incorporated to allow very high speed analog data acquisition, synchronised with video measurements.
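
    The geometric core of matching marker rays across cameras can be sketched as finding the 3D point closest to a set of camera rays in the least-squares sense. The snippet below is illustrative only (camera positions and directions are made up) and is not VICON's implementation.

    ```python
    # Minimal sketch of intersecting two (nearly intersecting) camera rays in 3D:
    # find the point minimizing the summed squared distance to both rays.
    import numpy as np

    def closest_point_to_rays(origins, directions):
        """origins, directions: (N, 3) arrays; each ray is o + t*d."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Two cameras at assumed positions, both "seeing" the marker at roughly (1, 2, 3):
    origins = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
    directions = np.array([[1.0, 2.0, 3.0], [-3.0, 2.0, 3.0]])
    print(closest_point_to_rays(origins, directions))   # ~ [1. 2. 3.]
    ```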

  13. ROAR: A 3-D tethered rocket simulation code

    SciTech Connect

    York, A.R. II; Ludwigsen, J.S.

    1992-04-01

    A high-velocity impact testing technique, utilizing a tethered rocket, is being developed at Sandia National Laboratories. The technique involves tethering a rocket assembly to a pivot location and flying it in a semicircular trajectory to deliver the rocket and payload to an impact target location. Integral to developing this testing technique is the parallel development of accurate simulation models. An operational computer code, called ROAR (Rocket-on-a-Rope), has been developed to simulate the three-dimensional transient dynamic behavior of the tether and motor/payload assembly. This report presents a discussion of the parameters modeled, the governing set of equations, the through-time integration scheme, and the input required to set up a model. Also included is a sample problem and a comparison with experimental results.

  14. Video lensfree microscopy of 2D and 3D culture of cells

    NASA Astrophysics Data System (ADS)

    Allier, C. P.; Vinjimore Kesavan, S.; Coutard, J.-G.; Cioni, O.; Momey, F.; Navarro, F.; Menneteau, M.; Chalmond, B.; Obeid, P.; Haguet, V.; David-Watine, B.; Dubrulle, N.; Shorte, S.; van der Sanden, B.; Di Natale, C.; Hamard, L.; Wion, D.; Dolega, M. E.; Picollet-D'hahan, N.; Gidrol, X.; Dinten, J.-M.

    2014-03-01

    Innovative imaging methods are continuously developed to investigate the function of biological systems at the microscopic scale. As an alternative to advanced cell microscopy techniques, we are developing lensfree video microscopy that opens new ranges of capabilities, in particular at the mesoscopic level. Lensfree video microscopy allows the observation of a cell culture in an incubator over a very large field of view (24 mm²) for extended periods of time. As a result, a large set of comprehensive data can be gathered with strong statistics, both in space and time. Video lensfree microscopy can capture images of cells cultured in various physical environments. We emphasize two different case studies: the quantitative analysis of the spontaneous network formation of HUVEC endothelial cells and, by coupling lensfree microscopy with 3D cell culture, the study of epithelial tissue morphogenesis. In summary, we demonstrate that lensfree video microscopy is a powerful tool to conduct cell assays in 2D and 3D culture experiments. The applications are in the realms of fundamental biology, tissue regeneration, drug development and toxicology studies.

  15. Extending ALE3D, an Arbitrarily Connected hexahedral 3D Code, to Very Large Problem Size (U)

    SciTech Connect

    Nichols, A L

    2010-12-15

    As the number of compute units increases on the ASC computers, the prospect of running previously unimaginably large problems is becoming a reality. In an arbitrarily connected 3D finite element code, like ALE3D, one must provide a unique identification number for every node, element, face, and edge. This is required for a number of reasons, including defining the global connectivity array required for domain decomposition, identifying appropriate communication patterns after domain decomposition, and determining the appropriate load locations for implicit solvers, for example. In most codes, the unique identification number is defined as a 32-bit integer. Thus the maximum value available is 2^31, or roughly 2.1 billion. For a 3D geometry consisting of arbitrarily connected hexahedral elements, there are approximately 3 faces for every element, and 3 edges for every node. Since the nodes and faces need id numbers, using 32-bit integers puts a hard limit on the number of elements in a problem at roughly 700 million. The first solution to this problem would be to replace 32-bit signed integers with 32-bit unsigned integers. This would increase the maximum size of a problem by a factor of 2. This provides some head room, but almost certainly not one that will last long. Another solution would be to replace all 32-bit int declarations with 64-bit long long declarations. (long is either a 32-bit or a 64-bit integer, depending on the OS). The problem with this approach is that there are only a few arrays that actually need the extended size, and thus this would increase the size of the problem unnecessarily. In a future computing environment where CPUs are abundant but memory relatively scarce, this is probably the wrong approach. Based on these considerations, we have chosen to replace only the global identifiers with the appropriate 64-bit integer. The problem with this approach is finding all the places where data that is specified as a 32-bit integer needs to be
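
    A back-of-the-envelope check of the roughly-700-million-element limit quoted above, assuming the ~3 faces per hexahedral element estimate from the abstract:

    ```python
    # Back-of-the-envelope check: with ~3 faces per hexahedral element, face IDs
    # exhaust a signed 32-bit integer first.
    faces_per_element = 3
    limit_signed_32   = (2**31 - 1) // faces_per_element
    limit_unsigned_32 = (2**32 - 1) // faces_per_element
    print(f"signed 32-bit ID limit:   ~{limit_signed_32 / 1e6:.0f} million elements")
    print(f"unsigned 32-bit ID limit: ~{limit_unsigned_32 / 1e6:.0f} million elements")
    # signed:   ~716 million elements (the "roughly 700 million" above)
    # unsigned: ~1432 million elements (the factor-of-2 head room)
    ```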

  16. 3D Direct Simulation Monte Carlo Code Which Solves for Geometrics

    1998-01-13

    Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.

  17. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    PubMed

    Jacob, J Augustin; Kumar, N Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency is considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation.
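
    Because the 3D DCT is separable, the volume transform reduces to three passes of a 1D transform, one along each axis. The sketch below uses the floating-point DCT-II from SciPy as a stand-in for the paper's integer approximation; the integer kernel defined by the set [10, 9, 6, 2, 3, 1, 1] is not reproduced here.

    ```python
    # Sketch of a separable 3D DCT on an 8x8x8 video cube: apply a 1-D transform
    # along each axis in turn (floating-point DCT-II as a stand-in for the integer DCT).
    import numpy as np
    from scipy.fft import dct, idct

    def dct3(cube):
        out = cube.astype(float)
        for axis in range(3):                       # one 1-D pass per axis
            out = dct(out, type=2, norm="ortho", axis=axis)
        return out

    def idct3(coeffs):
        out = coeffs
        for axis in range(3):
            out = idct(out, type=2, norm="ortho", axis=axis)
        return out

    cube = np.random.rand(8, 8, 8)                  # one 8x8x8 block of a video volume
    coeffs = dct3(cube)
    print(np.allclose(idct3(coeffs), cube))         # True: the transform is invertible
    ```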

  18. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    Jacob, J. Augustin; Kumar, N. Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency is considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  19. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    PubMed

    Jacob, J Augustin; Kumar, N Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency is considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  20. Wall-touching kink mode calculations with the M3D code

    SciTech Connect

    Breslau, J. A.; Bhattacharjee, A.

    2015-06-15

    This paper seeks to address a controversy regarding the applicability of the 3D nonlinear extended MHD code M3D [W. Park et al., Phys. Plasmas 6, 1796 (1999)] and similar codes to calculations of the electromagnetic interaction of a disrupting tokamak plasma with the surrounding vessel structures. M3D is applied to a simple test problem involving an external kink mode in an ideal cylindrical plasma, used also by the Disruption Simulation Code (DSC) as a model case for illustrating the nature of transient vessel currents during a major disruption. While comparison of the results with those of the DSC is complicated by effects arising from the higher dimensionality and complexity of M3D, we verify that M3D is capable of reproducing both the correct saturation behavior of the free boundary kink and the “Hiro” currents arising when the kink interacts with a conducting tile surface interior to the ideal wall.

  1. ROI-preserving 3D video compression method utilizing depth information

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan

    2015-09-01

    Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and is compatible with transmission over existing channels, but can also provide technical support for advanced 3D video compression to some extent. This paper proposes an ROI-preserving compression scheme to further improve the visual quality at a limited bit rate. According to the connection between the focus of the Human Visual System (HVS) and depth information, the region of interest (ROI) can be selected automatically via depth map processing. The main improvement over the common method is that a mean-shift-based segmentation is applied to the depth map before foreground ROI selection to keep the integrity of the scene. Besides, the sensitive areas along the edges are also protected. Spatio-temporal filtering adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is better preserved and more consistent with subjective perception, and that the proposed method retains key high-frequency information more effectively while the bit rate is reduced.
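
    The general idea of depth-guided ROI protection, keeping the foreground selected from the depth map at full fidelity and low-pass filtering the rest before encoding, can be sketched as below. Simple depth thresholding stands in for the paper's mean-shift segmentation, and the threshold and filter strength are arbitrary illustration values.

    ```python
    # Minimal sketch of depth-guided ROI protection: pixels closer than a threshold
    # form the ROI; everything else is low-pass filtered before encoding.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def roi_prefilter(texture, depth, near_threshold=0.4, sigma=2.0):
        """texture: (H, W) luma in [0, 1]; depth: (H, W) normalized, 0 = near, 1 = far."""
        roi_mask = depth < near_threshold                 # foreground = region of interest
        blurred = gaussian_filter(texture, sigma=sigma)   # suppress non-ROI high frequencies
        return np.where(roi_mask, texture, blurred), roi_mask

    rng = np.random.default_rng(0)
    texture = rng.random((240, 320))
    depth = np.linspace(0.0, 1.0, 320)[None, :].repeat(240, axis=0)  # toy depth ramp
    filtered, mask = roi_prefilter(texture, depth)
    print(mask.mean())   # fraction of pixels kept at full fidelity
    ```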

  2. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practices. The proposed cross-modality calibration procedure operates this way: Before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (except for a camera calibration pattern, i.e., a checkerboard that can be printed on any laser or inkjet printer).
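
    Recovering the fixed rigid transform between the magnetic tracker and the scope camera from paired tracker poses and checkerboard poses is an instance of hand-eye calibration. The sketch below uses OpenCV's generic calibrateHandEye routine on synthetic poses purely to illustrate the geometry; the paper's own solver may differ.

    ```python
    # Hand-eye style illustration: recover the camera-to-tracker transform from
    # paired tracker poses (tracker -> base) and checkerboard poses (board -> camera).
    # Poses are synthesized so the snippet runs without real data.
    import cv2
    import numpy as np

    rng = np.random.default_rng(1)

    def random_pose(scale=0.5):
        """Random rigid transform as a 4x4 homogeneous matrix."""
        R, _ = cv2.Rodrigues(rng.normal(size=3) * scale)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = rng.normal(size=3)
        return T

    X_true = random_pose()        # unknown camera-to-tracker transform to be recovered
    board2base = random_pose()    # checkerboard fixed in the tracker (base) frame

    R_trk2base, t_trk2base, R_board2cam, t_board2cam = [], [], [], []
    for _ in range(10):           # ten synthetic calibration views
        trk2base = random_pose()
        board2cam = np.linalg.inv(X_true) @ np.linalg.inv(trk2base) @ board2base
        R_trk2base.append(trk2base[:3, :3])
        t_trk2base.append(trk2base[:3, 3].reshape(3, 1))
        R_board2cam.append(board2cam[:3, :3])
        t_board2cam.append(board2cam[:3, 3].reshape(3, 1))

    R_est, t_est = cv2.calibrateHandEye(R_trk2base, t_trk2base, R_board2cam, t_board2cam)
    print(np.allclose(R_est, X_true[:3, :3], atol=1e-5))
    print(np.allclose(t_est.ravel(), X_true[:3, 3], atol=1e-5))
    ```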

  3. Virtual bronchoscopic approach for combining 3D CT and endoscopic video

    NASA Astrophysics Data System (ADS)

    Sherbondy, Anthony J.; Kiraly, Atilla P.; Austin, Allen L.; Helferty, James P.; Wan, Shu-Yen; Turlington, Janice Z.; Yang, Tao; Zhang, Chao; Hoffman, Eric A.; McLennan, Geoffrey; Higgins, William E.

    2000-04-01

    To improve the care of lung-cancer patients, we are devising a diagnostic paradigm that ties together three-dimensional (3D) high-resolution computed-tomographic (CT) imaging and bronchoscopy. The system expands upon the new concept of virtual endoscopy that has seen recent application to the chest, colon, and other anatomical regions. Our approach applies computer-graphics and image-processing tools to the analysis of 3D CT chest images and complementary bronchoscopic video. It assumes a two-stage assessment of a lung-cancer patient. During Stage 1 (CT assessment), the physician interacts with a number of visual and quantitative tools to evaluate the patient's 'virtual anatomy' (3D CT scan). Automatic analysis gives navigation paths through major airways and to pre-selected suspect sites. These paths provide useful guidance during Stage-1 CT assessment. While interacting with these paths and other software tools, the user builds a multimedia Case Study, capturing telling snapshot views, movies, and quantitative data. The Case Study contains a report on the CT scan and also provides planning information for subsequent bronchoscopic evaluation. During Stage 2 (bronchoscopy), the physician uses (1) the original CT data, (2) software graphical tools, (3) the Case Study, and (4) a standard bronchoscopy suite to have an augmented vision for bronchoscopic assessment and treatment. To use the two data sources (CT and bronchoscopic video) simultaneously, they must be registered. We perform this registration using both manual interaction and an automated matching approach based on mutual information. We demonstrate our overall progress to date using human CT cases and CT-video from a bronchoscopy-training device.
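
    The mutual-information criterion used for the automated CT-video matching can be illustrated with a generic joint-histogram estimate between two grayscale images; the bin count below is an arbitrary choice, not a value from the paper.

    ```python
    # Generic mutual-information estimate between two grayscale images from their
    # joint histogram.
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """img_a, img_b: equally sized grayscale arrays."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                   # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    print(mutual_information(a, a))                         # high: image vs. itself
    print(mutual_information(a, rng.random((128, 128))))    # near zero for independent images
    ```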

  4. Three dimensional template matching segmentation method for motile cells in 3D+t video sequences.

    PubMed

    Pimentel, J A; Corkidi, G

    2010-01-01

    In this work, we describe a cell segmentation method oriented to deal with experimental data obtained from 3D+t microscopical volumes. The proposed segmentation technique takes advantage of the pattern of appearances exhibited by the objects (cells) in different focal planes, as a result of the object's translucent properties and its interaction with light. This information allows us to discriminate between cells and artifacts (dust and others) of equivalent size and shape that are present in the biological preparation. Using a simple correlation criterion, the method matches a 3D video template (extracted from a sample of cells) with the motile cells contained in the biological sample, obtaining a high rate of true positives while discarding artifacts. In this work, our analysis is focused on sea urchin spermatozoa cells but is applicable to many other microscopical structures having the same optical properties. PMID:21096252
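
    The template-matching step can be illustrated by correlating a small 3D (row, column, focal-plane) template against a larger volume and taking the peak. Plain, unnormalized correlation via SciPy is used for brevity; the paper's correlation criterion may differ.

    ```python
    # Sketch of 3D template matching: correlate a 3D template against a volume and
    # locate the peak response (unnormalized correlation for brevity).
    import numpy as np
    from scipy.signal import correlate

    rng = np.random.default_rng(0)
    volume = rng.normal(size=(20, 64, 64))          # focal planes x rows x cols
    template = rng.normal(size=(5, 9, 9))
    volume[8:13, 30:39, 40:49] += 5.0 * template    # bury the pattern at a known location

    score = correlate(volume, template, mode="valid")
    z, r, c = np.unravel_index(np.argmax(score), score.shape)
    print(z, r, c)                                   # -> 8 30 40, the embedded location
    ```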

  5. Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration

    2015-11-01

    The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where the application of very small (δB/B ~ 10^-3) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable and rapidly convergent equilibrium code for use on parallel distributed memory platforms. The parallelization task naturally splits into three distinct parts: 1) radial surfaces in the fixed-boundary part of the calculation; 2) the two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block tridiagonal matrix needed to compute the full (3D) pre-conditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 still nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  6. INS3D: An incompressible Navier-Stokes code in generalized three-dimensional coordinates

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.; Kwak, D.; Chang, J. L. C.

    1987-01-01

    The operation of the INS3D code, which computes steady-state solutions to the incompressible Navier-Stokes equations, is described. The flow solver utilizes a pseudocompressibility approach combined with an approximate factorization scheme. This manual describes key operating features to orient new users. This includes the organization of the code, description of the input parameters, description of each subroutine, and sample problems. Details for more extended operations, including possible code modifications, are given in the appendix.

  7. Dense 3D Reconstruction from High Frame-Rate Video Using a Static Grid Pattern.

    PubMed

    Sagawa, Ryusuke; Furukawa, Ryo; Kawasaki, Hiroshi

    2014-09-01

    Dense 3D reconstruction of fast moving objects could contribute to various applications such as body structure analysis, accident avoidance, and so on. In this paper, we propose a technique based on a one-shot scanning method, which reconstructs 3D shapes for each frame of a high frame-rate video capturing the scenes projected by a static pattern. To avoid instability of image processing, we restrict the number of colors used in the pattern to less than two. The proposed technique comprises (1) an efficient algorithm to eliminate ambiguity of projected parallel-line patterns by using intersection points, (2) a batch reconstruction algorithm of multiple frames by using spatio-temporal constraints, and (3) an efficient detection method of color-encoded grid pattern based on de Bruijn sequence. In the experiments, the line detection algorithm worked effectively and the dense reconstruction algorithm produces accurate and robust results. We also show the improved results by using temporal constraints. Finally, the dense reconstructions of fast moving objects in a high frame-rate video are presented. PMID:26352228
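
    The color-encoded grid relies on the defining property of a de Bruijn sequence: every window of a given length occurs exactly once, so a short run of observed stripe colors identifies its absolute position in the projected pattern. A standard generator is sketched below; the alphabet size and window length are illustrative, not the values used in the paper.

    ```python
    # Standard de Bruijn sequence generator (Lyndon-word concatenation).  Every window
    # of length n occurs exactly once (cyclically), which is why a short run of observed
    # stripe colors pins down its absolute position in the projected pattern.
    def de_bruijn(k, n):
        a = [0] * (k * n)
        seq = []
        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)
        db(1, 1)
        return seq

    colors = de_bruijn(k=3, n=3)      # 27 symbols over a 3-color alphabet
    print(colors)
    windows = [tuple(colors[i:i + 3]) for i in range(len(colors) - 2)]
    print(len(windows) == len(set(windows)))   # True: all linear windows are unique
    ```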

  8. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance, removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The 3D generated model provides warfighters additional situational awareness, tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data with Google Earth™ providing the imagery as well as real-world data, including data captured from an unmanned aerial vehicle flight.
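
    The core two-view structure-from-motion step, recovering the relative camera pose from matched image points and then triangulating, can be sketched with OpenCV as below. Synthetic points and an assumed intrinsic matrix are used so the snippet runs without real footage; the full pipeline used for tactical model generation involves many views, feature matching, and bundle adjustment.

    ```python
    # Two-view structure-from-motion sketch: essential matrix -> relative pose ->
    # triangulation, verified on synthetic data.
    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics

    X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(50, 3))          # 3D points in front of the cameras
    R_true, _ = cv2.Rodrigues(np.array([0.05, 0.3, 0.02]))
    t_true = np.array([[1.0], [0.1], [0.0]])

    def project(X, R, t):
        x = (K @ (R @ X.T + t)).T
        return x[:, :2] / x[:, 2:]

    pts1 = project(X, np.eye(3), np.zeros((3, 1)))
    pts2 = project(X, R_true, t_true)

    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)              # homogeneous, up to scale
    X_rec = (Xh[:3] / Xh[3]).T

    scale = np.linalg.norm(X[0]) / np.linalg.norm(X_rec[0])         # global scale is unobservable
    print(np.allclose(R, R_true, atol=1e-3))                        # rotation recovered
    print(np.allclose(X_rec * scale, X, atol=1e-2))                 # structure recovered up to scale
    ```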

  9. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from the fast-modulated IR images utilizing a novel MOEMS modulator having a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of depth image. The captured video image of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronized captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from those four simultaneously captured independent sub-IR images, hence the motion blur problem is canceled. The resulting performance is very useful in applications of the 3D camera to human-machine interaction devices, such as the user interface of a TV, monitor, or handheld device, and motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
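
    The conversion from phase-shifted IR images to depth can be illustrated with the generic four-phase time-of-flight recovery below. The 20 MHz modulation frequency is taken from the abstract; everything else is the textbook formula, not the specific MOEMS shutter implementation.

    ```python
    # Generic four-phase time-of-flight depth recovery: estimate the modulation phase
    # offset from four IR images taken at 0, 90, 180 and 270 degrees of shutter phase,
    # then convert phase to distance.
    import numpy as np

    C = 299_792_458.0          # speed of light, m/s
    F_MOD = 20e6               # modulation frequency, Hz (from the abstract)

    def tof_depth(i0, i90, i180, i270):
        """Each argument is an image of the same size; returns depth in metres."""
        phase = np.arctan2(i90 - i270, i0 - i180)        # phase offset in [-pi, pi]
        phase = np.mod(phase, 2 * np.pi)                 # wrap into [0, 2*pi)
        return (C / (2 * F_MOD)) * phase / (2 * np.pi)   # unambiguous range C/(2*f) = 7.5 m

    # Toy check: a flat scene at 3 m produces the expected phase and comes back as 3 m.
    true_depth = 3.0
    phi = 4 * np.pi * F_MOD * true_depth / C
    i0, i90, i180, i270 = (np.full((4, 4), 1 + np.cos(phi - k * np.pi / 2)) for k in range(4))
    print(tof_depth(i0, i90, i180, i270)[0, 0])          # ~3.0
    ```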

  10. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    PubMed

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed the hand motion trajectories in different swimming styles and qualitatively compared this with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm - 2D plate: 0.73 mm) was comparable to out of water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the hands' trajectories of the expert swimmer in the style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in terms of the motion patterns and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards the quantitative 3D underwater motion analysis.
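
    The DLT reconstruction mentioned above triangulates a 3D point by stacking two linear constraints per calibrated camera and taking the smallest singular vector. A minimal sketch with synthetic projection matrices (not the calibrated underwater cameras from the study):

    ```python
    # Minimal DLT triangulation: each calibrated camera contributes two linear
    # constraints on the homogeneous 3D point; solve via SVD.
    import numpy as np

    def triangulate_dlt(projections, image_points):
        """projections: list of 3x4 camera matrices; image_points: list of (u, v)."""
        rows = []
        for P, (u, v) in zip(projections, image_points):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        X = vt[-1]
        return X[:3] / X[3]

    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), [[0.0], [0], [0]]])
    P2 = K @ np.hstack([np.eye(3), [[-1.0], [0], [0]]])   # second camera shifted 1 m sideways
    X_true = np.array([0.3, -0.2, 5.0, 1.0])
    uv = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
    print(triangulate_dlt([P1, P2], uv))    # ~ [0.3, -0.2, 5.0]
    ```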

  11. Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2015-04-01

    The processes during an eruption are very complex. To get a better understanding, several parameters are measured. One of the measured parameters is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself. The resulting velocity field provides insights into the dynamics of a vent. Here we test our algorithm for 3-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we acquired videos of lava fountains at the main fissure with 2 high-speed cameras with small angles between the cameras. Additionally, we test the algorithm on videos of the geyser Strokkur, where we had 3 cameras and larger angles between the cameras. The velocity is calculated by a correlation in the Fourier space of contiguous images. Considering that we only have the velocity field of the surface, smaller angles result in a better resolution of the velocity field in the near field. For general movements, larger angles can also be useful, e.g. to get the direction, height and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications and with different setups, depending on the application.
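
    The Fourier-space correlation of contiguous images can be sketched as a phase-correlation shift estimate between two frames; the study's full processing combines multiple cameras and maps the shifts into 3D, which is not reproduced here.

    ```python
    # Phase-correlation sketch: the pixel shift between two consecutive frames is the
    # argmax of the inverse FFT of the normalized cross-power spectrum.
    import numpy as np

    def estimate_shift(later, earlier):
        """Return the (row, col) displacement of `later` relative to `earlier`."""
        F1, F2 = np.fft.fft2(later), np.fft.fft2(earlier)
        cross_power = F1 * np.conj(F2)
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.abs(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        ny, nx = later.shape
        if dy > ny // 2: dy -= ny      # map wrap-around indices to signed shifts
        if dx > nx // 2: dx -= nx
        return int(dy), int(dx)

    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = np.roll(a, shift=(5, -3), axis=(0, 1))   # frame "b" is frame "a" displaced by (5, -3)
    print(estimate_shift(b, a))                   # -> (5, -3)
    ```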

  12. User's manual for PELE3D: a computer code for three-dimensional incompressible fluid dynamics

    SciTech Connect

    McMaster, W H

    1982-05-07

    The PELE3D code is a three-dimensional semi-implicit Eulerian hydrodynamics computer program for the solution of incompressible fluid flow coupled to a structure. The fluid and coupling algorithms have been adapted from the previously developed two-dimensional code PELE-IC. The PELE3D code is written in both plane and cylindrical coordinates. The coupling algorithm is general enough to handle a variety of structural shapes. The free surface algorithm is able to accommodate a top surface and several independent bubbles. The code is in a developmental status since all the intended options have not been fully implemented and tested. Development of this code ended in 1980 upon termination of the contract with the Nuclear Regulatory Commission.

  13. Three-dimensional parallel UNIPIC-3D code for simulations of high-power microwave devices

    SciTech Connect

    Wang Jianguo; Chen Zaigao; Wang Yue; Zhang Dianhui; Qiao Hailiang; Fu Meiyan; Yuan Yuan; Liu Chunliang; Li Yongdong; Wang Hongguang

    2010-07-15

    This paper introduces a self-developed, three-dimensional parallel fully electromagnetic particle simulation code, UNIPIC-3D. In this code, the electromagnetic fields are updated using the second-order, finite-difference time-domain method, and the particles are moved using the relativistic Newton-Lorentz force equation. The electromagnetic field and particles are coupled through the current term in Maxwell's equations. Two numerical examples are used to verify the algorithms adopted in this code; the numerical results agree well with theoretical ones. This code can be used to simulate high-power microwave (HPM) devices, such as the relativistic backward wave oscillator, coaxial vircator, and magnetically insulated line oscillator, etc. UNIPIC-3D is written in the object-oriented C++ language and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the complex geometric structures of the simulated HPM devices, which can be automatically meshed by the UNIPIC-3D code. The code has a powerful postprocessor which can display the electric field, magnetic field, current, voltage, power, spectrum, momentum of particles, etc. For the sake of comparison, results computed using the two-and-a-half-dimensional UNIPIC code are also provided for the same parameters of the HPM devices; the numerical results computed from these two codes agree well with each other.
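
    The leapfrog finite-difference time-domain update that UNIPIC-3D applies in 3D with a particle current source can be illustrated in one dimension. The sketch below uses normalized vacuum units and a soft source; it is illustrative only, not code from UNIPIC-3D.

    ```python
    # One-dimensional vacuum FDTD sketch of the leapfrog (Yee) field update.
    # Normalized units (c = dx = 1, dt = 0.5 for stability).
    import numpy as np

    nx, nt, dt = 200, 400, 0.5
    ez = np.zeros(nx)          # electric field at integer grid points
    hy = np.zeros(nx - 1)      # magnetic field staggered half a cell

    for n in range(nt):
        hy += dt * (ez[1:] - ez[:-1])            # half-step ahead in time (leapfrog)
        ez[1:-1] += dt * (hy[1:] - hy[:-1])      # update interior E from the curl of H
        ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source at the centre

    print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
    ```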

  14. Subjective and Objective Video Quality Assessment of 3D Synthesized Views With Texture/Depth Compression Distortion.

    PubMed

    Liu, Xiangkai; Zhang, Yun; Hu, Sudeng; Kwong, Sam; Kuo, C-C Jay; Peng, Qiang

    2015-12-01

    The quality assessment for synthesized video with texture/depth compression distortion is important for the design, optimization, and evaluation of the multi-view video plus depth (MVD)-based 3D video system. In this paper, the subjective and objective studies for synthesized view assessment are both conducted. First, a synthesized video quality database with texture/depth compression distortion is presented with subjective scores given by 56 subjects. The 140 videos are synthesized from ten MVD sequences with different texture/depth quantization combinations. Second, a full reference objective video quality assessment (VQA) method is proposed that addresses the annoying temporal flicker distortion and the change of spatio-temporal activity in the synthesized video. The proposed VQA algorithm performs well when evaluated on the entire synthesized video quality database, and is particularly prominent on the subsets which have significant temporal flicker distortion induced by depth compression and the view synthesis process. PMID:26292342

  15. Analysis of EEG signals regularity in adults during video game play in 2D and 3D.

    PubMed

    Khairuddin, Hamizah R; Malik, Aamir S; Mumtaz, Wajid; Kamel, Nidal; Xia, Likun

    2013-01-01

    Video games have long been part of the entertainment industry. Nonetheless, it is not well known how video games can affect us with the advancement of 3D technology. The purpose of this study is to investigate the regularity of EEG signals when playing video games in 2D and 3D modes. A total of 29 healthy subjects (24 male, 5 female) with a mean age of 21.79 (1.63) years participated. Subjects were asked to play a car racing video game in three different modes (2D, 3D passive and 3D active). In the 3D passive mode, subjects needed to wear passive polarized glasses (cinema type), while for 3D active, active shutter glasses were used. Scalp EEG data were recorded during game play using a 19-channel EEG machine, with linked ears used as the reference. After the data were pre-processed, the signal irregularity for all conditions was computed. Two parameters were used to measure signal complexity for time series data: i) Hjorth complexity and ii) the Composite Permutation Entropy Index (CPEI). Based on these two parameters, our results showed that the complexity level increased from the eyes-closed to the eyes-open condition, and further increased in the case of 3D as compared to 2D game play. PMID:24110125
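
    The Hjorth complexity used above is defined from the variances of the signal and its successive differences. A minimal sketch for a single channel, with a synthetic signal standing in for recorded EEG:

    ```python
    # Hjorth activity, mobility and complexity for one channel (textbook definitions).
    import numpy as np

    def hjorth(x):
        dx = np.diff(x)
        ddx = np.diff(dx)
        var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
        activity = var_x
        mobility = np.sqrt(var_dx / var_x)
        complexity = np.sqrt(var_ddx / var_dx) / mobility
        return activity, mobility, complexity

    fs = 256                                 # assumed sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    alpha_like = np.sin(2 * np.pi * 10 * t)                    # smooth 10 Hz oscillation
    noisy = alpha_like + 0.5 * np.random.default_rng(0).normal(size=t.size)
    print(hjorth(alpha_like)[2], hjorth(noisy)[2])             # complexity rises with irregularity
    ```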

  16. Analysis of EEG signals regularity in adults during video game play in 2D and 3D.

    PubMed

    Khairuddin, Hamizah R; Malik, Aamir S; Mumtaz, Wajid; Kamel, Nidal; Xia, Likun

    2013-01-01

    Video games have long been part of the entertainment industry. Nonetheless, it is not well known how video games can affect us with the advancement of 3D technology. The purpose of this study is to investigate the regularity of EEG signals when playing video games in 2D and 3D modes. A total of 29 healthy subjects (24 male, 5 female) with a mean age of 21.79 (1.63) years participated. Subjects were asked to play a car racing video game in three different modes (2D, 3D passive and 3D active). In the 3D passive mode, subjects needed to wear passive polarized glasses (cinema type), while for 3D active, active shutter glasses were used. Scalp EEG data were recorded during game play using a 19-channel EEG machine, with linked ears used as the reference. After the data were pre-processed, the signal irregularity for all conditions was computed. Two parameters were used to measure signal complexity for time series data: i) Hjorth complexity and ii) the Composite Permutation Entropy Index (CPEI). Based on these two parameters, our results showed that the complexity level increased from the eyes-closed to the eyes-open condition, and further increased in the case of 3D as compared to 2D game play.

  17. Multitasking the INS3D-LU code on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Yoon, Seokkwan

    1991-01-01

    This paper presents the results of multitasking the INS3D-LU code on eight processors. The code is a full Navier-Stokes solver for incompressible fluid in three dimensional generalized coordinates using a lower-upper symmetric-Gauss-Seidel implicit scheme. This code has been fully vectorized on oblique planes of sweep and parallelized using autotasking with some directives and minor modifications. The timing results for five grid sizes are presented and analyzed. The code has achieved a processing rate of over one Gflops.

  18. RELAP5-3D Code for Supercritical-Pressure Light-Water-Cooled Reactors

    SciTech Connect

    Riemke, Richard Allan; Davis, Cliff Bybee; Schultz, Richard Raphael

    2003-04-01

    The RELAP5-3D computer program has been improved for analysis of supercritical-pressure, light-water-cooled reactors. Several code modifications were implemented to correct code execution failures. Changes were made to the steam table generation, steam table interpolation, metastable states, interfacial heat transfer coefficients, and transport properties (viscosity and thermal conductivity). The code modifications now allow the code to run slow transients above the critical pressure as well as blowdown transients (modified Edwards pipe and modified existing pressurized water reactor model) that pass near the critical point.

  19. VizieR Online Data Catalog: ADAM: 3D asteroid shape reconstruction code (Viikinkoski+, 2015)

    NASA Astrophysics Data System (ADS)

    Viikinkoski, M.; Kaasalainen, M.; Durech, J.

    2015-02-01

    About the code: ADAM is a collection of routines for 3D asteroid shape reconstruction from disk-resolved observations. Any combination of lightcurves, adaptive optics images, HST/FGS data, range-Doppler radar images and disk-resolved thermal images may be used as data sources. The routines are implemented in a combination of MATLAB and C. (2 data files).

  20. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.

  1. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
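
    The mean squared displacement (MSD)-vs-time analysis referred to above can be sketched as below: estimate the MSD from a trajectory at several lag times and take the diffusion coefficient from the short-lag slope (MSD = 6Dt in 3D). The simulated random walk stands in for tracked positions; the paper's local, position-resolved binning is not reproduced.

    ```python
    # MSD vs. lag time from a particle trajectory, with the diffusion coefficient
    # taken from the short-lag slope (MSD = 6*D*t in 3-D).
    import numpy as np

    dt = 0.01            # frame interval, s (assumed)
    D_true = 0.5         # diffusion coefficient, um^2/s (assumed)
    rng = np.random.default_rng(0)
    steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(100_000, 3))
    traj = np.cumsum(steps, axis=0)                  # simulated 3-D Brownian trajectory, um

    lags = np.arange(1, 21)
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags])

    # Fit a line through the origin over the short lags: MSD(t) = 6*D*t.
    D_est = np.sum(msd * lags * dt) / (6 * np.sum((lags * dt) ** 2))
    print(f"estimated D = {D_est:.3f} um^2/s (true {D_true})")
    ```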

  2. Very low bit rate video coding standards

    NASA Astrophysics Data System (ADS)

    Zhang, Ya-Qin

    1995-04-01

    Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, the ITU-T/SG15 has formed an expert group on low bit rate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: the near-term and long-term. The near-term standard H.32P/N, based on existing compression technologies, mainly addresses the issues related to visual telephony at below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. ISO/IEC SC29/WG11, after its highly visible and successful MPEG 1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard MPEG 4. With the recent change of direction, MPEG 4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these on-going standards activities undertaken by ITU-T/LBC and ISO/MPEG 4 as of December 1994.

  3. Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Potapczuk, Mark G.

    1993-01-01

    A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by
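
    The fourth-order Runge-Kutta integration used for streamlines and droplet trajectories can be sketched as below, with an analytic swirling field standing in for the panel-code flow solution used by LEWICE3D.

    ```python
    # Minimal fourth-order Runge-Kutta stepping for streamline / trajectory integration
    # through a velocity field (analytic solid-body rotation stands in for the real flow).
    import numpy as np

    def velocity(x):
        """Assumed analytic field: rotation about the z-axis plus axial drift."""
        return np.array([-x[1], x[0], 0.1])

    def rk4_step(x, h):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * h * k1)
        k3 = velocity(x + 0.5 * h * k2)
        k4 = velocity(x + h * k3)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    x = np.array([1.0, 0.0, 0.0])
    path = [x]
    for _ in range(200):                 # march the streamline with a fixed step
        x = rk4_step(x, h=0.05)
        path.append(x)
    path = np.array(path)
    print(np.linalg.norm(path[-1][:2]))  # stays ~1.0: RK4 follows the circular streamline
    ```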

  4. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real-speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  5. A new 3-D integral code for computation of accelerator magnets

    SciTech Connect

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.

  6. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.

  7. RELAP5-3D Code Includes ATHENA Features and Models

    SciTech Connect

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF{sub 6}, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  8. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, sf6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5- 3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  9. Edge Transport Modeling using the 3D EMC3-Eirene code on Tokamaks and Stellarators

    NASA Astrophysics Data System (ADS)

    Lore, J. D.; Ahn, J. W.; Briesemeister, A.; Ferraro, N.; Labombard, B.; McLean, A.; Reinke, M.; Shafer, M.; Terry, J.

    2015-11-01

    The fluid plasma edge transport code EMC3-Eirene has been applied to aid data interpretation and understanding the results of experiments with 3D effects on several tokamaks. These include applied and intrinsic 3D magnetic fields, 3D plasma facing components, and toroidally and poloidally localized heat and particle sources. On Alcator C-Mod, a series of experiments explored the impact of toroidally and poloidally localized impurity gas injection on core confinement and asymmetries in the divertor fluxes, with the differences between the asymmetry in L-mode and H-mode qualitatively reproduced in the simulations due to changes in the impurity ionization in the private flux region. Modeling of NSTX experiments on the effect of 3D fields on detachment matched the trend of a higher density at which the detachment occurs when 3D fields are applied. On DIII-D, different magnetic field models were used in the simulation and compared against the 2D Thomson scattering diagnostic. In simulating each device different aspects of the code model are tested pointing to areas where the model must be further developed. The application to stellarator experiments will also be discussed. Work supported by U.S. DOE: DE-AC05-00OR22725, DE AC02-09CH11466, DE-FC02-99ER54512, and DE-FC02-04ER54698.

  10. ATHENA 3D: A finite element code for ultrasonic wave propagation

    NASA Astrophysics Data System (ADS)

    Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.

    2014-04-01

    The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming. However, advances in computing processor speed and memory allow them to become more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone expressed in terms of stress and particle velocities. A distinctive feature of the code is that the calculation domain is discretized with a regular Cartesian 3D mesh, while a defect of complex geometry can be described on a separate (2D) mesh using the fictitious domains method. This combines the speed of computation on regular meshes with the capability of modelling arbitrarily shaped defects. Furthermore, time evolution is handled with a quasi-explicit scheme, so that only local linear systems of small size have to be solved. Computation time is further reduced because ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. The performances in terms of calculation time are also presented for both local computer and computation cluster use.

  11. Development of Unsteady Aerodynamic and Aeroelastic Reduced-Order Models Using the FUN3D Code

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.

    2009-01-01

    Recent significant improvements to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) are implemented into the FUN3D unstructured flow solver. These improvements include the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system via a single CFD solution, minimization of the error between the full CFD and the ROM unsteady aerodynamic solution, and computation of a root locus plot of the aeroelastic ROM. Results are presented for a viscous version of the two-dimensional Benchmark Active Controls Technology (BACT) model and an inviscid version of the AGARD 445.6 aeroelastic wing using the FUN3D code.

  12. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques capturing rockfall velocity in some free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They mainly differ in the way the scaling factors between the movie frames and reality are determined. For this purpose some scale bars and targets with known dimensions have been distributed in advance along the slope. The single scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster was scaled using the four nearest targets (identified using the frontal video) from the trajectory to be analyzed; the average of the scaling factors was taken as the scaling factor. (iii) The image raster was scaled using the four nearest targets from the trajectory to be analyzed; the scaling factor for one trajectory was calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) Same as the previous method but with scaling factors varying along the trajectory. It has been shown that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, lateral deviations of the rock boulder from the fall line are not accounted for, consequently adding error to the analysis. Thus a combination of scaling methods (i) and (iv) is considered to give the best results. For best results
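
    As an illustration of the scaling step described in this record, the following sketch (Python, with hypothetical numbers; it is not the authors' code) converts a pixel track into velocities using a scale bar of known length, in the spirit of method (i):

        import numpy as np

        def scale_factor(bar_px_length, bar_true_length_m):
            # metres per pixel from a reference scale bar of known length
            return bar_true_length_m / bar_px_length

        def velocities(track_px, fps, m_per_px):
            # track_px: (N, 2) pixel positions of the boulder in successive frames
            track_m = np.asarray(track_px, dtype=float) * m_per_px
            return np.diff(track_m, axis=0) * fps   # (N-1, 2) velocities in m/s

        # hypothetical values: a 2 m scale bar spanning 400 px, 240 fps footage
        m_per_px = scale_factor(400.0, 2.0)
        v = velocities([(100, 50), (112, 80), (130, 118)], 240, m_per_px)
        print(np.linalg.norm(v, axis=1))            # speeds between frames, m/s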

  13. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects; and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  14. Overview of MPEG internet video coding

    NASA Astrophysics Data System (ADS)

    Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.

    2015-09-01

    MPEG has produced standards that have provided the industry with the best video compression technologies. In order to address the diversified needs of the Internet, MPEG issued the Call for Proposals (CfP) for internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free of charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three different codecs responded to the CfP: WVC, VCB and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom, RIM and others; it is in fact the AVC Baseline profile. VCB was proposed by Google and is in fact VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University, Korea Aerospace University and others), and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB and the AVC High Profile.

  15. Embedded multiple description coding of video.

    PubMed

    Verdicchio, Fabio; Munteanu, Adrian; Gavrilescu, Augustin I; Cornelis, Jan; Schelkens, Peter

    2006-10-01

    Real-time delivery of video over best-effort error-prone packet networks requires scalable erasure-resilient compression systems in order to 1) meet the users' requirements in terms of quality, resolution, and frame-rate; 2) dynamically adapt the rate to the available channel capacity; and 3) provide robustness to data losses, as retransmission is often impractical. Furthermore, the employed erasure-resilience mechanisms should be scalable in order to adapt the degree of resiliency against transmission errors to the varying channel conditions. Driven by these constraints, we propose in this paper a novel design for scalable erasure-resilient video coding that couples the compression efficiency of the open-loop architecture with the robustness provided by multiple description coding. In our approach, scalability and packet-erasure resilience are jointly provided via embedded multiple description scalar quantization. Furthermore, a novel channel-aware rate-allocation technique is proposed that allows for shaping on-the-fly the output bit rate and the degree of resiliency without resorting to channel coding. As a result, robustness to data losses is traded for better visual quality when transmission occurs over reliable channels, while erasure resilience is introduced when noisy links are involved. Numerical results clearly demonstrate the advantages of the proposed approach over equivalent codec instantiations employing 1) no erasure-resilience mechanisms, 2) erasure-resilience with nonscalable redundancy, or 3) data-partitioning principles.

  16. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  17. Overview of the H.264/AVC video coding standard

    NASA Astrophysics Data System (ADS)

    Luthra, Ajay; Topiwala, Pankaj N.

    2003-11-01

    H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state of the art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.

  18. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    SciTech Connect

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  19. Coupling of PIES 3-D Equilibrium Code and NIFS Bootstrap Code with Applications to the Computation of Stellarator Equilibria

    NASA Astrophysics Data System (ADS)

    Monticello, D. A.; Reiman, A. H.; Watanabe, K. Y.; Nakajima, N.; Okamoto, M.

    1997-11-01

    The existence of bootstrap currents in both tokamaks and stellarators was confirmed, experimentally, more than ten years ago. Such currents can have significant effects on the equilibrium and stability of these MHD devices. In addition, stellarators, with the notable exception of W7-X, are predicted to have such large bootstrap currents that reliable equilibrium calculations require the self-consistent evaluation of bootstrap currents. Modeling of discharges which contain islands requires an algorithm that does not assume good surfaces. Only one of the two 3-D equilibrium codes that exist, PIES (Reiman, A. H., Greenside, H. S., Comput. Phys. Commun. 43 (1986)), can easily be modified to handle bootstrap current. Here we report on the coupling of the PIES 3-D equilibrium code and the NIFS bootstrap code (Watanabe, K., et al., Nuclear Fusion 35 (1995), 335).

  20. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    PubMed Central

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-01-01

    SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) methods for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ~10^2 times faster in serial execution and >10^4 times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitations of the displacements per atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed. PMID:26658477
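
    The "Kinchin-Pease" option mentioned above counts displacements from the damage energy of a cascade. A minimal sketch of the standard NRT (Norgett-Robinson-Torrens) expression behind such counts is given below; it is a generic textbook formula with an assumed 40 eV threshold, not code taken from IM3D:

        def nrt_displacements(damage_energy_eV, threshold_eV=40.0):
            # Frenkel pairs produced by a cascade of given damage energy;
            # threshold_eV is the displacement threshold E_d (40 eV is a
            # common assumption for metals such as Fe).
            T, Ed = damage_energy_eV, threshold_eV
            if T < Ed:
                return 0.0
            if T < 2.0 * Ed / 0.8:
                return 1.0
            return 0.8 * T / (2.0 * Ed)

        print(nrt_displacements(10_000.0))  # ~100 displacements for a 10 keV cascade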

  1. Equation-of-State Test Suite for the DYNA3D Code

    SciTech Connect

    Benjamin, Russell D.

    2015-11-05

    This document describes the creation and implementation of a test suite for the Equation-of-State models in the DYNA3D code. A customized input deck has been created for each model, as well as a script that extracts the relevant data from the high-speed edit file created by DYNA3D. Each equation-of-state model is broken apart and individual elements of the model are tested, as well as the entire model. The input deck for each model is described and the results of the tests are discussed. The intent of this work is to add this test suite to the validation suite presently used for DYNA3D.

  2. Assessing the performance of a parallel MATLAB-based 3D convection code

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, G. J.; Hasenclever, J.; Phipps Morgan, J.; Shi, C.

    2008-12-01

    We are currently building 2D and 3D MATLAB-based parallel finite element codes for mantle convection and melting. The codes use the MATLAB implementation of core MPI commands (e.g. Send, Receive, Broadcast) for message passing between computational subdomains. We have found that code development and algorithm testing are much faster in MATLAB than in our previous work coding in C or FORTRAN; this code was built from scratch with only 12 man-months of effort. The one extra cost w.r.t. C coding on a Beowulf cluster is the cost of the parallel MATLAB license for a >4-core cluster. Here we present some preliminary results on the efficiency of MPI messaging in MATLAB on a small 4-machine, 16-core, 32 GB RAM Intel Q6600 processor-based cluster. Our code implements fully parallelized preconditioned conjugate gradients with a multigrid preconditioner. Our parallel viscous flow solver is currently 20% slower for a 1,000,000 DOF problem on a single core in 2D than the direct-solve MILAMIN MATLAB viscous flow solver. We have tested both continuous and discontinuous pressure formulations. We test with various configurations of network hardware, CPU speeds, and memory using our own and MATLAB's built-in cluster profiler. So far we have only explored relatively small (up to 1.6 GB RAM) test problems. We find that with our current code and Intel memory controller bandwidth limitations we can only get ~2.3 times the performance out of 4 cores compared with 1 core per machine. Even for these small problems the code runs faster with message passing between 4 machines with one core each than with 1 machine with 4 cores and internal messaging (1.29x slower), or 1 core (2.15x slower). It surprised us that for 2D ~1 GB-sized problems with only 3 multigrid levels, the direct solve on the coarsest mesh consumes comparable time to the iterative solve on the finest mesh - a penalty that is greatly reduced either by using a 4th multigrid level or by using an iterative solve at the coarsest grid level. We plan to

  3. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Much effort is invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  4. Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code

    SciTech Connect

    Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I

    1998-12-28

    An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.

  5. Compact encoding of 3-D voxel surfaces based on pattern code representation.

    PubMed

    Kim, Chang-Su; Lee, Sang-Uk

    2002-01-01

    In this paper, we propose a lossless compression algorithm for three-dimensional (3-D) binary voxel surfaces, based on the pattern code representation (PCR). In PCR, a voxel surface is represented by a series of pattern codes. The pattern of a voxel v is defined as the 3 x 3 x 3 array of voxels centered on v. Therefore, the pattern code for v informs of the local shape of the voxel surface around v. The proposed algorithm can achieve a coding gain, since the patterns of adjacent voxels are highly correlated with each other. The performance of the proposed algorithm is evaluated using various voxel surfaces, which are scan-converted from triangular mesh models. It is shown that the proposed algorithm requires only approximately 0.5-1 bits per black voxel (bpbv) to store or transmit the voxel surfaces.
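
    A minimal sketch of the pattern-code idea (Python; the bit ordering and the lack of entropy coding are simplifications of this example, not the paper's exact convention) encodes the occupancy of the 3 x 3 x 3 neighbourhood of a voxel as a 27-bit integer:

        import numpy as np

        def pattern_code(volume, v):
            # 27-bit occupancy code of the 3x3x3 neighbourhood centred on voxel v;
            # volume is a 0/1 array and v = (x, y, z) has a one-voxel border.
            x, y, z = v
            bits = volume[x-1:x+2, y-1:y+2, z-1:z+2].astype(np.uint8).ravel()
            code = 0
            for b in bits:
                code = (code << 1) | int(b)
            return code

        vol = np.zeros((5, 5, 5), dtype=np.uint8)
        vol[2, 2, 2] = vol[2, 2, 3] = 1            # a tiny two-voxel "surface"
        print(hex(pattern_code(vol, (2, 2, 2))))   # adjacent voxels give correlated codes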

  6. A new technique of recognition for coded targets in optical 3D measurement

    NASA Astrophysics Data System (ADS)

    Guo, Changye; Cheng, Xiaosheng; Cui, Haihua; Dai, Ning; Weng, Jinping

    2014-11-01

    A new technique for coded target recognition in optical 3D-measurement applications is proposed in this paper. Traditionally, point cloud registration is based on homologous features, such as the curvature, which is time-consuming and not reliable. To address this, we paste coded targets onto the surface of the object to be measured to improve target location and to obtain accurate correspondences among multi-source images. Circular coded targets are used, and an algorithm to automatically detect them is proposed. This algorithm extracts targets with intensive bimodal histogram features from a complex background, and filters noise according to their size, shape and intensity. In addition, the coded targets' identification is carried out using their ring codes. We inversely affine-transform the ring around the circle, set foreground and background to 1 and 0 respectively to constitute a binary number, and then shift it one bit at a time, taking the minimum decimal value of the binary number as the target's code. In this 3D-measurement application, we build a mutual relationship between different viewpoints containing three or more coded targets with different codes. Experiments show that the method efficiently obtains global surface data of the object to be measured and is robust to projection angles and noise.
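
    The rotation-invariant decoding step described above can be sketched as follows (Python; the 12-bit ring and its values are hypothetical): the sampled ring bits are cyclically shifted and the minimum decimal value is taken as the target's code, so the ID does not depend on the viewing rotation:

        def ring_code(bits):
            # bits: 0/1 values sampled around the code ring (foreground = 1);
            # the code is the smallest integer over all cyclic shifts.
            best = None
            for shift in range(len(bits)):
                value = int("".join(map(str, bits[shift:] + bits[:shift])), 2)
                best = value if best is None else min(best, value)
            return best

        # the same hypothetical target seen at two different rotations
        print(ring_code([1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1]))
        print(ring_code([0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1]))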

  7. The emerging High Efficiency Video Coding standard (HEVC)

    NASA Astrophysics Data System (ADS)

    Raja, Gulistan; Khan, Awais

    2013-12-01

    High definition video (HDV) is becoming more popular day by day. This paper describes the performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements for future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  8. The Transient 3-D Transport Coupled Code TORT-TD/ATTICA3D for High-Fidelity Pebble-Bed HTGR Analyses

    NASA Astrophysics Data System (ADS)

    Seubert, Armin; Sureda, Antonio; Lapins, Janis; Bader, Johannes; Laurien, Eckart

    2012-01-01

    This article describes the 3D discrete ordinates-based coupled code system TORT-TD/ATTICA3D that aims at steady state and transient analyses of pebble-bed high-temperature gas cooled reactors. In view of increasing computing power, the application of time-dependent neutron transport methods becomes feasible for best estimate evaluations of safety margins. The calculation capabilities of TORT-TD/ATTICA3D are presented along with the coupling approach, with focus on the time-dependent neutron transport features of TORT-TD. Results obtained for the OECD/NEA/NSC PBMR-400 benchmark demonstrate the transient capabilities of TORT-TD/ATTICA3D.

  9. User Guide for the R5EXEC Coupling Interface in the RELAP5-3D Code

    SciTech Connect

    Forsmann, J. Hope; Weaver, Walter L.

    2015-04-01

    This report describes the R5EXEC coupling interface in the RELAP5-3D computer code from the user's perspective. The information in the report is intended for users who want to couple RELAP5-3D to other thermal-hydraulic, neutron kinetics, or control system simulation codes.

  10. Development of a GPU-Accelerated 3-D Full-Wave Code for Reflectometry Simulations

    NASA Astrophysics Data System (ADS)

    Reuther, K. S.; Kubota, S.; Feibush, E.; Johnson, I.

    2013-10-01

    1-D and 2-D full-wave codes used as synthetic diagnostics in microwave reflectometry are standard tools for understanding electron density fluctuations in fusion plasmas. The accuracy of the code depends on how well the wave properties along the ignored dimensions can be pre-specified or neglected. In a toroidal magnetic geometry, such assumptions are never strictly correct, and ray tracing has shown that beam propagation is inherently a 3-D problem. Previously, we reported on the application of GPGPUs (General-Purpose computing on Graphics Processing Units) to a 2-D FDTD (Finite-Difference Time-Domain) code ported to utilize the parallel processing capabilities of the NVIDIA C870 and C1060. Here, we report on the development of an FDTD code for 3-D problems. Initial tests will use NVIDIA's M2070 GPU and concentrate on the launching and propagation of Gaussian beams in free space. If available, results using a plasma target will also be presented. Performance will be compared with previous generations of GPGPU cards as well as with NVIDIA's newest K20C GPU. Finally, the possibility of utilizing multiple GPGPU cards in a cluster environment or in a single node will also be discussed. Supported by U.S. DoE Grants DE-FG02-99-ER54527 and DE-AC02-09CH11466 and the DoE National Undergraduate Fusion Fellowship.

  11. A 3-D Vortex Code for Parachute Flow Predictions: VIPAR Version 1.0

    SciTech Connect

    STRICKLAND, JAMES H.; HOMICZ, GREGORY F.; PORTER, VICKI L.; GOSSLER, ALBERT A.

    2002-07-01

    This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first order algorithms that we are in the process of replacing with higher order ones. These enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post processed and viewed using software such as EnSight{trademark}.

  12. PRONTO3D users' instructions: A transient dynamic code for nonlinear structural analysis

    SciTech Connect

    Attaway, S.W.; Mello, F.J.; Heinstein, M.W.; Swegle, J.W.; Ratner, J.A.; Zadoks, R.I.

    1998-06-01

    This report provides an updated set of users' instructions for PRONTO3D. PRONTO3D is a three-dimensional, transient, solid dynamics code for analyzing large deformations of highly nonlinear materials subjected to extremely high strain rates. This Lagrangian finite element program uses an explicit time integration operator to integrate the equations of motion. Eight-node, uniform strain, hexahedral elements and four-node, quadrilateral, uniform strain shells are used in the finite element formulation. An adaptive time step control algorithm is used to improve stability and performance in plasticity problems. Hourglass distortions can be eliminated without disturbing the finite element solution using either the Flanagan-Belytschko hourglass control scheme or an assumed strain hourglass control scheme. All constitutive models in PRONTO3D are cast in an unrotated configuration defined using the rotation determined from the polar decomposition of the deformation gradient. A robust contact algorithm allows for the impact and interaction of deforming contact surfaces of quite general geometry. The Smooth Particle Hydrodynamics method has been embedded into PRONTO3D using the contact algorithm to couple it with the finite element method.

  13. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under the variations in the pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state of the art methods in terms of its robustness, flexibility, and accuracy. PMID:25296404

  14. Spacecraft charging analysis with the implicit particle-in-cell code iPic3D

    SciTech Connect

    Deca, J.; Lapenta, G.; Marchand, R.; Markidis, S.

    2013-10-15

    We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm; the latter makes it possible to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions with increasing levels of complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.
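
    The Orbital Motion Limited reference used in the first verification step can be illustrated with a small sketch (Python; it assumes a Maxwellian hydrogen plasma with equal electron and ion temperatures, an assumption of this example rather than of the record): the floating potential is where the collected electron and ion currents cancel:

        import math

        m_e, m_i = 9.109e-31, 1.673e-27      # electron and proton masses [kg]

        def balance(x):
            # x = e*phi/(k*Te); retarded electron current minus attracted ion
            # current for a small OML sphere, both normalized to the same factor
            return math.exp(x) - math.sqrt(m_e / m_i) * (1.0 - x)

        lo, hi = -10.0, 0.0                  # balance(lo) < 0 < balance(hi)
        for _ in range(60):                  # plain bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if balance(mid) < 0 else (lo, mid)

        print(f"floating potential ~ {0.5 * (lo + hi):.2f} kTe/e")   # about -2.5 for hydrogen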

  15. A quasi-3D viscous-inviscid interaction code: Q3UIC

    NASA Astrophysics Data System (ADS)

    García, N. R.; Sørensen, J. N.; Shen, W. Z.

    2014-12-01

    A computational model for predicting the aerodynamic behavior of wind turbine airfoils under rotation and subjected to steady and unsteady motions, developed in [1], is presented herein. The model is based on a viscous-inviscid interaction technique using strong coupling between the viscous and inviscid parts. The rotational effects generated by centrifugal and Coriolis forces are introduced in Q3UIC via the streamwise and spanwise integral boundary layer momentum equations. A special inviscid version of the code has been developed to cope with massive separation. To check the ability of the code, wind turbine airfoils in steady and unsteady conditions over a large range of angles of attack are considered here. Further, the new quasi-3D code Q3UIC is used to perform a parametric study of a wind turbine airfoil under rotation, confined to its boundary layer.

  16. GPU-accelerated 3D neutron diffusion code based on finite difference method

    SciTech Connect

    Xu, Q.; Yu, G.; Wang, K.

    2012-07-01

    The finite difference method, as a traditional numerical solution to the neutron diffusion equation, although considered simpler and more precise than the coarse-mesh nodal methods, has a bottleneck that limits its wide application: the huge memory and prohibitive computation time it requires. In recent years, the concept of General-Purpose computation on GPUs has provided us with a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as reference points to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 times was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was accelerated by the SOR method and Chebyshev extrapolation technique. (authors)
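
    A minimal one-group sketch of the finite-difference approach discussed above (Python; plain power iteration with Jacobi sweeps, made-up cross sections, and zero-flux boundaries) is given below. It is purely illustrative; the GPU code in this record is multi-group and far more carefully engineered:

        import numpy as np

        n, h = 20, 1.0                            # 20^3 interior nodes, 1 cm pitch
        D, sig_a, nu_sig_f = 1.0, 0.02, 0.025     # hypothetical one-group data

        phi, k = np.ones((n, n, n)), 1.0
        diag = 6.0 * D / h**2 + sig_a             # 7-point stencil diagonal

        for outer in range(50):                   # power (source) iterations
            src = nu_sig_f * phi / k
            for inner in range(50):               # Jacobi sweeps for -D*lap(phi) + sig_a*phi = src
                p = np.pad(phi, 1)                # zero flux outside the cube
                neigh = (p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1] +
                         p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1] +
                         p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2])
                phi = (src + D * neigh / h**2) / diag
            k = np.sum(nu_sig_f * phi) / np.sum(src)   # eigenvalue update
            phi /= phi.max()                           # keep the flux normalized

        print(f"k-effective ~ {k:.3f}")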

  17. A novel sensor system for 3D face scanning based on infrared coded light

    NASA Astrophysics Data System (ADS)

    Modrow, Daniel; Laloni, Claudio; Doemens, Guenter; Rigoll, Gerhard

    2008-02-01

    In this paper we present a novel sensor system for three-dimensional face scanning applications. Its operating principle is based on active triangulation with a color coded light approach. As it is implemented in the near infrared band, the light used is invisible to human perception. Though the proposed sensor is primarily designed for face scanning and biometric applications, its performance characteristics are beneficial for technical applications as well. The acquisition of 3D data is real-time capable, provides accurate and high resolution depth maps and shows high robustness against ambient light. Hence most of the limiting factors of other sensors for 3D and face scanning applications are eliminated, such as blinding and annoying light patterns, motion constraints and highly restricted scenarios due to ambient light constraints.

  18. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  19. FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces

    SciTech Connect

    Ahluwalia, R.K.; Im, K.H.

    1992-08-01

    A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique for solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S[sub 4]), the modified differential approximation (MDA) and the P[sub 1] approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO[sub 2], H[sub 2]O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and the evolution of the particle size distribution. The code is written in Fortran 77, has a modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.

  1. Foveation scalable video coding with automatic fixation selection.

    PubMed

    Wang, Zhou; Lu, Ligang; Bovik, Alan Conrad

    2003-01-01

    Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream. Specifically, we propose a foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability. The key idea is to organize the encoded bitstream to provide the best decoded video at an arbitrary bit rate in terms of foveated visual quality measurement. A foveation-based HVS model plays an important role in the algorithm. The algorithm is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks. PMID:18237905
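
    A toy illustration of the foveation idea underlying FSVC (Python; the falloff form and the 2.3-degree half-resolution eccentricity are generic illustrative choices, not the paper's calibrated HVS model) weights each pixel by its retinal eccentricity from the fixation point, so that bits can be concentrated where the viewer is looking:

        import numpy as np

        def foveation_weights(height, width, fixation, viewing_dist_px, e2_deg=2.3):
            # weight in (0, 1]: 1 at the fixation point, decaying with eccentricity
            ys, xs = np.mgrid[0:height, 0:width]
            r_px = np.hypot(xs - fixation[0], ys - fixation[1])
            ecc_deg = np.degrees(np.arctan(r_px / viewing_dist_px))
            return e2_deg / (e2_deg + ecc_deg)

        w = foveation_weights(720, 1280, fixation=(640, 360), viewing_dist_px=1800)
        print(w[360, 640], w[360, 0])   # full weight at fixation, reduced in the periphery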

  2. An analysis of brightness as a factor in visual discomfort caused by watching stereoscopic 3D video

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Woo; Kang, Hang-Bong

    2015-05-01

    Even though various research has examined the factors that cause visual discomfort in watching stereoscopic 3D video, the brightness factor has not been dealt with sufficiently. In this paper, we analyze visual discomfort under various illumination conditions by considering eye-blinking rate and saccadic eye movement. In addition, we measure the perceived depth before and after watching stereoscopic 3D video by using our own 3D depth measurement instruments. Our test sequences consist of six illumination conditions for the background. The illumination is changed from bright to dark or vice versa, while the illumination of the foreground object is constant. Our test procedure is as follows: First, the subjects are rested until a baseline of no visual discomfort is established. Then, the subjects answer six questions to check their subjective pre-stimulus discomfort level. Next, we measure the perceived depth for each subject, and the subjects watch 30-minute stereoscopic 3D or 2D video clips in random order. We measured eye-blinking and saccadic movements of the subjects using an eye-tracking device. Then, we measured the perceived depth for each subject again to detect any changes in depth perception. We also checked the subjects' post-stimulus discomfort level, and measured the perceived depth after a 40-minute post-experiment resting period to measure recovery levels. After 40 minutes, most subjects returned to normal levels of depth perception. From our experiments, we found that eye-blinking rates were higher with a dark-to-light video progression than vice versa. Saccadic eye movements were lower with a dark-to-light video progression than vice versa.

  3. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D to 3D video conversion (2D-to-3D) has recently attracted more attention from both the industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the corresponding depth of non-key-frames through key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom in/out effects, a bi-directional depth propagation scheme is adopted where a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme has better performance than an existing 2D-to-3D scheme with a fixed key-frame interval.
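
    The depth-propagation step can be pictured with a generic bilateral-weighting sketch (Python; window size and sigmas are hypothetical, and the paper's shifted bilateral filter additionally shifts the window by estimated motion and blends two key frames): each non-key-frame pixel receives a colour- and distance-weighted average of key-frame depths:

        import numpy as np

        def propagate_depth(key_rgb, key_depth, cur_rgb, radius=5, sigma_s=3.0, sigma_c=10.0):
            # colour- and distance-weighted average of key-frame depths per pixel
            h, w = key_depth.shape
            out = np.zeros((h, w))
            for y in range(h):
                for x in range(w):
                    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                    yy, xx = np.mgrid[y0:y1, x0:x1]
                    spatial = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
                    dc = cur_rgb[y, x].astype(float) - key_rgb[y0:y1, x0:x1].astype(float)
                    colour = np.exp(-np.sum(dc**2, axis=-1) / (2 * sigma_c**2))
                    wgt = spatial * colour
                    out[y, x] = np.sum(wgt * key_depth[y0:y1, x0:x1]) / np.sum(wgt)
            return out

        key_rgb = np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8)
        key_depth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
        print(propagate_depth(key_rgb, key_depth, key_rgb.copy()).shape)   # (32, 32)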

  4. The PIES2012 Code for Calculating 3D Equilibria with Islands and Stochastic Regions

    NASA Astrophysics Data System (ADS)

    Monticello, Donald; Reiman, Allan; Raburn, Daniel

    2013-10-01

    We have made major modifications to the PIES 3D equilibrium code to produce a new version, PIES2012. The new version uses an adaptive radial grid for calculating equilibrium currents. A subset of the flux surfaces conform closely to island separatrices, providing an accurate treatment of the effects driving the neoclassical tearing mode. There is now a set of grid surfaces that conform to the flux surfaces in the interiors of the islands, allowing the proper treatment of the current profiles in the islands, which play an important role in tearing phenomena. We have verified that we can introduce appropriate current profiles in the islands to suppress their growth, allowing us to simulate situations where islands are allowed to grow at some rational surfaces but not others. Placement of grid surfaces between islands is guided by the locations of high order fixed points, allowing us to avoid spectral pollution and providing more robust and smoother convergence of the code. The code now has an option for turning on a vertical magnetic field to fix the position of the magnetic axis, which models the horizontal feedback positioning of a tokamak plasma. The code has a new option for using a Jacobian-Free Newton-Krylov scheme for convergence. The code now also contains a model that properly handles stochastic regions with nonzero pressure gradients. Work supported by DOE contract DE-AC02-09CH11466.

  5. Implementation of the 3D edge plasma code EMC3-EIRENE on NSTX

    SciTech Connect

    Lore, J. D.; Canik, J. M.; Feng, Y.; Ahn, J. -W.; Maingi, R.; Soukhanovskii, V.

    2012-05-09

    The 3D edge transport code EMC3-EIRENE has been applied for the first time to the NSTX spherical tokamak. A new disconnected double null grid has been developed to allow the simulation of plasma where the radial separation of the inner and outer separatrix is less than characteristic widths (e.g. heat flux width) at the midplane. Modelling results are presented for both an axisymmetric case and a case where 3D magnetic field is applied in an n = 3 configuration. In the vacuum approximation, the perturbed field consists of a wide region of destroyed flux surfaces and helical lobes which are a mixture of long and short connection length field lines formed by the separatrix manifolds. This structure is reflected in coupled 3D plasma fluid (EMC3) and kinetic neutral particle (EIRENE) simulations. The helical lobes extending inside of the unperturbed separatrix are filled in by hot plasma from the core. The intersection of the lobes with the divertor results in a striated flux footprint pattern on the target plates. As a result, profiles of divertor heat and particle fluxes are compared with experimental data, and possible sources of discrepancy are discussed.

  7. Newly-Developed 3D GRMHD Code and its Application to Jet Formation

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity by using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous model. The preliminary results show jet formation from a geometrically thin accretion disk near both a non-rotating and a rotating black hole. We will discuss how the jet properties depend on the rotation of the black hole and the magnetic field strength.
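
    The HLL flux mentioned above can be demonstrated on a much simpler system. The sketch below (Python; grid size, time step and initial data are arbitrary choices of this example) applies an HLL flux to the 1-D Burgers equation in a first-order Godunov-type update; the GRMHD code evaluates the same kind of flux for the full relativistic MHD system:

        import numpy as np

        def hll_flux(uL, uR):
            # HLL numerical flux for Burgers' equation, f(u) = u^2/2
            fL, fR = 0.5 * uL**2, 0.5 * uR**2
            sL, sR = min(uL, uR), max(uL, uR)     # simple wave-speed estimates
            if sL >= 0.0:
                return fL
            if sR <= 0.0:
                return fR
            return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)

        n, dx, dt = 200, 1.0 / 200, 0.002
        u = np.where(np.linspace(0.0, 1.0, n) < 0.5, 1.0, 0.0)   # right-moving shock
        for _ in range(50):
            F = np.array([hll_flux(u[i], u[i + 1]) for i in range(n - 1)])
            u[1:-1] -= dt / dx * (F[1:] - F[:-1])                # update interior cells
        print(u[100:120].round(2))                               # the shock has moved right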

  8. Radiation Coupling with the FUN3D Unstructured-Grid CFD Code

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    2012-01-01

    The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.

  9. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  10. Surface 3D nanostructuring by tightly focused laser pulse: simulations by Lagrangian code and molecular dynamics

    NASA Astrophysics Data System (ADS)

    Inogamov, Nail A.; Zhakhovsky, Vasily V.

    2016-02-01

    There are many important applications in which ultrashort, diffraction-limited and therefore tightly focused laser pulses irradiate metal films mounted on a dielectric substrate. Here we present a detailed picture of laser peeling and 3D structure formation of thin gold films (thin relative to the depth of the heat-affected zone in bulk targets) on a glass substrate. The underlying physics of such diffraction-limited laser peeling was not well understood previously. Our approach is based on a physical model which takes into consideration new calculations of the two-temperature equation of state (2T EoS) and the two-temperature transport coefficients, together with the coupling parameter between the electron and ion subsystems. The usage of the 2T EoS and the kinetic coefficients is required because absorption of an ultrashort pulse with a duration of 10-1000 fs excites the electron subsystem of the metal and transfers the substance into the 2T state, with hot electrons (typical electron temperatures 1-3 eV) and much colder ions. It is shown that the formation of submicrometer-sized 3D structures is a result of the electron-ion energy transfer, melting, and delamination of the film from the substrate under the combined action of electron and ion pressures, capillary deceleration of the delaminated liquid metal or semiconductor, and ultrafast freezing of the molten material. We find that the freezing proceeds in a non-equilibrium regime with a strongly overcooled liquid phase. In this case the Stefan approximation is not applicable because the solidification front speed is limited by the diffusion rate of atoms in the molten material. To solve the problem we have developed a 2T Lagrangian code that includes all of this rich physics. We also used a high-performance combined Monte Carlo and molecular dynamics code for simulation of surface 3D nanostructuring at later times, after completion of the electron-ion relaxation.
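
    The core of the two-temperature (2T) picture described above, hot electrons relaxing to a cold lattice through an electron-phonon coupling constant, can be sketched with a 0-D relaxation step (Python; the constants are order-of-magnitude values quoted for gold, not the 2T EoS tables or the Lagrangian scheme of this record):

        Ce_coeff = 70.0        # electron heat capacity Ce = gamma*Te [J m^-3 K^-2]
        Ci = 2.5e6             # lattice heat capacity [J m^-3 K^-1]
        G = 2.5e16             # electron-phonon coupling [W m^-3 K^-1]

        Te, Ti = 20000.0, 300.0     # just after absorption: ~2 eV electrons, cold ions
        dt, t_end = 1e-15, 20e-12   # 1 fs steps over 20 ps

        for step in range(int(t_end / dt)):
            dE = G * (Te - Ti) * dt         # energy moved from electrons to the lattice
            Te -= dE / (Ce_coeff * Te)      # Ce grows linearly with Te in a metal
            Ti += dE / Ci

        print(f"after 20 ps: Te ~ {Te:.0f} K, Ti ~ {Ti:.0f} K")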

  11. Conclusions of the M3D/NIMROD Cross-Code Benchmark

    NASA Astrophysics Data System (ADS)

    Breslau, J.; Park, W.; Jardin, S.; Strauss, H.; Schnack, D.; Pankin, A.

    2004-11-01

    Cross-validation of the nonlinear M3D [1] and NIMROD [2] codes in the resistive MHD regime in tokamaks has been brought to a successful conclusion. The small but well-diagnosed CDX-U device was selected for the benchmark because its low temperature (S < 10^5) is readily handled by the two codes. The test problem consisted of determining the growth rates, eigenfunctions, and nonlinear evolution of resistive internal kink modes from a base equilibrium with q_0≈ 0.92. Good agreement between the codes is observed in all three predictions. However, there is an unexpected lack of agreement between these predictions and experimental observations: whereas the 1,1 sawtooth crash in the device is a repeating phenomenon consistent with the survival of the discharge, both codes predict a spectrum of unstable resistive ballooning modes whose growth rate increases with toroidal mode number n>1, occurring near the plasma boundary and present even when q_0>1. These findings call into question the applicability of the resistive MHD model even to low temperature tokamak plasmas and suggest the need for the addition of two-fluid terms or other new physics to make accurate predictions of their behavior. [1] W. Park, et al., Phys. Plasmas 6, 1796 (1999). [2] C.R. Sovinec, et al., Phys. Plasmas 10, 1727 (2003).

  12. What factors are related to understanding a stereoscopic 3D diabetes educational video in seniors?

    PubMed

    Liu, Chiung-ju; William, Albert

    2014-10-01

    The rise of three-dimensional imaging technology and products offers a new avenue for patient education to older adults. This study investigated older adults' perception of a three-dimensional health education video on diabetes, and factors associated with understanding the video. Twenty-one older adults without a history of diabetes watched a short diabetes educational video on a stereoscopic display. They perceived the video as helpful, valuable, and exciting, but too fast. Better understanding of the video is associated with having higher background knowledge of diabetes and greater vocabulary. Ethnicity is also a potential factor. Older adults may choose narrative information over graphic information to process a three-dimensional multimedia presentation.

  13. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to the vessel centerlines and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on the respiratory phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time during tracking is 5.4 ms.

  14. The H.264/MPEG4 advanced video coding

    NASA Astrophysics Data System (ADS)

    Gromek, Artur

    2009-06-01

    H.264/MPEG4-AVC is the newest video coding standard recommended by the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG4-AVC has recently become the leading standard for generic audiovisual services since its deployment for digital television. Nowadays it is commonly used in a wide range of video applications such as mobile services, videoconferencing, IPTV, HDTV, video storage, and many more. In this article, the author briefly describes the technology applied in the H.264/MPEG4-AVC video coding standard, approaches to its real-time implementation, and directions for future development.

  15. Simulation of a Synthetic Jet in Quiescent Air Using TLNS3D Flow Code

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Turkel, Eli

    2007-01-01

    Although the actuator geometry is highly three-dimensional, the outer flowfield is nominally two-dimensional because of the high aspect ratio of the rectangular slot. For the present study, this configuration is modeled as a two-dimensional problem. A multi-block structured grid available at the CFDVAL2004 website is used as the baseline grid. The periodic motion of the diaphragm is simulated by specifying a sinusoidal velocity at the diaphragm surface with a frequency of 450 Hz, corresponding to the experimental setup. The amplitude is chosen so that the maximum Mach number at the jet exit is approximately 0.1, to replicate the experimental conditions. At the solid walls, zero-slip, zero-injection, adiabatic-temperature, and zero-pressure-gradient conditions are imposed. In the external region, symmetry conditions are imposed on the side (vertical) boundaries and far-field conditions are imposed on the top boundary. A nominal free-stream Mach number of 0.001 is imposed in the free stream to simulate incompressible flow conditions in the TLNS3D code, which solves the compressible flow equations. The code was run in unsteady (URANS) mode until periodicity was established. The time-mean quantities were obtained by running the code for at least another 15 periods and averaging the flow quantities over these periods. The phase-locked averages of flow quantities were assumed to be coincident with their values during the last full time period.
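
    The boundary treatment described above amounts to prescribing a sinusoidal wall-normal velocity at the diaphragm surface. A minimal sketch of that forcing is given below; the ambient speed of sound and the cavity-to-slot area ratio are illustrative assumptions, not values taken from the study.

        import numpy as np

        def diaphragm_velocity(t, freq_hz=450.0, v_peak=1.0):
            # Sinusoidal wall-normal velocity imposed at the diaphragm surface.
            return v_peak * np.sin(2.0 * np.pi * freq_hz * t)

        a_inf = 347.0          # assumed ambient speed of sound, m/s
        area_ratio = 100.0     # hypothetical cavity-to-slot area ratio
        v_peak = 0.1 * a_inf / area_ratio        # keeps the jet-exit Mach number near 0.1
        t = np.linspace(0.0, 1.0 / 450.0, 64)    # one diaphragm period
        v_wall = diaphragm_velocity(t, v_peak=v_peak)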

  16. A 3D Parallel Beam Dynamics Code for Modeling High Brightness Beams in Photoinjectors

    SciTech Connect

    Qiang, Ji; Lidia, S.; Ryne, R.D.; Limborg, C.; /SLAC

    2006-02-13

    In this paper we report on IMPACT-T, a 3D beam dynamics code for modeling high brightness beams in photoinjectors and rf linacs. IMPACT-T is one of the few codes used in the photoinjector community that has a parallel implementation, making it very useful for high statistics simulations of beam halos and beam diagnostics. It has a comprehensive set of beamline elements, and furthermore allows arbitrary overlap of their fields. It is unique in its use of space-charge solvers based on an integrated Green function to efficiently and accurately treat beams with large aspect ratio, and a shifted Green function to efficiently treat image charge effects of a cathode. It is also unique in its inclusion of energy binning in the space-charge calculation to model beams with large energy spread. Together, all these features make IMPACT-T a powerful and versatile tool for modeling beams in photoinjectors and other systems. In this paper we describe the code features and present results of IMPACT-T simulations of the LCLS photoinjectors. We also include a comparison of IMPACT-T and PARMELA results.

  17. A 3D Parallel Beam Dynamics Code for Modeling High Brightness Beams in Photoinjectors

    SciTech Connect

    Qiang, J.; Lidia, S.; Ryne, R.; Limborg, C.

    2005-05-16

    In this paper we report on IMPACT-T, a 3D beam dynamics code for modeling high brightness beams in photoinjectors and rf linacs. IMPACT-T is one of the few codes used in the photoinjector community that has a parallel implementation, making it very useful for high statistics simulations of beam halos and beam diagnostics. It has a comprehensive set of beamline elements, and furthermore allows arbitrary overlap of their fields. It is unique in its use of space-charge solvers based on an integrated Green function to efficiently and accurately treat beams with large aspect ratio, and a shifted Green function to efficiently treat image charge effects of a cathode. It is also unique in its inclusion of energy binning in the space-charge calculation to model beams with large energy spread. Together, all these features make IMPACT-T a powerful and versatile tool for modeling beams in photoinjectors and other systems. In this paper we describe the code features and present results of IMPACT-T simulations of the LCLS photoinjectors. We also include a comparison of IMPACT-T and PARMELA results.

  18. Code verification for unsteady 3-D fluid-solid interaction problems

    NASA Astrophysics Data System (ADS)

    Yu, Kintak Raymond; Étienne, Stéphane; Hay, Alexander; Pelletier, Dominique

    2015-12-01

    This paper describes a procedure to synthesize Manufactured Solutions for Code Verification of an important class of Fluid-Structure Interaction (FSI) problems whose behaviors can be modeled as rigid body vibrations in incompressible fluids. We refer to this class of FSI problems as Fluid-Solid Interaction problems, which can be found in many practical engineering applications. The methodology can be utilized to develop Manufactured Solutions for both 2-D and 3-D cases. We demonstrate the procedure with our numerical code. We present details of the formulation and methodology. We also provide the reasoning behind our proposed approach. Results from grid and time step refinement studies confirm the verification of our solver and demonstrate the versatility of the simple synthesis procedure. In addition, the results also demonstrate that the modified decoupled approach to verify flow problems with high-order time-stepping schemes can be employed equally well to verify code for multi-physics problems (here, those of the Fluid-Solid Interaction) when the numerical discretization is based on the Method of Lines.

  19. Long-term radiation belt simulation with the VERB 3-D code: Comparison with CRRES observations

    NASA Astrophysics Data System (ADS)

    Subbotin, D. A.; Shprits, Y. Y.; Ni, B.

    2011-12-01

    Highly energetic electrons in the Earth’s radiation belts are hazardous for satellite equipment. Fluxes of relativistic electrons can vary by orders of magnitude during geomagnetic storms. The evolution of relativistic electron fluxes in the radiation belts is described by the 3-D Fokker-Planck equation in terms of the radial distance, energy, and equatorial pitch angle. To better understand the mechanisms that control radiation belt acceleration and loss and particle flux dynamics, we present a long-term radiation belt simulation for 100 days from 29 July to 6 November 1990 with the 3-D Versatile Electron Radiation Belt (VERB) code and compare the results with the electron fluxes observed by the Combined Release and Radiation Effects Satellite (CRRES). We also perform a comparison of Phase Space Density with a multisatellite reanalysis obtained by using Kalman filtering of observations from CRRES, Geosynchronous (GEO), GPS, and Akebono satellites. VERB 3-D simulations include radial, energy, and pitch angle diffusion and mixed energy and pitch angle diffusion driven by electromagnetic waves inside the magnetosphere with losses to the atmosphere. Boundary conditions account for the convective source of electrons and loss to the magnetopause. The results of the simulation that include all of the above processes show a good agreement with the data. The agreement implies that these processes are important for the radiation belt electron dynamics and therefore should be accounted for in outer radiation belt simulations. We also show that the results are very sensitive to the assumed wave model. Our simulations are driven only by the variation of the Kp index and variations of the seed electron population around geosynchronous orbit, which allows the model to be used for forecasting and nowcasting.
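
    To make the structure of such a simulation concrete, the sketch below advances only the radial-diffusion part of the Fokker-Planck equation, df/dt = L^2 d/dL (D_LL / L^2 df/dL), with an explicit finite-difference step; the L-shell grid, the phase-space density profile, and the D_LL model are all stand-ins, not the VERB code's actual discretization or wave model.

        import numpy as np

        def radial_diffusion_step(f, L, d_ll, dt):
            # One explicit step of df/dt = L^2 d/dL ( D_LL / L^2 * df/dL ),
            # with the boundary values of f held fixed.
            dL = L[1] - L[0]
            # Flux D_LL / L^2 * df/dL evaluated at cell interfaces.
            flux = 0.5 * (d_ll[1:] / L[1:]**2 + d_ll[:-1] / L[:-1]**2) * np.diff(f) / dL
            f_new = f.copy()
            f_new[1:-1] += dt * L[1:-1]**2 * np.diff(flux) / dL
            return f_new

        L = np.linspace(2.0, 6.5, 46)                   # L-shell grid
        f = np.exp(-(L - 4.5)**2)                        # stand-in phase-space density
        d_ll = 1e-3 * (L / 4.5)**10                      # hypothetical D_LL, per day
        f = radial_diffusion_step(f, L, d_ll, dt=1e-3)   # time step in days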

  20. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  1. GATOR: A 3-D time-dependent simulation code for helix TWTs

    SciTech Connect

    Zaidman, E.G.; Freund, H.P.

    1996-12-31

    A 3D nonlinear analysis of helix TWTs is presented. The analysis and simulation code is based upon a spectral decomposition using the vacuum sheath helix modes. The field equations are integrated on a grid and advanced in time using a MacCormack predictor-corrector scheme, and the electron orbit equations are integrated using a fourth order Runge-Kutta algorithm. Charge is accumulated on the grid and the field is interpolated to the particle location by a linear map. The effect of dielectric liners on the vacuum sheath helix dispersion is included in the analysis. Several numerical cases are considered. Simulation of the injection of a DC beam and a signal at a single frequency is compared with a linear field theory of the helix TWT interaction, and good agreement is found.
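
    The orbit integration mentioned above uses a standard fourth-order Runge-Kutta step. A generic sketch of such a step is shown below; the axial-field right-hand side is a hypothetical placeholder, not the GATOR field model.

        import numpy as np

        def rk4_step(f, y, t, dt):
            # One classical fourth-order Runge-Kutta step for dy/dt = f(t, y).
            k1 = f(t, y)
            k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
            k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
            k4 = f(t + dt, y + dt * k3)
            return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        qm = -1.7588e11   # electron charge-to-mass ratio, C/kg

        def orbit_rhs(t, y):
            # Hypothetical 1-D axial field standing in for the helix fields.
            z, vz = y
            ez = 1.0e3 * np.cos(2.0 * np.pi * z / 0.01)   # V/m
            return np.array([vz, qm * ez])

        y = np.array([0.0, 1.0e6])   # initial position (m) and velocity (m/s)
        for _ in range(1000):
            y = rk4_step(orbit_rhs, y, 0.0, 1.0e-12)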

  2. Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Vienne, Cyril; Blondé, Laurent

    2013-03-01

    Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process the information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis of these areas. In the context of media entertainment, the viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow those used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we study specifically the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye tracker. Our results bring more insight into the deployment of visual attention in stereoscopic 3D content viewing, and provide further understanding of visual attention behavior differences between 2D and 3D. Our results show that a traditional 2D scene compositing approach such as the use of foreground blur does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.

  3. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become interesting for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers a valuable groundwork for large-area building inspections. However, such thermal textures are useful for further analysis only if they are geometrically correctly extracted. This can be achieved with a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Along the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  4. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are infused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851

  5. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D Graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  6. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are infused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  7. Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code System.

    2013-06-24

    Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous energy cross sections and 700 group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700 group structure extends from 10^-5 eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701 point photon data were derived using the Livermore EPDL97 file. The new 701 point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check the author's homepage for related information: http

  8. Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping

    2010-01-01

    The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
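
    The flat-plate theory used for comparison has simple closed forms: the Blasius local skin-friction coefficient Cf = 0.664/sqrt(Re_x) and the laminar local Nusselt number Nu_x = 0.332 Re_x^(1/2) Pr^(1/3). A short evaluation of these correlations, against which simulation results can be checked, follows; the Reynolds-number range and Prandtl number are illustrative choices.

        import numpy as np

        def blasius_cf(re_x):
            # Local laminar skin-friction coefficient on a flat plate (Blasius).
            return 0.664 / np.sqrt(re_x)

        def laminar_nusselt(re_x, pr=0.71):
            # Local Nusselt number for laminar flow over an isothermal flat plate.
            return 0.332 * np.sqrt(re_x) * pr ** (1.0 / 3.0)

        re_x = np.logspace(3, 6, 7)   # local Reynolds numbers along the plate
        print(blasius_cf(re_x))
        print(laminar_nusselt(re_x))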

  9. Status and future of the 3D MAFIA group of codes

    SciTech Connect

    Ebeling, F.; Klatt, R.; Krawzcyk, F.; Lawinsky, E.; Weiland, T.; Wipf, S.G.; Steffen, B.; Barts, T.; Browman, J.; Cooper, R.K.; and others

    1988-12-01

    The group of fully three dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self-consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.

  10. Optimizing Antenna Layout for ITER Low Field Side Reflectometer using 3D Ray Tracing Code

    NASA Astrophysics Data System (ADS)

    Newbury, Sarah; Zolfaghari, Ali

    2014-10-01

    The ITER Low Field Side Reflectometer (LFSR) is being designed to provide electron density profile measurements for both the core and edge plasma through the launching of millimeter waves into the plasma and the detection of the signal of the reflected wave by a receive antenna. Because the detection of the received signal is integral to the determination of the density profile, an important goal in designing the LFSR is to optimize the coupling between launch and receive antennas. This project investigates this subject by using Genray, a 3D ray tracing code, to simulate the propagation of millimeter waves launched into and reflected by the plasma for a typical ITER case. Based upon the results of the code, beam footprints will be estimated for different cases in which both the height and toroidal angle of the launch antenna are varied. The footprints will be compared, allowing conclusions to be drawn about the optimal antenna layout for the LFSR. This method will be carried out for various frequencies of both O-mode and X-mode waves, and the effect of the scrape-off layer of the plasma will also be considered.

  11. FERM3D: A finite element R-matrix electron molecule scattering code

    NASA Astrophysics Data System (ADS)

    Tonzani, Stefano

    2007-01-01

    FERM3D is a three-dimensional finite element program for the elastic scattering of a low energy electron from a general polyatomic molecule, which is converted to a potential scattering problem. The code is based on tricubic polynomials in spherical coordinates. The electron-molecule interaction is treated as a sum of three terms: electrostatic, exchange, and polarization. The electrostatic term can be extracted directly from ab initio codes (GAUSSIAN 98 in the work described here), while the exchange term is approximated using a local density functional. A local polarization potential based on density functional theory [C. Lee, W. Yang, R.G. Parr, Phys. Rev. B 37 (1988) 785] describes the long range attraction to the molecular target induced by the scattering electron. Photoionization calculations are also possible and illustrated in the present work. The generality and simplicity of the approach are important in extending electron-scattering calculations to more complex targets than is possible with other methods. Program summary:
    Title of program: FERM3D
    Catalogue identifier: ADYL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYL_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: Intel Xeon, AMD Opteron 64 bit, Compaq Alpha
    Operating systems or monitors under which the program has been tested: HP Tru64 Unix v5.1, Red Hat Linux Enterprise 3
    Programming language used: Fortran 90
    Memory required to execute with typical data: 900 MB (neutral CO2), 2.3 GB (ionic CO2), 1.4 GB (benzene)
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized?: No
    No. of lines in distributed program, including test data, etc.: 58 383
    No. of bytes in distributed program, including test data, etc.: 561 653
    Distribution format: tar.gzip file
    CPC Program library subprograms used: ADDA, ACDP
    Nature of physical problem: Scattering of an

  12. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between the projected and recovered patterns are computed in the decoding process, which is used to generate the 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in the reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors when comparing the maximum min-SW gray code and the conventional gray code, which shows that the maximum min-SW gray code is significantly better at reducing the indirect illumination effects. To achieve sub-pixel accuracy, we project high frequency sinusoidal patterns onto the scene simultaneously. For specular surfaces, however, the high frequency patterns are susceptible to decoding errors, and incorrect decoding of high frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low frequency maximum min-SW gray code and the high frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) a complete setup of the structured light based 3D scanning system; (ii) a novel combination technique of the maximum min-SW gray code and the phase shifting code: first, phase shifting decoding with sub-pixel accuracy; then, the maximum min-SW gray code is used to resolve the period ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to extract the advantages of our new
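
    Two generic building blocks of such a system are decoding a Gray-code bit sequence into a fringe order and recovering a wrapped phase from phase-shifted sinusoidal patterns. The sketch below shows both in their textbook form; it is not the maximum min-SW variant or the authors' pipeline.

        import numpy as np

        def gray_to_binary(bits):
            # Decode a Gray-code bit sequence (MSB first) to an integer.
            value, acc = 0, 0
            for b in bits:
                acc ^= int(b)                 # running XOR recovers each binary bit
                value = (value << 1) | acc
            return value

        def four_step_phase(i1, i2, i3, i4):
            # Wrapped phase from four sinusoidal patterns shifted by 90 degrees.
            return np.arctan2(i4 - i2, i1 - i3)

        print(gray_to_binary("1101"))         # -> 9: the fringe order for this pixel
        # Absolute position = fringe_order * period + period * (phi + pi) / (2 * pi)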

  13. 3-D Computer Animation vs. Live-Action Video: Differences in Viewers' Response to Instructional Vignettes

    ERIC Educational Resources Information Center

    Smith, Dennie; McLaughlin, Tim; Brown, Irving

    2012-01-01

    This study explored computer animation vignettes as a replacement for live-action video scenarios of classroom behavior situations previously used as an instructional resource in teacher education courses in classroom management strategies. The focus of the research was to determine if the embedded behavioral information perceived in a live-action…

  14. The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance

    NASA Astrophysics Data System (ADS)

    Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.

    2015-08-01

    Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
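
    The dynamically adapted time step mentioned above is the usual CFL-limited step for an explicit scheme. A minimal sketch of that limit is given below; the velocity field, fast-mode speed, and grid spacing are stand-in values, and the actual GOEMHD3 implementation may differ in detail.

        import numpy as np

        def cfl_time_step(v, c_fast, dx, cfl=0.4):
            # Largest stable explicit step: dt <= cfl * dx / max(|v| + c_fast).
            signal_speed = np.abs(v) + c_fast
            return cfl * dx / signal_speed.max()

        v = np.random.uniform(-1.0, 1.0, (64, 64, 64))   # stand-in plasma velocity
        c_fast = np.full_like(v, 2.0)                     # stand-in fast magnetosonic speed
        print(cfl_time_step(v, c_fast, dx=0.01))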

  15. Description of FEL3D: A three dimensional simulation code for TOK and FEL

    SciTech Connect

    Dutt, S.; Friedman, A.; Gover, A.

    1988-10-20

    FEL3D is a three dimensional simulation code, written for the purpose of calculating the parameters of coherent radiation emitted by electrons in an undulator. The program was written predominantly for simulating the coherent super-radiant harmonic frequency emission of electrons which are being bunched by an external laser beam while propagating in an undulator magnet. This super-radiant emission is to be studied in the TOK (transverse optical klystron) experiment, which is under construction in the NSLS department at Brookhaven National Laboratory. The program can also calculate the stimulated emission radiometric properties of a free electron laser (FEL) taking into account three dimensional effects. While this application is presently limited to the small gain operation regime of FEL's, extension to the high gain regime is expected to be relatively easy. The code is based on a semi-analytical concept. Instead of a full numerical solution of the Maxwell-Lorentz equations, the trajectories of the electron in the wiggler field are calculated analytically, and the radiation fields are expanded in terms of free space eigen-modes. This approach permits efficient computation, with a computation time of about 0.1 sec/electron on the BNL IBM 3090. The code reflects the important three dimensional features of the electron beam, the modulating laser beam, and the emitted radiation field. The statistical approach is based on averaging over the electron initial conditions according to a given distribution function in phase space, rather than via Monte-Carlo simulation. The present version of the program is written for uniform periodic wiggler field, but extension to nonuniform fields is straightforward. 4 figs., 5 tabs.

  16. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336

  17. ORBXYZ: a 3D single-particle orbit code for following charged-particle trajectories in equilibrium magnetic fields

    SciTech Connect

    Anderson, D.V.; Cohen, R.H.; Ferguson, J.R.; Johnston, B.M.; Sharp, C.B.; Willmann, P.A.

    1981-06-30

    The single particle orbit code, TIBRO, has been modified extensively to improve the interpolation methods used and to allow use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate the spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field satisfies ∇·B = 0 to computer roundoff. When the magnetic induction is interpolated directly, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could easily be removed for other applications. Many details pertaining to this code are given on microfiche accompanying this report.

  18. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
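
    The velocity lookup described above (trilinear interpolation within a pseudo-cubic cell) has a compact generic form, sketched below with stand-in corner values; it is not extracted from the TOMO3D source.

        import numpy as np

        def trilinear(v, fx, fy, fz):
            # Trilinear interpolation inside a unit cell.
            # v is a (2, 2, 2) array of corner values; fx, fy, fz are in [0, 1].
            c00 = v[0, 0, 0] * (1 - fx) + v[1, 0, 0] * fx
            c10 = v[0, 1, 0] * (1 - fx) + v[1, 1, 0] * fx
            c01 = v[0, 0, 1] * (1 - fx) + v[1, 0, 1] * fx
            c11 = v[0, 1, 1] * (1 - fx) + v[1, 1, 1] * fx
            c0 = c00 * (1 - fy) + c10 * fy
            c1 = c01 * (1 - fy) + c11 * fy
            return c0 * (1 - fz) + c1 * fz

        corners = np.arange(8.0).reshape(2, 2, 2)   # stand-in velocities at the 8 corners
        print(trilinear(corners, 0.5, 0.5, 0.5))    # value at the cell centre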

  19. Validation Studies of the Finite Orbit Width version of the CQL3D code

    NASA Astrophysics Data System (ADS)

    Petrov, Yu. V.; Harvey, R. W.

    2014-10-01

    The Finite-Orbit-Width (FOW) version of the CQL3D bounce-averaged Fokker-Planck (FP) code has been further developed and tested. The neoclassical radial transport appears naturally in this version by averaging the local collision coefficients along guiding center orbits, with a proper transformation matrix from local (R,Z) coordinates to the midplane computational coordinates, where the FP equation is solved. In a similar way, the local quasilinear rf diffusion terms give rise to additional radial transport of orbits. The main challenge is the internal boundary conditions (IBC), which add many elements to the matrix of coefficients for the solution of the FPE on the computational grid, effectively making it a non-banded (but still sparse) matrix. Steady state runs have been achieved on NERSC supercomputers in typically 10 time steps. Validation tests are performed for NSTX conditions, but using different scaling factors of the equilibrium magnetic field, from 0.5 to 8.0. The bootstrap current calculations for ions show reasonable agreement of the current density profiles with the Sauter et al. model equations, which are based on a first-order expansion, although the magnitudes of the currents may differ by up to 30%. Supported by USDOE grants SC0006614, ER54744, and ER44649.

  20. LINFLUX-AE: A Turbomachinery Aeroelastic Code Based on a 3-D Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, M. A.; Trudell, J. J.; Mehmed, O.; Stefko, G. L.

    2004-01-01

    This report describes the development and validation of LINFLUX-AE, a turbomachinery aeroelastic code based on the linearized unsteady 3-D Euler solver, LINFLUX. A helical fan with flat plate geometry is selected as the test case for numerical validation. The steady solution required by LINFLUX is obtained from the nonlinear Euler/Navier Stokes solver TURBO-AE. The report briefly describes the salient features of LINFLUX and the details of the aeroelastic extension. The aeroelastic formulation is based on a modal approach. An eigenvalue formulation is used for flutter analysis. The unsteady aerodynamic forces required for flutter are obtained by running LINFLUX for each mode, interblade phase angle and frequency of interest. The unsteady aerodynamic forces for forced response analysis are obtained from LINFLUX for the prescribed excitation, interblade phase angle, and frequency. The forced response amplitude is calculated from the modal summation of the generalized displacements. The unsteady pressures, work done per cycle, eigenvalues and forced response amplitudes obtained from LINFLUX are compared with those obtained from LINSUB, TURBO-AE, ASTROP2, and ANSYS.

  1. Simulation on a photocathode-based microtron using a 3D PIC code

    NASA Astrophysics Data System (ADS)

    Park, Sunjeong; Jeong, Young Uk; Park, Seong Hee; Jang, Kyu-Ha; Vinokurov, Nikolay A.; Kim, Eun-San

    2015-02-01

    The Korea Atomic Energy Research Institute (KAERI) has used a microtron accelerator based on a thermionic cathode for operating a compact terahertz (THz) FEL, where the electrons are emitted and accelerated automatically during the radio-frequency (RF) macro-pulse whenever the power exceeds the threshold for emission. Usually a thermionic cathode is embedded inside the microtron cavity for electron-beam emission, and acceleration is provided at the same time by the input RF source. In this case the accelerator scheme is simple, but only a fraction of the emitted electrons are accelerated, and the electron bunch length is uncontrollable due to the RF phase condition for acceleration. In this paper, a photocathode-based microtron, which is able to produce high-peak-current (˜100 A) and ultrashort (˜1 ps) electron bunches, is studied as an electron injector for a THz generator. In particular, we analyzed the electron beam dynamics along the accelerating trajectory with a 3D PIC code to find the optimized RF phase and laser input time.

  2. Implementation of wall boundary conditions for transpiration in F3D thin-layer Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Martin, F. W., Jr.

    1991-01-01

    Numerical boundary conditions for mass injection/suction at the wall are incorporated in the thin-layer Navier-Stokes code F3D. The accuracy of the boundary conditions and the code is assessed by a detailed comparison of the predicted velocity distributions and skin-friction coefficients with exact similarity solutions for laminar flow over a flat plate with variable blowing/suction, and with measurements for turbulent flow past a flat plate with uniform blowing. In laminar flow, the F3D predictions for the friction coefficient compare well with the exact similarity solution with and without suction, but produce large errors at moderate-to-large values of blowing. A slight Mach number dependence of the skin-friction coefficient due to blowing in turbulent flow is computed by the F3D code. Predicted surface pressures for turbulent flow past an airfoil with mass injection are in qualitative agreement with measurements for a flat plate.

  3. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  4. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce the semantic gap that exists between the high-level semantics and the low-level features of video when humans interpret images or video, most work has pursued video annotation downstream of the signal, that is, attaching labels (again) to content already stored in a video database. Few have focused on the alternative idea: use limited interaction and comprehensive segmentation (including optical techniques) at the front end of video acquisition (i.e. the video camera), together with video semantics analysis technology, concept sets (i.e. ontologies) belonging to a certain domain, the story shooting script, the task description of the scene shooting, etc.; apply semantic descriptions at different levels to enrich the attributes of video objects and image regions; and thereby form a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for such a new video model, provisionally named the Semantic-Preload Video Model (also written as the Video Model of Semantic-Preload, abbreviated SPVM or VMoSP). The model mainly addresses how to attach labels to video objects and image regions in real time, where video objects and image regions usually receive intermediate-level semantic labels, and this work is placed upstream of the signal (i.e. at the video capture and production stage). To support this, the paper also analyses the hierarchical structure of video and divides it into nine semantic levels, which apply only to the video production process. In addition, the paper points out that the semantic-level tagging considered here (i.e. semantic preloading) refers only to the four middle levels. All in

  5. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.

    2012-12-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also
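
    The damped, iterative linearized inversion mentioned above can be illustrated with SciPy's LSQR solver applied to a random stand-in sensitivity kernel and residual vector; the problem sizes, matrix density, and damping value are arbitrary illustrations, not the tomo3d setup.

        import numpy as np
        from scipy.sparse import random as sparse_random
        from scipy.sparse.linalg import lsqr

        n_rays, n_nodes = 200, 500                                       # hypothetical problem size
        G = sparse_random(n_rays, n_nodes, density=0.05, format='csr')   # stand-in sensitivity kernel
        d = np.random.randn(n_rays)                                      # stand-in travel-time residuals

        # Damped least squares: minimize ||G m - d||^2 + damp^2 ||m||^2.
        m = lsqr(G, d, damp=0.1)[0]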

  6. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation in over 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
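
    The PSNR figures quoted above follow from the standard definition, PSNR = 10 log10(MAX^2 / MSE). A small helper for computing it on 8-bit luma frames is sketched below; the frames and the corruption pattern are synthetic stand-ins, not the experimental material.

        import numpy as np

        def psnr(reference, decoded, max_value=255.0):
            # Peak signal-to-noise ratio in dB between two frames.
            mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
            return float('inf') if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

        ref = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)   # stand-in luma frame
        dec = ref.copy()
        dec[::50, :] = 0              # crude stand-in for loss-induced corruption
        print(psnr(ref, dec))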

  7. Unequal-period combination approach of gray code and phase-shifting for 3-D visual measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin

    2016-09-01

    The combination of Gray code and phase-shifting is so far the most practical and advanced approach for structured-light 3-D measurement, as it can measure objects with complex and discontinuous surfaces. However, in the traditional combination of Gray code and phase-shifting, the captured Gray code images do not always show a sharp cut-off at the black-white transition boundaries, which may lead to wrongly decoded analog code orders. Moreover, during actual measurement, local decoding errors also occur in the wrapped analog code obtained with the phase-shifting approach. Therefore, in the traditional approach, the wrong analog code orders and the local decoding errors introduce errors equivalent to one fringe period when the analog code is unwrapped. In order to avoid one-fringe-period errors, we propose an approach which combines Gray code with phase-shifting using unequal periods. Through theoretical analysis, we build the measurement model of the proposed approach, determine the applicable conditions, and optimize the Gray code encoding period and the phase-shifting fringe period. The experimental results verify that the proposed approach can offer a reliable unwrapped analog code, which can be used in 3-D shape measurement.
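
    The idea of letting the coarse code fix the fringe order of the wrapped phase can be written generically as follows; this is the standard coarse-to-fine combination, not the specific unequal-period model derived in the paper, and the numbers are illustrative.

        import numpy as np

        def unwrap_with_coarse(phi_wrapped, coarse_pos, period):
            # Pick the fringe order that brings the phase-shifting estimate closest
            # to the coarse absolute position supplied by the Gray code.
            fine = period * (phi_wrapped + np.pi) / (2.0 * np.pi)   # position within one fringe
            k = np.round((coarse_pos - fine) / period)              # integer fringe order
            return fine + k * period

        phi = np.array([-2.0, 0.5, 3.0])        # wrapped phases in (-pi, pi]
        coarse = np.array([31.0, 50.0, 88.0])    # coarse positions from the Gray code (pixels)
        print(unwrap_with_coarse(phi, coarse, period=16.0))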

  8. Joint-source-channel coding scheme for scalable video-coding-based digital video broadcasting, second generation satellite broadcasting system

    NASA Astrophysics Data System (ADS)

    Seo, Kwang-Deok; Chi, Won Sup; Lee, In Ki; Chang, Dae-Ig

    2010-10-01

    We propose a joint-source-channel coding (JSCC) scheme that can provide and sustain high-quality video service in spite of deteriorated transmission channel conditions in the second-generation digital video broadcasting (DVB-S2) satellite broadcasting service. In particular, by combining the layered characteristics of SVC (scalable video coding) video and the robust channel coding capability of the LDPC (low-density parity check) codes employed in DVB-S2, a new concept of JSCC for digital satellite broadcasting service is developed. Rain attenuation in high-frequency bands such as the Ka band is a major factor in lowering the link capacity of satellite broadcasting services. Therefore, it is necessary to devise a new technology that dynamically manages the rain attenuation by adopting a JSCC scheme that can apply variable code rates to both source and channel coding. For this purpose, we develop a JSCC scheme combining SVC and LDPC, and prove the performance of the proposed JSCC scheme by extensive simulations in which SVC coded video is transmitted over various error-prone channels with AWGN (additive white Gaussian noise) patterns in the DVB-S2 broadcasting service.

  9. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
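
    The one-minus-cosine gust referred to above has the standard form u(t) = (U_ds/2)(1 - cos(2 pi t / T_g)) over the gust duration and zero elsewhere. A short sketch of that profile is given below; the peak velocity and duration are illustrative, not the values used in the FUN3D runs.

        import numpy as np

        def one_minus_cosine_gust(t, u_ds, t_g):
            # '1 - cos' gust: rises from 0 to u_ds and back to 0 over t_g seconds.
            u = 0.5 * u_ds * (1.0 - np.cos(2.0 * np.pi * t / t_g))
            return np.where((t >= 0.0) & (t <= t_g), u, 0.0)

        t = np.linspace(-0.1, 0.5, 200)
        gust = one_minus_cosine_gust(t, u_ds=10.0, t_g=0.25)   # 10 m/s peak, 0.25 s duration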

  10. Perceptual vector quantization for video coding

    NASA Astrophysics Data System (ADS)

    Valin, Jean-Marc; Terriberry, Timothy B.

    2015-03-01

    This paper applies energy conservation principles to the Daala video codec using gain-shape vector quantization to encode a vector of AC coefficients as a length (gain) and direction (shape). The technique originates from the CELT mode of the Opus audio codec, where it is used to conserve the spectral envelope of an audio signal. Conserving energy in video has the potential to preserve textures rather than low-passing them. Explicitly quantizing a gain allows a simple contrast masking model with no signaling cost. Vector quantizing the shape keeps the number of degrees of freedom the same as scalar quantization, avoiding redundancy in the representation. We demonstrate how to predict the vector by transforming the space it is encoded in, rather than subtracting off the predictor, which would make energy conservation impossible. We also derive an encoding of the vector-quantized codewords that takes advantage of their non-uniform distribution. We show that the resulting technique outperforms scalar quantization by an average of 0.90 dB on still images, equivalent to a 24.8% reduction in bitrate at equal quality, while for videos, the improvement averages 0.83 dB, equivalent to a 13.7% reduction in bitrate.
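
    A minimal sketch of the gain-shape split that this technique builds on: the coefficient vector is represented by a scalar gain (its L2 norm, quantized on its own so energy is preserved) and a unit-norm shape matched against a codebook. The actual Daala scheme uses pyramid vector quantization and prediction in a transformed space; the codebook-based shape search below is only an illustrative stand-in.

    ```python
    import numpy as np

    def gain_shape_encode(x, gain_step, shape_codebook):
        """Split x into a scalar gain and a unit-norm shape, then quantize each part."""
        gain = np.linalg.norm(x)
        gain_idx = int(round(gain / gain_step))               # scalar-quantized gain
        shape = x / gain if gain > 0 else np.zeros_like(x)
        shape_idx = int(np.argmax(shape_codebook @ shape))    # nearest unit-norm codeword
        return gain_idx, shape_idx

    def gain_shape_decode(gain_idx, shape_idx, gain_step, shape_codebook):
        """Reconstruction keeps the quantized energy regardless of the shape chosen."""
        return gain_idx * gain_step * shape_codebook[shape_idx]
    ```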

  11. Interaction and behaviour imaging: a novel method to measure mother-infant interaction using video 3D reconstruction.

    PubMed

    Leclère, C; Avril, M; Viaux-Savelon, S; Bodeau, N; Achard, C; Missonnier, S; Keren, M; Feldman, R; Chetouani, M; Cohen, D

    2016-05-24

    Studying early interaction is essential for understanding development and psychopathology. Automatic computational methods offer the possibility to analyse social signals and behaviours of several partners simultaneously and dynamically. Here, 20 dyads of mothers and their 13-36-month-old infants, including 10 extremely high-risk and 10 low-risk dyads, were videotaped during mother-infant interaction using two-dimensional (2D) and three-dimensional (3D) sensors. From 2D+3D data and 3D space reconstruction, we extracted individual parameters (quantity of movement and motion activity ratio for each partner) and dyadic parameters related to the dynamics of the partners' head distance (contribution to head distance), to the focus of mutual engagement (percentage of time spent face to face or oriented to the task) and to the dynamics of motion activity (synchrony ratio, overlap ratio, pause ratio). These features were compared with a blind global rating of the interaction using the Coding Interactive Behavior (CIB) scale. We found that individual and dyadic parameters of the 2D+3D motion features correlate perfectly with the rated CIB maternal and dyadic composite scores. Support Vector Machine classification using all 2D-3D motion features classified 100% of the dyads into their group, meaning that motion behaviours are sufficient to distinguish high-risk from low-risk dyads. The proposed method may provide a promising, low-cost methodology that can uniquely use artificial technology to detect meaningful features of human interactions and may have several implications for studying dyadic behaviours in psychiatry. Combining both global rating scales and computerized methods may enable a continuum of time scales, from a summary of the entire interaction to second-by-second dynamics.
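
    A hedged sketch of the final classification step described here, using a generic linear SVM with leave-one-out cross-validation from scikit-learn; the feature matrix is random stand-in data, since the real 2D+3D motion features are not reproduced in this record.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    # X: one row per dyad of 2D+3D motion features (quantity of movement, activity
    # ratios, synchrony/overlap/pause ratios, ...); random data stands in for them here.
    # y: 1 = extremely high-risk dyad, 0 = low-risk dyad.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 9))
    y = np.array([1] * 10 + [0] * 10)

    clf = SVC(kernel="linear", C=1.0)
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print(f"leave-one-out accuracy: {scores.mean():.2f}")
    ```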

  12. Interaction and behaviour imaging: a novel method to measure mother–infant interaction using video 3D reconstruction

    PubMed Central

    Leclère, C; Avril, M; Viaux-Savelon, S; Bodeau, N; Achard, C; Missonnier, S; Keren, M; Feldman, R; Chetouani, M; Cohen, D

    2016-01-01

    Studying early interaction is essential for understanding development and psychopathology. Automatic computational methods offer the possibility to analyse social signals and behaviours of several partners simultaneously and dynamically. Here, 20 dyads of mothers and their 13–36-month-old infants, including 10 extremely high-risk and 10 low-risk dyads, were videotaped during mother–infant interaction using two-dimensional (2D) and three-dimensional (3D) sensors. From 2D+3D data and 3D space reconstruction, we extracted individual parameters (quantity of movement and motion activity ratio for each partner) and dyadic parameters related to the dynamics of the partners' head distance (contribution to head distance), to the focus of mutual engagement (percentage of time spent face to face or oriented to the task) and to the dynamics of motion activity (synchrony ratio, overlap ratio, pause ratio). These features were compared with a blind global rating of the interaction using the Coding Interactive Behavior (CIB) scale. We found that individual and dyadic parameters of the 2D+3D motion features correlate perfectly with the rated CIB maternal and dyadic composite scores. Support Vector Machine classification using all 2D–3D motion features classified 100% of the dyads into their group, meaning that motion behaviours are sufficient to distinguish high-risk from low-risk dyads. The proposed method may provide a promising, low-cost methodology that can uniquely use artificial technology to detect meaningful features of human interactions and may have several implications for studying dyadic behaviours in psychiatry. Combining both global rating scales and computerized methods may enable a continuum of time scales, from a summary of the entire interaction to second-by-second dynamics. PMID:27219342

  13. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 2; Scattering Plots

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This second volume of Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code provides the scattering plots referenced by Volume 1. There are 648 plots. Half are for the 8750 rpm "high speed" operating condition and the other half are for the 7031 rpm "mid speed" operating condition.

  14. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Ranero, C. R.

    2012-04-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also
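
    For illustration, the graph step described above amounts to a shortest-path search over the discrete node connections; the sketch below uses Dijkstra's algorithm with a user-supplied per-segment travel-time function. Everything here (the node representation, `neighbors`, `traveltime`) is an assumption for the example, not tomo3d's implementation.

    ```python
    import heapq

    def shortest_traveltime(nodes, neighbors, traveltime, src, rcv):
        """Dijkstra over the discrete model nodes: the minimum-time node sequence is the
        graph-method approximation to the ray path required by Fermat's principle."""
        time = {n: float("inf") for n in nodes}
        prev = {}
        time[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            t, u = heapq.heappop(heap)
            if u == rcv:
                break
            if t > time[u]:
                continue
            for v in neighbors[u]:                 # the "forward star" of node u
                cand = t + traveltime(u, v)        # segment time from the local slowness
                if cand < time[v]:
                    time[v], prev[v] = cand, u
                    heapq.heappush(heap, (cand, v))
        path = [rcv]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return time[rcv], path[::-1]
    ```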

  15. Fast mode decision for 3D-HEVC depth intracoding.

    PubMed

    Zhang, Qiuwen; Li, Nana; Wu, Qinggang

    2014-01-01

    The emerging international 3D video coding standard based on high efficiency video coding (3D-HEVC) is a successor to multiview video coding (MVC). In 3D-HEVC depth intracoding, the depth modeling mode (DMM) and the high efficiency video coding (HEVC) intraprediction modes are both evaluated to select the best coding mode for each coding unit (CU). This technique achieves the highest possible coding efficiency, but it results in extremely long encoding times that keep 3D-HEVC from practical application. In this paper, a fast mode decision algorithm based on the correlation between the texture video and the depth map is proposed to reduce the computational complexity of 3D-HEVC depth intracoding. Since the texture video and its associated depth map represent the same scene, there is a high correlation between the prediction modes of the texture video and the depth map. Therefore, we can skip specific depth intraprediction modes that are rarely used in the related texture CU. Experimental results show that the proposed algorithm significantly reduces the computational complexity of 3D-HEVC depth intracoding while maintaining coding efficiency. PMID:24963512
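
    A hedged sketch of the kind of candidate pruning the abstract describes: the depth intra candidate list is restricted using the co-located texture CU's mode, and DMMs are only evaluated when the texture block suggests an edge. The specific rule and thresholds below are illustrative, not the paper's algorithm.

    ```python
    def depth_intra_candidates(texture_mode, angular_modes, dmm_modes):
        """Reduced candidate list for a depth CU based on its co-located texture CU.

        Illustrative rule: always keep Planar (0) and DC (1); if the texture CU used
        an angular mode, keep nearby angular modes and also try the DMM wedgelet
        modes, since an edge in texture often has a matching edge in depth."""
        hevc = {0, 1}
        dmms = []
        if texture_mode >= 2:                              # texture CU is angular
            hevc.update(m for m in angular_modes if abs(m - texture_mode) <= 2)
            dmms = list(dmm_modes)
        return sorted(hevc) + dmms

    # Example: texture CU coded with angular mode 26 (near-vertical)
    print(depth_intra_candidates(26, range(2, 35), ["DMM1", "DMM4"]))
    ```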

  16. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video are still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the macroblock headers within the same P slice, and all the luma and chroma DC coefficients belonging to the macroblocks within that slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three blocks are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
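
    A minimal sketch of the outer loop of the second method as described: scan the byte stream for 0x000001 start codes and encrypt the data that follows each one (done here at byte granularity for simplicity). The `encrypt_block` callable is a placeholder for an actual cipher, and a real implementation would also have to avoid emulating new start codes in the encrypted output.

    ```python
    def selectively_encrypt(bitstream: bytes, n: int, encrypt_block) -> bytes:
        """Encrypt the n bytes that follow every 0x000001 start code (sketch only)."""
        out = bytearray(bitstream)
        i = 0
        while True:
            i = bitstream.find(b"\x00\x00\x01", i)
            if i < 0:
                break
            start = i + 3                                 # first payload byte after the start code
            out[start:start + n] = encrypt_block(bytes(out[start:start + n]))
            i = start
        return bytes(out)

    # Example with a toy XOR "cipher" standing in for a real block/stream cipher.
    toy_cipher = lambda block: bytes(b ^ 0x5A for b in block)
    protected = selectively_encrypt(b"\x00\x00\x01\x65header-and-data", 4, toy_cipher)
    ```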

  17. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance.

    PubMed

    Qiu, Jimmy; Hope, Andrew J; Cho, B C John; Sharpe, Michael B; Dickie, Colleen I; DaCosta, Ralph S; Jaffray, David A; Weersink, Robert A

    2012-10-21

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in the assessment of normal tissue response to radiation and in the visualization of irradiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope, allowing registration of 2D endoscopic images to CT volumetric images and to the radiation doses planned with respect to those images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. The radiation dose at the surface, or at a known depth below the surface, was assigned to each segment of the virtual surface. Dose could be displayed either as a colorwash on this surface or as surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data from a patient with laryngeal cancer treated with radiation therapy, with an estimated display accuracy of ∼2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue

  18. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

    SciTech Connect

    Carbajo, Juan J; Qualls, A L

    2008-01-01

    The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW (thermal) and 40 kW (net electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium, which is liquid at ambient temperature). This space reactor is intended to be deployed on the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed in early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing a more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical HXs. The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant, at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the coolant flows and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the

  19. Magnetotelluric 3-D inversion—a review of two successful workshops on forward and inversion code testing and comparison

    NASA Astrophysics Data System (ADS)

    Miensopust, Marion P.; Queralt, Pilar; Jones, Alan G.; 3D MT modellers

    2013-06-01

    Over the last half decade, the need for, and importance of, three-dimensional (3-D) modelling of magnetotelluric (MT) data have increased dramatically, and various 3-D forward and inversion codes are in use, some of which have become commonly available. Comparison of forward responses and inversion results is an important step for code testing and validation prior to 'production' use. The various codes use different mathematical approximations to the problem (finite differences, finite elements or integral equations), various orientations of the coordinate system, different sign conventions for the time dependence and various inversion strategies. Additionally, the obtained results depend on data analysis, selection and correction, as well as on the chosen mesh, inversion parameters and regularization adopted; therefore, careful and knowledge-based use of the codes is essential. In 2008 and 2011, during two workshops at the Dublin Institute for Advanced Studies, over 40 people from academia (scientists and students) and industry from around the world met to discuss 3-D MT inversion. These workshops brought together code writers as well as code users to assess the current status of 3-D modelling, to compare the results of different codes, and to discuss and think about future improvements and new aims in 3-D modelling. To test the numerical forward solutions, two 3-D models were designed to compare the responses obtained by different codes and/or users. Furthermore, inversion results for these two data sets and for two additional data sets obtained from unknown models (secret models) were also compared. In this manuscript the test models and data sets are described (supplementary files are available) and comparisons of the results are shown. Details regarding the data used, forward and inversion parameters, as well as computational power are summarized for each case, and the main discussion points of the workshops are reviewed. In general, the responses

  20. Early SKIP mode decision for three-dimensional high efficiency video coding using spatial and interview correlations

    NASA Astrophysics Data System (ADS)

    Zhang, Qiuwen; Wu, Qinggang; Wang, Xiaobing; Gan, Yong

    2014-09-01

    In the test model of the high efficiency video coding (HEVC) standard-based three-dimensional (3-D) video coding (3-D-HEVC), variable-size motion estimation (ME) and disparity estimation (DE) are employed to select the best coding mode for each treeblock in the encoding process. This technique achieves the highest possible coding efficiency, but it brings extremely high computational complexity that keeps 3-D-HEVC from practical applications. An early SKIP mode decision algorithm based on spatial and inter-view correlations is proposed to reduce the computational complexity of the ME/DE procedures. The basic idea of the method is to use the spatial and inter-view properties of the coding information in previously coded frames to predict the prediction mode of the current treeblock and to skip unnecessary variable-size ME and DE early. Experimental results show that the proposed algorithm significantly reduces the computational complexity of 3-D-HEVC while maintaining nearly the same rate-distortion performance as the original encoder.
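
    A hedged sketch of the early-SKIP idea: if the spatially neighbouring treeblocks and the co-located treeblock in the already-coded neighbouring view were all coded as SKIP, choose SKIP immediately and bypass the variable-size ME/DE search. The decision rule and function names below are illustrative only.

    ```python
    def early_skip(spatial_neighbor_modes, interview_colocated_mode):
        """Take SKIP early when every spatial neighbour and the co-located treeblock
        in the neighbouring (already coded) view were themselves coded as SKIP."""
        hints = list(spatial_neighbor_modes) + [interview_colocated_mode]
        return len(hints) > 0 and all(m == "SKIP" for m in hints)

    def encode_treeblock(block, spatial_neighbor_modes, interview_colocated_mode,
                         code_as_skip, full_me_de):
        if early_skip(spatial_neighbor_modes, interview_colocated_mode):
            return code_as_skip(block)      # no variable-size ME/DE search is run
        return full_me_de(block)            # fall back to the full mode decision
    ```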

  1. DCM3D: A dual-continuum, three-dimensional, ground-water flow code for unsaturated, fractured, porous media

    SciTech Connect

    Updegraff, C.D. ); Lee, C.E. ); Gallegos, D.P. )

    1991-02-01

    This report constitutes the user's manual for DCM3D. DCM3D is a computer code for solving three-dimensional, ground-water flow problems in variably saturated, fractured porous media. The code is based on a dual-continuum model with porous media comprising one continuum and fractures comprising the other. The continua are connected by a transfer term that depends on the unsaturated permeability of the porous medium. An integrated finite-difference scheme is used to discretize the governing equations in space. The time-dependent term is allowed to remain continuous. The resulting set of ordinary differential equations (ODEs) is solved with a general ODE solver, LSODES. The code is capable of handling transient, spatially dependent source terms and boundary conditions. The boundary conditions can be either prescribed head or prescribed flux. 24 refs., 22 figs., 5 tabs.

  2. Validation of the BISON 3D Fuel Performance Code: Temperature Comparisons for Concentrically and Eccentrically Located Fuel Pellets

    SciTech Connect

    J. D. Hales; D. M. Perez; R. L. Williamson; S. R. Novascone; B. W. Spencer

    2013-03-01

    BISON is a modern finite-element based nuclear fuel performance code that has been under development at the Idaho National Laboratory (USA) since 2009. The code is applicable to both steady and transient fuel behaviour and is used to analyse either 2D axisymmetric or 3D geometries. BISON has been applied to a variety of fuel forms including LWR fuel rods, TRISO-coated fuel particles, and metallic fuel in both rod and plate geometries. Code validation is currently in progress, principally by comparison to instrumented LWR fuel rods. Halden IFA experiments constitute a large percentage of the current BISON validation base. The validation emphasis here is on centreline temperatures at the beginning of fuel life, with comparisons made to seven rods from the IFA-431 and IFA-432 assemblies. The principal focus is IFA-431 Rod 4, which included concentrically and eccentrically located fuel pellets. This experiment provides an opportunity to explore 3D thermomechanical behaviour and to assess the 3D simulation capabilities of BISON. Analysis results agree with experimental results, showing lower fuel centreline temperatures for eccentric fuel with the peak temperature shifted from the centreline. The comparison confirms with modern 3D analysis tools that the measured temperature difference between concentric and eccentric pellets is not an artefact, and provides a quantitative explanation for the difference.

  3. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  4. Efficient multiview depth video coding using depth synthesis prediction

    NASA Astrophysics Data System (ADS)

    Lee, Cheon; Choi, Byeongho; Ho, Yo-Sung

    2011-07-01

    The view synthesis prediction (VSP) method exploits inter-view correlations between views by generating an additional reference frame in multiview video coding. This paper describes a multiview depth video coding scheme that incorporates depth view synthesis and additional prediction modes. In the proposed scheme, we exploit the reconstructed neighboring depth frame to generate an additional reference depth image for the current viewpoint to be coded, using the depth-image-based rendering technique. In order to generate high-quality reference depth images, we use depth pre-processing, depth image warping, and two types of hole filling methods depending on the number of available reference views. After synthesizing the additional depth image, we encode the depth video using the proposed additional prediction modes, named VSP modes, which refer to the synthesized depth image. In particular, the VSP_SKIP mode refers to the co-located block of the synthesized frame without coding motion vectors or residual data, which provides most of the coding gains. Experimental results demonstrate that the proposed depth view synthesis method provides high-quality depth images for the current view and that the proposed VSP modes provide high coding gains, especially on the anchor frames.
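
    As an illustration of what a VSP_SKIP-style mode amounts to, the sketch below reconstructs a block by copying the co-located block of the synthesized reference frame, with no motion vectors and no residual signalled; the helper names and the cost measure are assumptions for the example, not the paper's encoder.

    ```python
    import numpy as np

    def vsp_skip_block(synth_ref, x, y, size):
        """VSP_SKIP-style reconstruction: copy the co-located block of the synthesized
        reference frame; no motion vector and no residual are signalled."""
        return synth_ref[y:y + size, x:x + size].copy()

    def vsp_skip_distortion(current, synth_ref, x, y, size):
        """SSD if the block is coded with VSP_SKIP (the rate is essentially one mode flag)."""
        pred = vsp_skip_block(synth_ref, x, y, size).astype(np.int64)
        cur = current[y:y + size, x:x + size].astype(np.int64)
        return int(np.sum((cur - pred) ** 2))
    ```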

  5. Video based lifting technique coding system.

    PubMed

    Hsiang, S M; Brogmus, G E; Martin, S E; Bezverkhny, I B

    1998-03-01

    Despite automation and improved working conditions, many materials in industry are still handled manually. Among the basic activities involved in manual materials handling, lifting is the one most frequently associated with low-back pain (LBP). Biomechanical analysis techniques have been used to better understand the risk factors associated with manual handling, but because these techniques require specialized equipment and highly trained personnel, and interfere with normal business operations, they are limited in their usefulness. A video based lifting technique analysis system (the VidLiTeC(TM) System) is presented that provides quantifiable, non-invasive biomechanical analysis of the dynamic features of lifting with high inter-coder reliability and low sensitivity to absolute errors. Analyses of results from a laboratory experiment and from field-collected videotape are described that support the reliability, sensitivity, and accuracy claims of the VidLiTeC(TM) System. The VidLiTeC(TM) System allows technicians with minimal training and low-tech equipment (a camcorder) to collect large sets of lifting data without interfering with normal business operations. A reasonably accurate estimate of the peak compressive force on the L5/S1 joint can be made from the data collected. Such a system can be used to collect quantified data on lifting techniques that can be related to LBP reporting.

  6. Recent Hydrodynamics Improvements to the RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard.R. Schultz

    2009-07-01

    The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.

  7. Finite Element Code For 3D-Hydraulic Fracture Propagation Equations (3-layer).

    1992-03-24

    HYFRACP3D is a finite element program for simulation of pseudo-three-dimensional fracture geometries with a two-dimensional planar solution. The model predicts the height, width, and wing length over time for a hydraulic fracture propagating in a three-layered system of rocks with variable rock mechanics properties.

  8. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

    The numerical solution of the Linearized Boltzmann Equation (LBE) via the discrete ordinates (SN) method requires extensive computational resources for large 3-D neutron and gamma transport applications, due to the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. Compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.

  9. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  10. An investigation of dehazing effects on image and video coding.

    PubMed

    Gibson, Kristofor B; Võ, Dung T; Nguyen, Truong Q

    2012-02-01

    This paper investigates the effects of dehazing on image and video coding for surveillance systems. The goal is to achieve good dehazed images and videos at the receiver while sustaining low bitrates (using compression) in the transmission pipeline. First, this paper proposes a novel method for single-image dehazing, which is used for the investigation. It operates at a faster speed than current methods and avoids halo effects by using the median operation. We then consider the effects of dehazing on compression by investigating the coding artifacts and motion estimation when a dehazing method is applied before or after compression. We conclude that better dehazing performance, with fewer artifacts and better coding efficiency, is achieved when dehazing is applied before compression. Simulations on Joint Photographic Experts Group (JPEG) images, in addition to subjective and objective tests with H.264 compressed sequences, validate our conclusion. PMID:21896391

  11. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by the LCA. First, based on a projective transformation, a 3D error map of the LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by the LCA of the camera and projector. Then, in measurements, the LCA error values are calculated and compensated to correct the projector image coordinates through the 3D error map and a trilinear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated from the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments. PMID:27598174
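
    A minimal sketch of the compensation lookup described above: given a regular 3D error map sampled over the measurement volume, the LCA error at an arbitrary query point is obtained by trilinear interpolation within the enclosing cell. The regular-grid layout and scalar error values are simplifying assumptions for this example.

    ```python
    import numpy as np

    def trilinear_error(error_map, origin, spacing, point):
        """Trilinearly interpolate a regularly sampled 3D error map at an (x, y, z) point.

        error_map[i, j, k] holds the LCA error sampled at origin + (i, j, k) * spacing."""
        t = (np.asarray(point, dtype=float) - origin) / spacing
        i0 = np.floor(t).astype(int)
        f = t - i0                                   # fractional position inside the cell
        value = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    value += w * error_map[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        return value
    ```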

  12. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by the LCA. First, based on a projective transformation, a 3D error map of the LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by the LCA of the camera and projector. Then, in measurements, the LCA error values are calculated and compensated to correct the projector image coordinates through the 3D error map and a trilinear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated from the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments. PMID:27598174

  13. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in the development of a Computational Fluid Dynamics (CFD) code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional, time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not yet been able to compile the code with the present version of the HPF compilers.

  14. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and the resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produces first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and the neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked against the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, the TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of the beam and halo neutral densities and on the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the shape of the NPA energy spectrum. The TRANSP and FIDAsim simulations also suggest that the magnitudes of the beam and halo neutral densities are relatively sensitive to the choice of atomic physics database.

  15. European Pressurized water Reactor (EPR) SAR ATWS Accident Analyses by using 3D Code Internal Coupling Method

    SciTech Connect

    Gagner, Renata; Lafitte, Helene; Dormeau, Pascal; Stoudt, Roger H.

    2004-07-01

    Anticipated Transients Without Scram (ATWS) accident analyses are part of the Safety Analysis Report of the European Pressurized water Reactor (EPR), covering Risk Reduction Category A (RRC-A, core melt prevention) events. This paper deals with three of the most penalizing RRC-A ATWS sequences, caused by mechanical blockage of the control/shutdown rods, with regard to their consequences for the Reactor Coolant System (RCS) and core integrity. A new 3D code internal coupling calculation method has been introduced. (authors)

  16. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective computer-aided design tool for radiation transport code users in the nuclear field, in particular in core design and radiation analysis. (authors)

  17. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…
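
    For readers unfamiliar with the representation, a top-view numeric code records, for each cell of the top view, how many cubes are stacked in that column. The sketch below derives such a code from a 3-D occupancy array, which is only one convenient (assumed) way to model a cube structure.

    ```python
    import numpy as np

    # A 3-D cube structure as a boolean occupancy array indexed [row, column, height].
    structure = np.zeros((2, 3, 3), dtype=bool)
    structure[0, 0, :3] = True    # a column of three cubes
    structure[0, 1, :1] = True    # a single cube
    structure[1, 2, :2] = True    # a column of two cubes

    # Top-view numeric code: each top-view cell records how many cubes sit in that column.
    top_view_code = structure.sum(axis=2)
    print(top_view_code)
    # [[3 1 0]
    #  [0 0 2]]
    ```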

  18. Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields

    NASA Technical Reports Server (NTRS)

    Tannehill, J. C.; Wadawadigi, G.

    1992-01-01

    Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically reacting (M∞ = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and the

  19. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids, and control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, a numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  20. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    1998-06-12

    TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  1. Scalable hologram video coding for adaptive transmitting service.

    PubMed

    Seo, Young-Ho; Lee, Yoon-Hyuk; Yoo, Ji-Sang; Kim, Dong-Wook

    2013-01-01

    This paper discusses processing techniques for an adaptive digital holographic video service in various reconstruction environments, and proposes two new scalable coding schemes. The proposed schemes are constructed according to the hologram generation or acquisition scheme: hologram-based resolution-scalable coding (HRS) and light source-based signal-to-noise ratio scalable coding (LSS). HRS is applied to holograms that have already been acquired or generated, while LSS is applied to the light sources before the digital holograms are generated. In the LSS scheme, the light source information is losslessly coded because it is too important to lose, while the HRS scheme adopts a lossy coding method. In an experiment, we provide eight stages of the HRS scheme whose data compression ratios range from 1:1 to 100:1 for each data layer. For LSS, scalable coding schemes with four layers and 16 layers are provided. We show experimentally that the proposed techniques make it possible to serve a digital hologram video adaptively to various displays with different resolutions, computational capabilities on the receiver side, or network bandwidths.

  2. Users manual for CAFE-3D : a computational fluid dynamics fire code.

    SciTech Connect

    Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma

    2005-03-01

    The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.

  3. Fast wave current drive modeling using the combined RANT3D and PICES Codes

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; Murakami, M.; Stallings, D. C.; Carter, M. D.; Wang, C. Y.; Galambos, J. D.; Batchelor, D. B.; Baity, F. W.; Bell, G. L.; Wilgen, J. B.; Chiu, S. C.; DeGrassie, J. S.; Forest, C. B.; Kupfer, K.; Petty, C. C.; Pinsker, R. T.; Prater, R.; Lohr, J.; Lee, K. M.

    1996-02-01

    Two numerical codes are combined to give a theoretical estimate of the current drive and direct electron heating by fast waves launched from phased antenna arrays on the DIII-D tokamak. Results are compared with experiment.

  4. Version 3.0 of code Java for 3D simulation of the CCA model

    NASA Astrophysics Data System (ADS)

    Zhang, Kebo; Zuo, Junsen; Dou, Yifeng; Li, Chao; Xiong, Hailing

    2016-10-01

    In this paper we provide a new version of the program to replace the previous version. The frequency of traversing the clusters list was reduced, and some code blocks were optimized; in addition, we added and revised the source-code comments for some methods and attributes. The comparative experimental results show that the new version has better time efficiency than the previous version.

  5. A Watermarking Scheme for High Efficiency Video Coding (HEVC)

    PubMed Central

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high-payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has high potential for use in applications involving broadcast and the hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate. PMID:25144455
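
    A hedged sketch of the general embedding idea (not the paper's exact algorithm): hide message bits in the least-significant bits of selected quantized transform coefficients during encoding and read them back at the decoder. Restricting embedding to coefficients with magnitude greater than one is an illustrative choice that keeps extraction in sync.

    ```python
    def embed_bits(qtcs, bits):
        """Write message bits into the LSBs of quantized coefficients with |c| > 1."""
        out, it = [], iter(bits)
        for c in qtcs:
            if abs(c) > 1:
                b = next(it, None)
                if b is not None:
                    sign = 1 if c > 0 else -1
                    c = sign * ((abs(c) & ~1) | b)      # overwrite the magnitude's LSB
            out.append(c)
        return out

    def extract_bits(qtcs, n):
        """Read back the first n embedded bits from the coefficient stream."""
        return [abs(c) & 1 for c in qtcs if abs(c) > 1][:n]

    coeffs = [7, 0, -3, 1, 4, 0, -6]
    marked = embed_bits(coeffs, [1, 0, 1, 1])
    assert extract_bits(marked, 4) == [1, 0, 1, 1]
    ```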

  6. A watermarking scheme for High Efficiency Video Coding (HEVC).

    PubMed

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high-payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has high potential for use in applications involving broadcast and the hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate.

  7. Unsteady Analysis of Inlet-Compressor Acoustic Interactions Using Coupled 3-D and 1-D CFD Codes

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Cole, G. L.

    2000-01-01

    It is well known that the dynamic response of a mixed compression supersonic inlet is very sensitive to the boundary condition imposed at the subsonic exit (engine face) of the inlet. In previous work, a 3-D computational fluid dynamics (CFD) inlet code (NPARC) was coupled at the engine face to a 3-D turbomachinery code (ADPAC) simulating an isolated rotor, and the coupled simulation was used to study the unsteady response of the inlet. The main problem with this approach is that the high-fidelity turbomachinery simulation becomes prohibitively expensive as more stages are included in the simulation. In this paper, an alternative approach is explored, wherein the inlet code is coupled to a lower fidelity 1-D transient compressor code (DYNTECC) which simulates the whole compressor. The specific application chosen for this evaluation is the collapsing bump experiment performed at the University of Cincinnati, wherein reflections of a large-amplitude acoustic pulse from a compressor were measured. The metrics for comparison are the pulse strength (time integral of the pulse amplitude) and waveform (shape). When the compressor is modeled by stage characteristics, the computed strength is about ten percent greater than that for the experiment, but the wave shapes are in poor agreement. An alternate approach that uses a fixed rise in duct total pressure and temperature (a so-called 'lossy' duct) to simulate a compressor gives good pulse shapes, but the strength is about 30 percent low.

  8. Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER

    NASA Astrophysics Data System (ADS)

    Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena

    2015-11-01

    Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LFSR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by US DOE Contract No. DE-AC02-09CH11466 and DE-FG02-99-ER54527.
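
    A minimal 1-D vacuum FDTD example (normalized units) showing the staggered leapfrog update that codes of this kind generalize to 3-D magnetized plasma; the grid size, Courant number and soft Gaussian source are arbitrary illustrative choices, and none of this is the GPU code itself.

    ```python
    import numpy as np

    # Minimal 1-D FDTD in vacuum, normalized units, Courant number S = c*dt/dx = 0.5.
    nx, nt, S = 400, 800, 0.5
    ez = np.zeros(nx)           # E_z at integer grid points
    hy = np.zeros(nx - 1)       # H_y at half-integer points of the staggered Yee grid

    for n in range(nt):
        hy += S * (ez[1:] - ez[:-1])                  # update H from the curl of E
        ez[1:-1] += S * (hy[1:] - hy[:-1])            # update E from the curl of H
        ez[50] += np.exp(-((n - 60) / 20.0) ** 2)     # soft Gaussian source
    print("peak |Ez| after", nt, "steps:", float(np.abs(ez).max()))
    ```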

  9. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT

  10. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 1; Analysis and Results

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This report provides a study of rotor and stator scattering using the SOURCE3D Rotor Wake/Stator Interaction Code. SOURCE3D is a quasi-three-dimensional computer program that uses three-dimensional acoustics and two-dimensional cascade load response theory to calculate rotor and stator modal reflection and transmission (scattering) coefficients. SOURCE3D is at the core of the TFaNS (Theoretical Fan Noise Design/Prediction System), developed for NASA, which provides complete fully coupled (inlet, rotor, stator, exit) noise solutions for turbofan engines. The reason for studying scattering is that we must first understand the behavior of the individual scattering coefficients provided by SOURCE3D, before eventually understanding the more complicated predictions from TFaNS. To study scattering, we have derived a large number of scattering curves for vane and blade rows. The curves are plots of output wave power divided by input wave power (in dB units) versus vane/blade ratio. Some of these plots are shown in this report. All of the plots are provided in a separate volume. To assist in understanding the plots, formulas have been derived for special vane/blade ratios for which wavefronts are either parallel or normal to rotor or stator chords. From the plots, we have found that, for the most part, there was strong transmission and weak reflection over most of the vane/blade ratio range for the stator. For the rotor, there was little transmission loss.

  11. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  12. Numerical simulation of jet aerodynamics using the three-dimensional Navier-Stokes code PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Abdol-Hamid, Khaled S.

    1996-01-01

    This report presents a unified method for subsonic and supersonic jet analysis using the three-dimensional Navier-Stokes code PAB3D. The Navier-Stokes code was used to obtain solutions for axisymmetric jets with on-design operating conditions at Mach numbers ranging from 0.6 to 3.0, supersonic jets containing weak shocks and Mach disks, and supersonic jets with nonaxisymmetric nozzle exit geometries. This report discusses computational methods, code implementation, computed results, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions. The Navier-Stokes method using the standard Jones-Launder two-equation kappa-epsilon turbulence model can accurately predict jet flow, and such predictions are made without any modification to the published constants for the turbulence model.
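
    For orientation, a commonly quoted high-Reynolds-number form of the standard two-equation k-epsilon model referred to above is sketched below; the Jones-Launder low-Reynolds-number version adds wall damping functions and extra terms, and the exact form and constants used in PAB3D should be taken from its documentation rather than from this sketch.

```latex
\begin{aligned}
\frac{\partial (\rho k)}{\partial t} + \frac{\partial (\rho k u_j)}{\partial x_j}
 &= \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
    + P_k - \rho\,\varepsilon ,\\
\frac{\partial (\rho \varepsilon)}{\partial t} + \frac{\partial (\rho \varepsilon u_j)}{\partial x_j}
 &= \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
    + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k
    - C_{\varepsilon 2}\,\rho\,\frac{\varepsilon^2}{k},\\
\mu_t &= \rho\, C_\mu\, \frac{k^2}{\varepsilon},
\end{aligned}
```

    with the widely published constants C_mu = 0.09, C_eps1 = 1.44, C_eps2 = 1.92, sigma_k = 1.0, sigma_eps = 1.3.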

  13. Parallel 3-D Electromagnetic Particle Code Using High Performance FORTRAN: Parallel TRISTAN

    NASA Astrophysics Data System (ADS)

    Cai, D.; Li, Y.; Nishikawa, K.-I.; et al.

    A three-dimensional full electromagnetic particle-in-cell (PIC) code, TRISTAN (Tridimensional Stanford) code, has been parallelized using High Performance Fortran (HPF) as a RPM (Real Parallel Machine). In the parallelized HPF code, the simulation domain is decomposed in one dimension, and both the particle and field data located in each domain, which we call a sub-domain, are distributed on each processor. Both the particle and field data of a sub-domain are needed by the neighboring sub-domains, and thus communication between sub-domains is inevitable. Our simulation results using HPF exhibit the promising applicability of HPF communications to large-scale scientific computing such as solar wind-magnetosphere interactions.
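
    As a schematic illustration of the one-dimensional domain decomposition described above (not the actual HPF/TRISTAN code), the following toy sketch splits a 1D field among hypothetical "processors" and exchanges ghost cells between neighbouring sub-domains before a stencil update:

```python
# Toy illustration of 1D domain decomposition with ghost-cell exchange.
# Sub-domains are emulated serially; a real PIC code would use HPF/MPI.
import numpy as np

nproc, n_local, nghost = 4, 32, 1
# each sub-domain stores its interior cells plus one ghost cell on each side
subs = [np.zeros(n_local + 2 * nghost) for _ in range(nproc)]
x = np.linspace(0.0, 2 * np.pi, nproc * n_local, endpoint=False)
for p in range(nproc):
    subs[p][nghost:-nghost] = np.sin(x[p * n_local:(p + 1) * n_local])

def exchange_ghosts(subs):
    """Copy boundary interior cells into the neighbours' ghost cells (periodic)."""
    n = len(subs)
    for p in range(n):
        left, right = subs[(p - 1) % n], subs[(p + 1) % n]
        subs[p][0] = left[-2 * nghost]     # last interior cell of left neighbour
        subs[p][-1] = right[nghost]        # first interior cell of right neighbour

exchange_ghosts(subs)
# simple 3-point smoothing stencil applied to interior cells only
for p in range(nproc):
    s = subs[p]
    s[nghost:-nghost] = 0.25 * s[:-2] + 0.5 * s[1:-1] + 0.25 * s[2:]
```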

  14. A 3D-PNS computer code for the calculation of supersonic combusting flows

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit; Northam, G. Burton

    1988-01-01

    A computer code has been developed based on the three-dimensional parabolized Navier-Stokes (PNS) equations which govern the supersonic combusting flow of the hydrogen-air system. The finite difference algorithm employed was a hybrid of the Schiff-Steger algorithm and the Vigneron, et al., algorithm which is fully implicit and fully coupled. The combustion of hydrogen and air was modeled by the finite-rate two-step combustion model of Rogers-Chinitz. A new dependent variable vector was introduced to simplify the numerical algorithm. Robustness of the algorithm was considerably enhanced by introducing an adjustable parameter. The computer code was used to solve a premixed shock-induced combustion problem and the results were compared with those of a full Navier-Stokes code. Reasonably good agreement was obtained at a fraction of the cost of the full Navier-Stokes procedure.

  15. 3-D kinetics simulations of the NRU reactor using the DONJON code

    SciTech Connect

    Leung, T. C.; Atfield, M. D.; Koclas, J.

    2006-07-01

    The NRU reactor is highly heterogeneous, heavy-water cooled and moderated, with online refuelling capability. It is licensed to operate at a maximum power of 135 MW, with a peak thermal flux of approximately 4.0 x 10^18 n.m^-2.s^-1. In support of the safe operation of NRU, three-dimensional kinetics calculations for reactor transients have been performed using the DONJON code. The code was initially designed to perform space-time kinetics calculations for the CANDU(R) power reactors. This paper describes how the DONJON code can be applied to perform neutronic simulations for the analysis of reactor transients in NRU, and presents calculation results for some transients. (authors)

  16. Solar wind-magnetosphere interaction as simulated by a 3D, EM particle code

    NASA Technical Reports Server (NTRS)

    Buneman, O.; Nishikawa, Ken-Ichi; Neubert, T.

    1993-01-01

    The results of simulating the solar wind-magnetosphere interaction with a three dimensional, electromagnetic (EM) particle code are presented. Hitherto such global simulations were done with magnetohydrodynamic (MHD) codes while lower dimensional particle or hybrid codes served to account for microscopic processes and such transport parameters as have to be introduced ad hoc in MHD. The kinetic model combines macroscopic and microscopic tasks. It relies only on the Maxwell curl equations and the Lorentz equation for particles. The preliminary results are for an unmagnetized solar wind plasma streaming past a dipolar magnetic field. The results show the formation of a bow shock and a magnetotail, the penetration of energetic particles into cusp and radiation belt regions, and dawn to dusk asymmetries.

  17. Solar wind-magnetosphere interaction as simulated by a 3-D EM particle code

    NASA Technical Reports Server (NTRS)

    Buneman, Oscar; Neubert, Torsten; Nishikawa, Ken-Ichi

    1992-01-01

    We present here our first results of simulating the solar wind-magnetosphere interaction with a new three-dimensional electromagnetic particle code. Hitherto such global simulations were done with MHD codes while lower-dimensional particle or hybrid codes served to account for microscopic processes and such transport parameters as have to be introduced ad hoc in MHD. Our kinetic model attempts to combine the macroscopic and microscopic tasks. It relies only on the Maxwell curl equation and the Lorentz equation for particles, which are ideally suited for computers. The preliminary results shown here are for an unmagnetized solar wind plasma streaming past a dipolar magnetic field. The results show the formation of a bow shock and a magnetotail, the penetration of energetic particles into cusp and radiation belt regions, and dawn-dusk asymmetries.
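
    The two abstracts above emphasize that the kinetic model relies only on the Maxwell curl equations and the Lorentz force on particles. A standard building block of such electromagnetic particle codes is the Boris rotation for advancing particle velocities in given E and B fields; a minimal non-relativistic sketch (illustrative only, not the code described above) is:

```python
# Minimal non-relativistic Boris particle pusher (one velocity update).
# Field gathering, current deposition, and the Maxwell field solve of a
# full EM particle code are omitted.
import numpy as np

def boris_push(v, E, B, q, m, dt):
    """Advance velocity v by one step dt in fields E, B (all 3-vectors)."""
    qmdt2 = 0.5 * q * dt / m
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + qmdt2 * E                # second half electric kick

# example: electron gyrating in a uniform magnetic field
v = np.array([1.0e5, 0.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0e-8])
for _ in range(100):
    v = boris_push(v, E, B, q=-1.602e-19, m=9.109e-31, dt=1.0e-4)
print("speed is conserved:", np.linalg.norm(v))
```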

  18. Code System for 2-Group, 3D Neutronic Kinetics Calculations Coupled to Core Thermal Hydraulics.

    2000-05-12

    Version 00 QUARK is a combined computer program comprising a revised version of the QUANDRY three-dimensional, two-group neutron kinetics code and an upgraded version of the COBRA transient core analysis code (COBRA-EN). Starting from either a critical steady-state (k-effective or critical dilute Boron problem) or a subcritical steady-state (fixed source problem) in a PWR plant, the code allows one to simulate the neutronic and thermal-hydraulic core transient response to reactivity accidents initiated both inside the vessel (such as a control rod ejection) and outside the vessel (such as the sudden change of the Boron concentration in the coolant). QUARK output can be used as input to PSR-470/NORMA-FP to perform a subchannel analysis from converged coarse-mesh nodal solutions.

  19. Far field 3D localization of radioactive hot spots using a coded aperture camera.

    PubMed

    Shifeng, Sun; Zhiming, Zhang; Lei, Shuai; Daowu, Li; Yingjie, Wang; Yantao, Liu; Xianchao, Huang; Haohui, Tang; Ting, Li; Pei, Chai; Yiwen, Zhang; Wei, Zhou; Mingjie, Yang; Cunfeng, Wei; Chuangxin, Ma; Long, Wei

    2016-01-01

    This paper presents a coded aperture method to remotely estimate the radioactivity of a source. The activity is estimated from the detected counts and the estimated source location, which is extracted by factoring the effect of aperture magnification. A 6mm thick tungsten-copper alloy coded aperture mask is used to modulate the incoming gamma-rays. The location of point and line sources in all three dimensions was estimated with an accuracy of less than 10% when the source-camera distance was about 4 m. The estimated activities were 17.6% smaller and 50.4% larger than the actual activities for the point and line sources, respectively.
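
    As a hedged illustration of the kind of estimate described (detected counts combined with a reconstructed source distance), the toy calculation below converts a count rate into an activity via an inverse-square solid-angle factor and an assumed detection efficiency; all numbers, the efficiency, and the branching ratio are hypothetical, not the calibration of the camera in the paper.

```python
# Toy activity estimate from counts and an estimated source distance.
# Hypothetical numbers; the actual coded-aperture system calibration differs.
import math

counts = 12000.0            # detected counts in the measurement
t_meas = 60.0               # measurement time [s]
distance = 4.0              # estimated source-camera distance [m]
det_area = 0.01             # detector area [m^2] (assumed)
efficiency = 0.3            # intrinsic detection efficiency (assumed)
branching = 0.85            # gamma emission probability per decay (assumed)

solid_angle_fraction = det_area / (4.0 * math.pi * distance ** 2)
activity_bq = counts / (t_meas * efficiency * branching * solid_angle_fraction)
print(f"estimated activity: {activity_bq:.3e} Bq")
```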

  20. Benchmarking of 3D space charge codes using direct phase space measurements from photoemission high voltage dc gun

    NASA Astrophysics Data System (ADS)

    Bazarov, Ivan V.; Dunham, Bruce M.; Gulliford, Colwyn; Li, Yulin; Liu, Xianghong; Sinclair, Charles K.; Soong, Ken; Hannon, Fay

    2008-10-01

    We present a comparison between space charge calculations and direct measurements of the transverse phase space of space charge dominated electron bunches from a high voltage dc photoemission gun followed by an emittance compensation solenoid magnet. The measurements were performed using a double-slit emittance measurement system over a range of bunch charge and solenoid current values. The data are compared with detailed simulations using the 3D space charge codes GPT and Parmela3D. The initial particle distributions were generated from measured transverse and temporal laser beam profiles at the photocathode. The beam brightness as a function of beam fraction is calculated for the measured phase space maps and found to approach within a factor of 2 the theoretical maximum set by the thermal energy and the accelerating field at the photocathode.

  1. A Methodology to Validate 3-D Arbitrary Lagrangian Eulerian Codes with Applications to Alegra

    SciTech Connect

    Chhabildas, L.C.; Duggins, B.D.; Konrad, C.H.; Mosher, D.A.; Perry, J.S.; Reinhart, W.D.; Summers, R.M.; Trucano, T.G.

    1998-11-04

    In this study we provided an experimental test bed for validating features of the Arbitrary Lagrangian Eulerian Grid for Research Applications (ALEGRA) code over a broad range of strain rates with overlapping diagnostics that encompass the multiple responses. A unique feature of the ALEGRA code is that it allows simultaneous computational treatment, within one code, of a wide range of strain rates varying from hydrodynamic to structural conditions. This range encompasses strain rates characteristic of shock-wave propagation (10^7/s) and those characteristic of structural response (10^2/s). Most previous code validation experimental studies, however, have been restricted to simulating or investigating a single strain-rate regime. What is new and different in this investigation is that we have performed well-controlled and well-instrumented experiments, which capture features relevant to both hydrodynamic and structural response in a single experiment. Aluminum was chosen for use in this study because it is a well-characterized material. The current experiments span strain-rate regimes of over 10^7/s to less than 10^2/s in a single experiment. The input conditions were extremely well defined. Velocity interferometers were used to record the high strain-rate response, while low strain-rate data were collected using strain gauges. Although the current tests were conducted at a nominal velocity of ~1.5 km/s, it is the test methodology that is being emphasized herein. Results of a three-dimensional experiment are also presented.

  2. Measuring Video Quality on Full Scalability of H.264/AVC Scalable Video Coding

    NASA Astrophysics Data System (ADS)

    Kim, Cheon Seog; Jin, Sung Ho; Seo, Doug Jun; Ro, Yong Man

    In heterogeneous network environments, it is mandatory to measure the grade of the video quality in order to guarantee the optimal quality of the video streaming service. Quality of Service (QoS) has become a key issue for service acceptability and user satisfaction. Although there have been many recent works regarding video quality, most of them have been limited to measuring quality within temporal and Signal-to-Noise Ratio (SNR) scalability. H.264/AVC Scalable Video Coding (SVC) has emerged and has been developed to support full scalability. This includes spatial, temporal, and SNR scalability, each of which shows different visual effects. The aim of this paper is to define and develop a novel video quality metric allowing full scalability. It focuses on the effect of frame rate, SNR, the change of spatial resolution, and motion characteristics using subjective quality assessment. Experimental results show the proposed quality metric has a high correlation to subjective quality and that it is useful in determining the video quality of SVC.

  3. Drug-laden 3D biodegradable label using QR code for anti-counterfeiting of drugs.

    PubMed

    Fei, Jie; Liu, Ran

    2016-06-01

    Wiping out counterfeit drugs is a great task for public health care around the world. The proliferation of these drugs makes treatment potentially harmful or even lethal. In this paper, a biodegradable drug-laden QR code label for the anti-counterfeiting of drugs is proposed that provides non-fluorescence recognition and high capacity. It is fabricated by laser cutting, which produces different roughness over the surface and hence different gray levels on the translucent material that form the QR code pattern, and by a micro-mold process to obtain the drug-laden biodegradable label. We screened biomaterials meeting the relevant conditions and the further requirements of the package. The drug-laden microlabel is placed on the surface of the troche or the bottom of the capsule and can be read by a simple smartphone QR code reader application. Labeling the pill directly and decoding the information successfully mean a more convenient and simpler operation, with non-fluorescence recognition and high capacity, in contrast to traditional methods. PMID:27040262

  4. Introduction and guide to LLNL's relativistic 3-D nuclear hydrodynamics code

    SciTech Connect

    Zingman, J.A.; McAbee, T.L.; Alonso, C.T.; Wilson, J.R.

    1987-11-01

    We have constructed a relativistic hydrodynamic model to investigate Bevalac and higher energy, heavy-ion collisions. The basis of the model is a finite-difference solution to covariant hydrodynamics, which will be described in the rest of this paper. This paper also contains: a brief review of the equations and numerical methods we have employed in the solution to the hydrodynamic equations, a detailed description of several of the most important subroutines, and a numerical test on the code. 30 refs., 8 figs., 1 tab.

  5. Complexity control for high-efficiency video coding by coding layers complexity allocations

    NASA Astrophysics Data System (ADS)

    Fang, Jiunn-Tsair; Liang, Kai-Wen; Chen, Zong-Yi; Hsieh, Wei; Chang, Pao-Chi

    2016-03-01

    The latest video compression standard, high-efficiency video coding (HEVC), provides quad-tree structures of coding units (CUs) and four coding tree depths to facilitate coding efficiency. The HEVC encoder considerably increases the computational complexity to levels inappropriate for video applications of power-constrained devices. This work, therefore, proposes a complexity control method for the low-delay P-frame configuration of the HEVC encoder. The complexity control mechanism is among the group of pictures layer, frame layer, and CU layer, and each coding layer provides a distinct method for complexity allocation. Furthermore, the steps in the prediction unit encoding procedure are reordered. By allocating the complexity to each coding layer of HEVC, the proposed method can simultaneously satisfy the entire complexity constraint (ECC) for entire sequence encoding and the instant complexity constraint (ICC) for each frame during real-time encoding. Experimental results showed that as the target complexity under both the ECC and ICC was reduced to 80% and 60%, respectively, the decrease in the average Bjøntegaard delta peak signal-to-noise ratio was ~0.1 dB with an increase of 1.9% in the Bjøntegaard delta rate, and the complexity control error was ~4.3% under the ECC and 4.3% under the ICC.
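
    As a schematic of layered complexity allocation only (not the authors' method), the sketch below distributes an overall complexity budget over groups of pictures, then over frames, and caps the per-frame budget to respect an instantaneous constraint; the budget numbers are hypothetical.

```python
# Schematic complexity-budget allocation across GOP / frame layers,
# with an overall (ECC-like) budget and a per-frame (ICC-like) cap.
# Illustrative bookkeeping only; not the algorithm of the paper.
def allocate(total_budget, n_gops, frames_per_gop, per_frame_cap):
    plan = []
    remaining = total_budget
    frames_left = n_gops * frames_per_gop
    for g in range(n_gops):
        # give each remaining GOP an equal share of what is left
        gop_budget = remaining * frames_per_gop / frames_left
        for f in range(frames_per_gop):
            frame_budget = min(gop_budget / frames_per_gop, per_frame_cap)
            plan.append((g, f, frame_budget))
            remaining -= frame_budget
        frames_left -= frames_per_gop
    return plan, remaining

plan, leftover = allocate(total_budget=800.0, n_gops=2,
                          frames_per_gop=4, per_frame_cap=90.0)
for g, f, b in plan:
    print(f"GOP {g} frame {f}: budget {b:.1f}")
print("unspent budget:", leftover)
```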

  6. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders like EZW, SPIHT, JPEG2000 etc. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode where every frame is coded at the same bit-rate and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000 while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as High Quality Digital Video Recording System, Internet Video, Medical Imaging etc.

  7. CFD Code Calibration and Inlet-Fairing Effects On a 3D Hypersonic Powered-Simulation Model

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    A three-dimensional (3D) computational study has been performed addressing issues related to the wind tunnel testing of a hypersonic powered-simulation model. The study consisted of three objectives. The first objective was to calibrate a state-of-the-art computational fluid dynamics (CFD) code in its ability to predict hypersonic powered-simulation flows by comparing CFD solutions with experimental surface pressure data. Aftbody lower surface pressures were well predicted, but lower surface wing pressures were less accurately predicted. The second objective was to determine the 3D effects on the aftbody created by fairing over the inlet; this was accomplished by comparing the CFD solutions of two closed-inlet powered configurations with a flowing-inlet powered configuration. Although results at four freestream Mach numbers indicate that the exhaust plume tends to isolate the aftbody surface from most forebody flow-field differences, a smooth inlet fairing provides the least aftbody force and moment variation compared to a flowing inlet. The final objective was to predict and understand the 3D characteristics of exhaust plume development at selected points on a representative flight path. Results showed a dramatic effect of plume expansion onto the wings as the freestream Mach number and corresponding nozzle pressure ratio are increased.

  8. Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery

    2015-01-01

    In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-omega turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with data were achieved.

  9. A Dynamic 3D Graphical Representation for RNA Structure Analysis and Its Application in Non-Coding RNA Classification

    PubMed Central

    Dong, Xiaoqing; Fang, Yiliang; Wang, Kejing; Zhu, Lijuan; Wang, Ke; Huang, Tao

    2016-01-01

    With the development of new technologies in transcriptome and epigenetics, RNAs have been identified to play more and more important roles in life processes. Consequently, various methods have been proposed to assess the biological functions of RNAs and thus classify them functionally, among which comparative study of RNA structures is perhaps the most important one. To measure the structural similarity of RNAs and classify them, we propose a novel three dimensional (3D) graphical representation of RNA secondary structure, in which an RNA secondary structure is first transformed into a characteristic sequence based on chemical property of nucleic acids; a dynamic 3D graph is then constructed for the characteristic sequence; and lastly a numerical characterization of the 3D graph is used to represent the RNA secondary structure. We tested our algorithm on three datasets: (1) Dataset I consisting of nine RNA secondary structures of viruses, (2) Dataset II consisting of complex RNA secondary structures including pseudo-knots, and (3) Dataset III consisting of 18 non-coding RNA families. We also compare our method with other nine existing methods using Dataset II and III. The results demonstrate that our method is better than other methods in similarity measurement and classification of RNA secondary structures. PMID:27213271
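
    As a generic illustration of the idea of turning a characteristic sequence into a 3D curve and then into a single numerical descriptor (the specific characteristic sequence, mapping, and descriptor in the paper differ), one can assign each nucleotide a unit step based on its chemical class, accumulate a 3D walk, and summarize the walk by the leading eigenvalue of a distance-ratio matrix:

```python
# Illustrative 3D graphical representation of an RNA sequence: map each base
# to a unit step, accumulate a 3D walk, and summarize it by the leading
# eigenvalue of the quotient matrix of Euclidean to path distances. This only
# mimics the general "graphical representation + numerical characterization"
# idea; the paper's own construction is different.
import numpy as np

STEP = {  # hypothetical unit-step assignment per base
    "A": ( 1,  1, -1), "G": ( 1, -1,  1),
    "C": (-1,  1,  1), "U": (-1, -1, -1),
}

def walk(seq):
    steps = np.array([STEP[b] for b in seq], dtype=float)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def leading_eigenvalue_descriptor(points):
    n = len(points)
    idx = np.arange(n)
    path = np.abs(idx[:, None] - idx[None, :]).astype(float)   # steps apart
    eucl = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.where(path > 0, eucl / path, 0.0)
    return np.max(np.linalg.eigvalsh(ratio)) / n               # normalized

for seq in ["GCGGAUUUAGCUC", "AAAAUUUUGGGGCCCC"]:
    lam = leading_eigenvalue_descriptor(walk(seq))
    print(seq, "->", round(lam, 4))
```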

  10. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    SciTech Connect

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" k-epsilon transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  11. Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code

    SciTech Connect

    Blaise, P.; Colomba, A.

    2012-07-01

    The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5 fuel pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble in order to understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially from the core mid-plane. The C/E comparison obtained with the Monte Carlo code Tripoli4 for both the radial and axial fission rate distributions, and in particular the reproduction of the very important flux gradients at the void/water interfaces, which change as the bubble is displaced along the z-axis, is very satisfactory. It demonstrates both the capability of the code and its library to reproduce this kind of situation, as well as the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V&V program for the future APOLLO3 deterministic code of CEA starting in 2012, and its V&V benchmarking database. (authors)

  12. Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber

    NASA Astrophysics Data System (ADS)

    Yuen, A.; Bombardelli, F. A.

    2014-12-01

    Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the top wall in the top-wall-driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted: results were compared with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection. Good agreement was found when compared with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was proved through the above code verification steps, the model was utilized to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
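
    The mesh-convergence study and Grid Convergence Index (GCI) mentioned above follow the standard Richardson-extrapolation procedure. A short sketch of that calculation, using placeholder solution values on three meshes with a constant refinement ratio, is given below; the numbers are not results from this study.

```python
# Observed order of convergence and Grid Convergence Index (GCI) from
# solutions on three systematically refined meshes (Richardson-extrapolation
# procedure). The solution values below are placeholders only.
import math

f_fine, f_med, f_coarse = 1.0210, 1.0305, 1.0680  # e.g. peak bed shear stress
r = 2.0                                           # constant refinement ratio
Fs = 1.25                                         # safety factor (3-grid study)

p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)  # observed order
rel_err = abs((f_med - f_fine) / f_fine)
gci_fine = Fs * rel_err / (r ** p - 1.0)

print(f"observed order of convergence p = {p:.2f}")
print(f"GCI on the fine mesh = {100 * gci_fine:.2f}%")
```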

  13. DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks

    NASA Astrophysics Data System (ADS)

    Duffell, Paul C.

    2016-09-01

    This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.

  14. Development of Scientific Simulation 3D Full Wave ICRF Code for Stellarators and Heating/CD Scenarios Development

    SciTech Connect

    Vdovin V.L.

    2005-08-15

    In this report we describe the theory and the 3D full wave code for wave excitation, propagation and absorption in 3-dimensional (3D) stellarator equilibrium high beta plasma in the ion cyclotron frequency range (ICRF). This theory forms a basis for the creation of a 3D code, urgently needed for the development of ICRF heating scenarios for the operating LHD, the W7-X and NCSX under construction, and the projected CSX3 stellarators, as well as for the re-evaluation of ICRF scenarios in operating tokamaks and in ITER. The theory solves the 3D Maxwell-Vlasov antenna-plasma-conducting shell boundary value problem in the non-orthogonal flux coordinates (Psi, theta, phi), Psi being the magnetic flux function, theta and phi being the poloidal and toroidal angles, respectively. All basic physics, like wave refraction, reflection and diffraction, are self-consistently included, along with the fundamental ion and ion minority cyclotron resonances, the two-ion hybrid resonance, electron Landau and TTMP absorption. Antenna reactive impedance and loading resistance are also calculated, and are urgently needed for antenna-generator matching. This is accomplished in a real confining magnetic field varying in the plasma major radius direction and in the toroidal and poloidal directions, by making use of the wave-induced currents of the hot dense plasma with account taken of finite Larmor radius effects. We expand the solution in Fourier series over the toroidal (phi) and poloidal (theta) angles and solve the resulting ordinary differential equations in a radial-like Psi coordinate by a finite difference method. The constructed discretization scheme is a divergence-free one, thus retaining the basic properties of the original equations. The Fourier expansion over the angle coordinates has given us the possibility to correctly construct the "parallel" wave number k_parallel, and thereby to correctly describe the ICRF wave absorption by a hot plasma. The toroidal harmonics are tightly coupled with each

  15. Efficient wedgelet pattern decision for depth modeling modes in three-dimensional high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-Bin; Fu, Chang-Hong; Chan, Yui-Lam; Tsang, Sik-Ho; Siu, Wan-Chi; Su, Wei-Min

    2016-05-01

    The three-dimensional (3-D) video extension of high-efficiency video coding is an emerging coding standard for multiple-view-plus-depth that allows view synthesis for multiple displays with depth information. In order to avoid mixing between the foreground and background, the depth discontinuities defined at the object boundary should be retained. To solve this issue, a depth intramode, i.e., the depth-modeling mode (DMM), is introduced in 3-D high-efficiency video coding as an edge predictor. The test model HTM 8.1 includes DMM1 and DMM3. However, the mode-decision strategy of DMM increases the complexity drastically. Therefore, we propose a fast DMM1 decision algorithm that estimates sharp edges by a subregional search method. The optimal wedgelet pattern of DMM1 is then searched only in the most probable region. Additionally, another fast method is proposed to skip DMM3 when a mismatch occurs between the depth prediction unit (PU) and its colocated texture PU. Simulation results show that the proposed algorithm has slightly better performance in terms of complexity reduction compared with the wedgelet-pattern-reducing algorithm from the literature while better maintaining the coding performance. In addition, the proposed algorithm has a performance similar to that of an existing DMM-skipping algorithm. Moreover, it could be integrated with that category of algorithms for additional time savings.

  16. Parametric Analysis of a Turbine Trip Event in a BWR Using a 3D Nodal Code

    SciTech Connect

    Gorzel, A.

    2006-07-01

    Two essential thermal hydraulics safety criteria concerning the reactor core are that even during operational transients there is no fuel melting and impermissible cladding temperatures are avoided. A common concept for boiling water reactors is to establish a minimum critical power ratio (MCPR) for steady-state operation. For this MCPR it is shown that only a very small number of fuel rods suffers a short-term dryout during the transient. It is known from experience that the limiting transient for the determination of the MCPR is the turbine trip with a blocked bypass system. This fast transient was simulated for a German BWR by use of the three-dimensional reactor analysis transient code SIMULATE-3K. The transient behaviour of the hot channels was used as input for the dryout calculation with the transient thermal hydraulics code FRANCESCA. In this way the maximum reduction of the CPR during the transient could be calculated. The fast increase in reactor power due to the pressure increase and to an increased core inlet flow is limited mainly by the Doppler effect, but automatically triggered operational measures can also contribute to the mitigation of the turbine trip. One very important measure is the short-term fast reduction of the recirculation pump speed, which is initiated, e.g., by a pressure increase in front of the turbine. The large impacts of the starting time and of the rate of the pump speed reduction on the power progression, and hence on the deterioration of the CPR, are presented. Another important procedure to limit the effects of the transient is the fast shutdown of the reactor, which is triggered when the reactor power reaches the limit value. It is shown that the SCRAM is not fast enough to reduce the first power maximum, but is able to prevent the appearance of a second - much smaller - maximum that would occur around one second after the first one in the absence of a SCRAM. (author)

  17. 3D relaxation MHD modeling with FOI-PERFECT code for electromagnetically driven HED systems

    NASA Astrophysics Data System (ADS)

    Wang, Ganghua; Duan, Shuchao; Xie, Weiping; Kan, Mingxian; Institute of Fluid Physics Collaboration

    2015-11-01

    One of the challenges in numerical simulations of electromagnetically driven high energy density (HED) systems is the existence of a vacuum region. The electromagnetic part of the conventional model adopts the magnetic diffusion approximation (magnetic induction model), and the vacuum region is approximated by artificially increasing the resistivity. On the one hand the phase/group velocity is superluminal and hence non-physical in the vacuum region; on the other hand a diffusion equation with a large diffusion coefficient can only be solved by an implicit scheme, and implicit methods are usually difficult to parallelize and to converge. A better alternative is to solve the full electromagnetic equations for the electromagnetic part. Maxwell's equations coupled with the constitutive equation, the generalized Ohm's law, constitute a relaxation model. The dispersion relation is given to show its transition from electromagnetic propagation in vacuum to resistive MHD in plasma in a natural way; the phase and group velocities are finite for this system. An improved time stepping is adopted to give full third-order convergence in the time domain without the stiff relaxation-term restriction, so the scheme is convenient for explicit and parallel computations. Some numerical results of the FOI-PERFECT code are also given. Project supported by the National Natural Science Foundation of China (Grant No. 11172277, 11205145).
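
    A compact way to write a relaxation model of the kind described above (Maxwell's curl equations plus a resistive generalized Ohm's law; the actual FOI-PERFECT formulation may contain additional terms) is:

```latex
\begin{aligned}
\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} &= \frac{1}{\mu_0}\nabla\times\mathbf{B} - \mathbf{J},\\
\frac{\partial \mathbf{B}}{\partial t} &= -\nabla\times\mathbf{E},\\
\mathbf{J} &= \sigma\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right).
\end{aligned}
```

    In vacuum (sigma -> 0) this system supports electromagnetic waves at the finite speed c, while for large sigma the displacement current becomes negligible and the system relaxes to the resistive-MHD magnetic diffusion limit, which is the transition the dispersion relation in the paper describes.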

  18. Dependent video coding using a tree representation of pixel dependencies

    NASA Astrophysics Data System (ADS)

    Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano

    2011-09-01

    Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, thus with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large descendance of pixels which are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented into a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due to both the presence of complex encoder features not captured by the model, and to the complexity of the source to be encoded.
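
    To make the pixel-dependency tree concrete, the toy sketch below builds a parent pointer for every pixel of an IPPP group of pictures from given (here synthetic) motion vectors and counts each pixel's descendants; pixels with a large descendance are the ones the abstract suggests deserve extra rate. This only illustrates the data structure, not the authors' encoder or cost functions.

```python
# Toy pixel-dependency tree for an IPPP GOP: each pixel in frame t points to
# the pixel in frame t-1 it is predicted from (via its motion vector), and we
# count the descendants of every pixel. Synthetic MVs; illustration only.
import numpy as np

H, W, n_frames = 16, 16, 4
rng = np.random.default_rng(0)
# one integer motion vector (dy, dx) per pixel per predicted frame
mv = rng.integers(-1, 2, size=(n_frames - 1, H, W, 2))

# parent[t, y, x] = flat index of the ancestor pixel in frame t-1
parent = np.full((n_frames, H, W), -1, dtype=np.int64)
for t in range(1, n_frames):
    ys, xs = np.mgrid[0:H, 0:W]
    py = np.clip(ys + mv[t - 1, ..., 0], 0, H - 1)
    px = np.clip(xs + mv[t - 1, ..., 1], 0, W - 1)
    parent[t] = (t - 1) * H * W + py * W + px

# propagate descendant counts from the last frame back to the I-frame
descendants = np.zeros(n_frames * H * W, dtype=np.int64)
for t in range(n_frames - 1, 0, -1):
    for idx, par in enumerate(parent[t].ravel()):
        descendants[par] += 1 + descendants[t * H * W + idx]

print("largest descendance in the I-frame:", descendants[:H * W].max())
```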

  19. Modern transform design for advanced image/video coding applications

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Topiwala, Pankaj N.

    2008-08-01

    This paper offers an overall review of recent advances in the design of modern transforms for image and video coding applications. Transforms have been an integral part of signal coding applications from the beginning, but emphasis had been on true floating-point transforms for most of that history. Recently, with the proliferation of low-power handheld multimedia devices, a new vision of integer-only transforms that provide high performance yet very low complexity has quickly gained ascendancy. We explore two key design approaches to creating integer transforms, and focus on a systematic, universal method based on decomposition into lifting steps, and use of (dyadic) rational coefficients. This method provides a wealth of solutions, many of which are already in use in leading media codecs today, such as H.264, HD Photo/JPEG XR, and scalable audio. We give early indications in this paper, with a fuller treatment elsewhere.
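
    As a minimal example of the lifting-step construction with dyadic rational coefficients discussed above (not a transform from any particular standard), the integer Haar/S-transform below maps integers to integers and is exactly invertible, because each lifting step only adds a rounded function of data that the inverse still has available:

```python
# Minimal lifting-step example: integer Haar (S-) transform on pairs.
# Each step adds a rounded, dyadic-rational function of other samples,
# so the transform maps integers to integers and is exactly invertible.
def forward(a, b):
    d = b - a              # lifting step 1: predict b from a
    s = a + (d >> 1)       # lifting step 2: update (dyadic coefficient 1/2)
    return s, d

def inverse(s, d):
    a = s - (d >> 1)       # undo the update step
    b = d + a              # undo the predict step
    return a, b

for a, b in [(7, 10), (-3, 5), (255, 0)]:
    s, d = forward(a, b)
    assert (a, b) == inverse(s, d)
    print(f"({a},{b}) -> sum/diff ({s},{d})")
```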

  20. 2D and 3D stereoscopic videos used as pre-anatomy lab tools improve students' examination performance in a veterinary gross anatomy course.

    PubMed

    Al-Khalili, Sereen M; Coppoc, Gordon L

    2014-01-01

    The hypothesis for the research described in this article was that viewing an interactive two-dimensional (2D) or three-dimensional (3D) stereoscopic pre-laboratory video would improve efficiency and learning in the laboratory. A first-year DVM class was divided into 21 dissection teams of four students each. Primary variables were method of preparation (2D, 3D, or laboratory manual) and dissection region (thorax, abdomen, or pelvis). Teams were randomly assigned to a group (A, B, or C) in a crossover design experiment so that all students experienced each of the modes of preparation, but with different regions of the canine anatomy. All students were instructed to study normal course materials and the laboratory manual, the Guide, before coming to the laboratory session and to use them during the actual dissection as usual. Video groups were given a DVD with an interactive 10-12 minute video to view for the first 30 minutes of the laboratory session, while non-video groups were instructed to review the Guide. All groups were allowed 45 minutes to dissect the assigned section and find a list of assigned structures, after which all groups took a post-dissection quiz and attitudinal survey. The 2D groups performed better than the Guide groups (p=.028) on the post-dissection quiz, despite the fact that only a minority of the 2D-group students studied the Guide as instructed. There was no significant difference (p>.05) between 2D and 3D groups on the post-dissection quiz. Students preferred videos over the Guide. PMID:24418924

  2. A Fast Parallel Simulation Code for Interaction between Proto-Planetary Disk and Embedded Proto-Planets: Implementation for 3D Code

    SciTech Connect

    Li, Shengtai; Li, Hui

    2012-06-14

    We develop a 3D simulation code for the interaction between a proto-planetary disk and embedded proto-planets. The proto-planetary disk is treated as a three-dimensional (3D), self-gravitating gas whose motion is described by the locally isothermal Navier-Stokes equations in spherical coordinates centered on the star. The differential equations for the disk are similar to those given in Kley et al. (2009), with a different gravitational potential that is defined in Nelson et al. (2000). The equations are solved by a directionally split Godunov method for the inviscid Euler equations plus an operator-split method for the viscous source terms. We use a sub-cycling technique for the azimuthal sweep to alleviate the time step restriction. We also extend the FARGO scheme of Masset (2000), as modified in Li et al. (2001), to our 3D code to accelerate the transport in the azimuthal direction. Furthermore, we have implemented a reduced 2D (r, theta) and a fully 3D self-gravity solver on our uniform disk grid, which extends our 2D method (Li, Buoni, & Li 2008) to 3D. This solver uses a mode cut-off strategy and combines FFT in the azimuthal direction with direct summation in the radial and meridional directions. An initial axisymmetric equilibrium disk is generated via iteration between the disk density profile and the 2D disk self-gravity. We do not need any softening in the disk self-gravity calculation as we have used a shifted grid method (Li et al. 2008) to calculate the potential. The motion of the planet is limited to the mid-plane and the equations are the same as given in D'Angelo et al. (2005), which we adapted to polar coordinates with a fourth-order Runge-Kutta solver. The disk gravitational force on the planet is assumed to evolve linearly with time between two hydrodynamics time steps. The planetary potential acting on the disk is calculated accurately with a small softening given by a cubic-spline form (Kley et al. 2009). Since the torque is extremely sensitive to

  3. Dynamic 3D shape of the plantar surface of the foot using coded structured light: a technical report

    PubMed Central

    2014-01-01

    Background: The foot provides a crucial contribution to the balance and stability of the musculoskeletal system, and accurate foot measurements are important in applications such as designing custom insoles/footwear. With better understanding of the dynamic behavior of the foot, dynamic foot reconstruction techniques are surfacing as useful ways to properly measure the shape of the foot. This paper presents a novel design and implementation of a structured-light prototype system providing dense three-dimensional (3D) measurements of the foot in motion. The input to the system is a video sequence of a foot during a single step; the output is a 3D reconstruction of the plantar surface of the foot for each frame of the input. Methods: Engineering and clinical tests were carried out to test the accuracy and repeatability of the system. Accuracy experiments involved imaging a planar surface from different orientations and elevations and measuring the fitting errors of the data to a plane. Repeatability experiments were done using reconstructions from 27 different subjects, where for each one both right and left feet were reconstructed in static and dynamic conditions over two different days. Results: The static accuracy of the system was found to be 0.3 mm with planar test objects. In tests with real feet, the system proved repeatable, with reconstruction differences between trials one week apart averaging 2.4 mm (static case) and 2.8 mm (dynamic case). Conclusion: The results obtained in the experiments show positive accuracy and repeatability results when compared to current literature. The design also proves to be superior to the systems available in the literature in several respects. Further studies need to be done to quantify the reliability of the system in clinical environments. PMID:24456711
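
    The planar-accuracy test described above amounts to fitting a plane to the reconstructed points and reporting the residuals. A generic least-squares plane fit of that kind (not the authors' code; the synthetic noise level below is just an example) can be written as:

```python
# Generic least-squares plane fit via SVD, as used in planar-target accuracy
# tests: report the RMS distance of 3D points from the best-fit plane.
import numpy as np

def plane_fit_rms(points):
    """points: (N, 3) array. Returns (centroid, unit normal, RMS residual)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                          # direction of smallest variance
    residuals = (points - centroid) @ normal
    return centroid, normal, np.sqrt(np.mean(residuals ** 2))

# synthetic test: a tilted plane with 0.3 mm Gaussian measurement noise
rng = np.random.default_rng(1)
xy = rng.uniform(-50, 50, size=(2000, 2))            # mm
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.3, 2000)
pts = np.column_stack([xy, z])
_, _, rms = plane_fit_rms(pts)
print(f"RMS plane-fit residual: {rms:.2f} mm")
```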

  4. Particle entry through "Sash" groove simulated by Global 3D Electromagnetic Particle code with duskward IMF By

    NASA Astrophysics Data System (ADS)

    Yan, X.; Cai, D.; Nishikawa, K.; Lembege, B.

    2004-12-01

    We have spent several years parallelizing the global 3D HPF electromagnetic particle model (EMPM) and have reported simulation results that reveal the essential physics involved in the interaction of the solar wind with the Earth's magnetosphere using this EMPM (Nishikawa et al., 1995; Nishikawa, 1997, 1998a, b, 2001, 2002) on our PC cluster and supercomputer (D.S. Cai et al., 2001, 2003). Sash patterns and related phenomena have been observed and reported in satellite observations (Fujumoto et al. 1997; Maynard, 2001), and have motivated 3D MHD simulations (White et al., 1998). We also investigated them with our global 3D parallelized HPF EMPM with dawnward IMF By (K.-I. Nishikawa, 1998), and recently a new simulation with duskward IMF By was accomplished on the new VPP5000 supercomputer. In the new simulations performed on the VPP5000 supercomputer of Tsukuba University, we used a larger domain size, 305×205×205, a smaller grid size (Δ) of 0.5 R_E (the radius of the Earth), and a larger total particle number, 220,000,000 (about 8 pairs per cell). At first, we run this code until the so-called quasi-stationary state is reached. After the quasi-stationary state was established, we applied a northward IMF (Bz = 0.2) and then waited until the IMF arrived at the magnetopause. After the arrival of the IMF, we changed the IMF from northward to duskward (IMF By = -0.2). The results revealed that a groove structure forms at the dayside magnetopause, which causes particle entry into the inner magnetosphere, and that a cross structure or S-structure forms in the near magnetotail. Moreover, in contrast with MHD simulations, the kinetic characteristics of this event are also analyzed self-consistently with this simulation. The new simulation provides new and more detailed insights into the observed sash event.

  5. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased computational complexity of ME. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, including prediction of the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are dramatically reduced compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised. PMID:24672313

  6. An adaptive motion estimation scheme for video coding.

    PubMed

    Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased computational complexity of ME. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, including prediction of the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are dramatically reduced compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised.
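
    Both records above describe restricting the search to an adaptively chosen subregion around predicted motion vectors. A generic, heavily simplified block-matching sketch of that idea, using a predicted position as the search centre and a small square pattern, is shown below; it is not the UMHexagonS algorithm or the proposed scheme, and all sizes are example values.

```python
# Simplified block matching around a predicted position: evaluate SAD on a
# small pattern centred on the prediction instead of performing a full search.
# Illustrative only; not UMHexagonS or the scheme proposed in the paper.
import numpy as np

def sad(block, ref, y, x):
    h, w = block.shape
    return int(np.abs(block - ref[y:y + h, x:x + w]).sum())

def search(block, ref, pred, radius=2):
    """Return (SAD, position) of the best match within `radius` of `pred`."""
    H, W = ref.shape
    h, w = block.shape
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = pred[0] + dy, pred[1] + dx
            if 0 <= y <= H - h and 0 <= x <= W - w:
                cost = sad(block, ref, y, x)
                if best is None or cost < best[0]:
                    best = (cost, (y, x))
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.int32)
block = ref[10:26, 12:28].copy()            # true position (10, 12)
print(search(block, ref, pred=(9, 13)))     # prediction one pixel off
```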

  7. New aspects of whistler waves driven by an electron beam studied by a 3-D electromagnetic code

    NASA Technical Reports Server (NTRS)

    Nishikawa, Ken-Ichi; Buneman, Oscar; Neubert, Torsten

    1994-01-01

    We have restudied electron beam driven whistler waves with a 3-D electromagnetic particle code. The simulation results show electromagnetic whistler wave emissions and electrostatic beam modes like those observed in the Spacelab 2 electron beam experiment. It has been suggested in the past that the spatial bunching of beam electrons associated with the beam mode may directly generate whistler waves. However, the simulation results indicate several inconsistencies with this picture: (1) whistler waves continue to be generated even after the beam mode space charge modulation looses its coherence, (2) the parallel (to the background magnetic field) wavelength of the whistler wave is longer than that of the beam instability, and (3) the parallel phase velocity of the whistler wave is smaller than that of the beam mode. The complex structure of the whistler waves in the vicinity of the beam suggest that the transverse motion (gyration) of the beam and background electrons is also involved in the generation of whistler waves.

  8. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or other methods for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  9. A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.

    2013-12-01

    The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI) in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation, including thermal infrared emission, using the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated from a randomized optical thickness distribution in each layer and regularly distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. As for method 2 (the numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012) with horizontal (vertical) grid spacing of 100m (20m) and 300m (20m) in a domain of 30km (x), 30km (y), 1.5km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of the 100m horizontal resolution, the regionally averaged cloud optical thickness (COT) and the standard deviation of COT were 3.0 and 4.3 for the pristine case and 8.5 and 7.4 for the polluted case, respectively. In the MIDPM method, we first construct a library of pairs of observed vertical profiles from active sensors and collocated imager products at the nadir footprint, i.e. spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud top temperature (Tc). We then select a

  10. Improved video coding efficiency exploiting tree-based pixelwise coding dependencies

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; Ortega, Antonio

    2010-01-01

    In a conventional hybrid video coding scheme, the choice of encoding parameters (motion vectors, quantization parameters, etc.) is carried out by optimizing the output distortion frame by frame for a given rate budget. While it is well known that motion estimation naturally induces a chain of dependencies among pixels, this is usually not explicitly exploited in the coding process in order to improve overall coding efficiency. Specifically, when considering a group of pictures with an IPPP... structure, each pixel of the first frame can be thought of as the root of a tree whose children are the pixels of the subsequent frames predicted by it. In this work, we demonstrate the advantages of such a representation by showing that, in some situations, the best motion vector is not the one that minimizes the energy of the prediction residual, but the one that produces a better tree structure, e.g., one that can be globally more favorable from a rate-distortion perspective. In this new structure, pixels with a larger descendance are allocated extra rate to produce higher quality predictors. As a proof of concept, we verify this assertion by assigning the quantization parameter in a video sequence in such a way that pixels with a larger number of descendants are coded with a higher quality. In this way we are able to improve RD performance by nearly 1 dB. Our preliminary results suggest that a deeper understanding of the temporal dependencies can potentially lead to substantial gains in coding performance.
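
    A minimal sketch of the descendant-counting idea follows; it works at block rather than pixel granularity, assumes a single reference per block (IPPP... structure), and uses a made-up QP-offset rule, so it illustrates the tree representation rather than reproducing the authors' rate-distortion optimization:

        from collections import defaultdict

        def descendant_counts(motion_refs):
            # motion_refs[t - 1][b] = index of the block in frame t-1 that
            # predicts block b of frame t (one reference per block).
            counts = defaultdict(int)
            for t in range(1, len(motion_refs) + 1):
                for b in range(len(motion_refs[t - 1])):
                    ref = b
                    for s in range(t, 0, -1):      # walk back to the root in frame 0
                        ref = motion_refs[s - 1][ref]
                    counts[ref] += 1
            return counts

        def qp_per_root(counts, base_qp=32, max_drop=4):
            # Roots with more descendants get a lower QP (higher-quality predictor).
            biggest = max(counts.values())
            return {b: base_qp - round(max_drop * c / biggest) for b, c in counts.items()}

        refs = [[0, 0, 2, 3],    # frame 1 -> frame 0
                [0, 1, 2, 2]]    # frame 2 -> frame 1
        print(qp_per_root(descendant_counts(refs)))   # block 0 of frame 0 gets the lowest QP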

  11. Rn3D: A finite element code for simulating gas flow and radon transport in variably saturated, nonisothermal porous media. User's manual, Version 1.0

    SciTech Connect

    Holford, D.J.

    1994-01-01

    This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one-, two- or three-dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, as well as a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water.

  12. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Lettry, J.; Minea, T.; Lifschitz, A. F.; Schmitzer, C.; Midttun, O.; Steyaert, D.

    2013-02-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the particle-in-cell Monte Carlo collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between plasma, source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contribution focuses on the modeling of two different extractors (IS01, IS02) of the Linac4 ion sources. The most efficient extraction system is analyzed via numerical parametric studies. The influence of the aperture geometry and the strength of the magnetic filter field on the extracted electron and NI currents is discussed. NI production from volume-based and cesiated-surface-based sources is also compared.

  13. Analyzing Structure and Function of Vascularization in Engineered Bone Tissue by Video-Rate Intravital Microscopy and 3D Image Processing.

    PubMed

    Pang, Yonggang; Tsigkou, Olga; Spencer, Joel A; Lin, Charles P; Neville, Craig; Grottkau, Brian

    2015-10-01

    Vascularization is a key challenge in tissue engineering. Three-dimensional structure and microcirculation are two fundamental parameters for evaluating vascularization. Microscopic techniques with cellular level resolution, fast continuous observation, and robust 3D postimage processing are essential for evaluation, but have not been applied previously because of technical difficulties. In this study, we report novel video-rate confocal microscopy and 3D postimage processing techniques to accomplish this goal. In an immune-deficient mouse model, vascularized bone tissue was successfully engineered using human bone marrow mesenchymal stem cells (hMSCs) and human umbilical vein endothelial cells (HUVECs) in a poly (D,L-lactide-co-glycolide) (PLGA) scaffold. Video-rate (30 FPS) intravital confocal microscopy was applied in vitro and in vivo to visualize the vascular structure in the engineered bone and the microcirculation of the blood cells. Postimage processing was applied to perform 3D image reconstruction, by analyzing microvascular networks and calculating blood cell viscosity. The 3D volume reconstructed images show that the hMSCs served as pericytes stabilizing the microvascular network formed by HUVECs. Using orthogonal imaging reconstruction and transparency adjustment, both the vessel structure and blood cells within the vessel lumen were visualized. Network length, network intersections, and intersection densities were successfully computed using our custom-developed software. Viscosity analysis of the blood cells provided functional evaluation of the microcirculation. These results show that by 8 weeks, the blood vessels in peripheral areas function quite similarly to the host vessels. However, the viscosity drops about fourfold where it is only 0.8 mm away from the host. In summary, we developed novel techniques combining intravital microscopy and 3D image processing to analyze the vascularization in engineered bone. These techniques have broad

  14. Rate distortion analysis for spatially scalable video coding.

    PubMed

    Zhang, Rong; Comer, Mary L

    2010-11-01

    In this paper, we derive the rate distortion lower bounds of spatially scalable video coding techniques. The methods we evaluate are subband and pyramid motion compensation where temporal redundancies in the same spatial layer as well as interlayer spatial redundancies are exploited in the enhancement layer encoding. The rate distortion bounds are derived from rate distortion theory for stationary Gaussian signals where mean square error is used as the distortion criterion. Assuming that the base layer is encoded by a non-scalable video coder, we derive the rate distortion functions for the enhancement layer, which depend on the power spectral density of the input signal, the motion prediction error probability density function and the base layer encoding performance. We show that pyramid and subband methods are expected to outperform independently encoding the enhancement layer using motion-compensated prediction, in terms of rate distortion efficiency, when the base layer is encoded at relatively high quality or when displacement estimation in the enhancement layer is less accurate. PMID:20519155
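
    For orientation, the kind of bound such a derivation starts from is the classical rate-distortion function of a Gaussian source under mean-square error; in its simplest memoryless form, and in the reverse water-filling form for a stationary process with power spectral density S(\omega), it reads (standard textbook results, not the paper's layer-specific expressions):

        R(D) = \frac{1}{2}\log_{2}\frac{\sigma^{2}}{D}, \qquad 0 < D \le \sigma^{2},

        D(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\{\theta,\, S(\omega)\}\,d\omega,
        \qquad
        R(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\max\Bigl\{0,\ \tfrac{1}{2}\log_{2}\frac{S(\omega)}{\theta}\Bigr\}\,d\omega.

    The paper's enhancement-layer bounds build on results of this type, additionally accounting for the motion prediction error distribution and the base-layer encoding performance.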

  15. Fast mode decision algorithm for scalable video coding based on luminance coded block pattern

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Jung; Yoo, Jeong-Ju; Hong, Jin-Woo; Suh, Jae-Won

    2013-01-01

    A fast mode decision algorithm is proposed to reduce the computational complexity of the adaptive inter-layer prediction method, which is a motion estimation algorithm for video compression in scalable video coding (SVC) encoder systems. SVC is standardized as an extension of H.264/AVC to provide multimedia services within variable transport environments and across various terminal systems. SVC supports an adaptive inter mode prediction, which includes not only the temporal prediction modes with varying block sizes but also inter-layer prediction modes based on correlation between the lower layer information and the current layer. To achieve high coding efficiency, a rate distortion optimization technique is employed to select the best coding mode and reference frame for each MB. As a result, the performance gains of SVC come with increased computational complexity. To overcome this problem, we propose a fast mode decision based on the coded block pattern (CBP) of the 16×16 mode and the reference block of the best CBP. The experimental results in SVC with a combined scalability structure show that the proposed algorithm achieves an average speed-up of up to 61.65% in the encoding time with a negligible bit increment and minimal image quality loss. In addition, experimental results in spatial and quality scalability show that the computational complexity is reduced by about 55.32% and 52.69%, respectively.
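
    A minimal sketch of this kind of CBP-driven early termination is shown below; the restricted mode set and the zero-CBP rule are illustrative assumptions rather than the authors' exact decision criteria:

        def fast_mode_decision(cbp_16x16, rd_cost, candidate_modes):
            # cbp_16x16: luminance coded block pattern of the inter 16x16 mode
            #            (0 means no significant luma residual was coded).
            # rd_cost:   callable mapping a mode name to its rate-distortion cost.
            if cbp_16x16 == 0:
                # The 16x16 prediction already represents the block well:
                # skip the finer partitions and inter-layer candidates.
                candidate_modes = [m for m in candidate_modes if m in ("SKIP", "16x16")]
            return min(candidate_modes, key=rd_cost)

        costs = {"SKIP": 120, "16x16": 100, "16x8": 95, "8x16": 96,
                 "8x8": 94, "inter-layer": 98, "intra": 130}
        print(fast_mode_decision(0, costs.get, list(costs)))   # -> '16x16'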

  16. Development and application of a ray-tracing code integrating with 3D equilibrium mapping in LHD ECH experiments

    NASA Astrophysics Data System (ADS)

    Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.

    2015-11-01

    The central electron temperature has successfully reached up to 7.5 keV in large helical device (LHD) plasmas with a central high-ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly-installed 154 GHz gyrotron and also the optimisation of injection geometry in electron cyclotron heating (ECH). The optimisation was carried out by using the ray-tracing code ‘LHDGauss’, which was upgraded to include the rapid post-processing three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data along with the 3D equilibrium mapping data. The equilibrium-mapped electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. Mode purity, or the ratio between the ordinary mode and the extraordinary mode, is obtained by calculating the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear at the peripheral region with a given polarisation are taken into account. Power deposition profiles calculated for each Thomson scattering measurement timing are registered in the LHD database. The adjustment of the injection settings for the desired deposition profile, based on the feedback provided on a shot-by-shot basis, resulted in an effective experimental procedure.

  17. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. Since changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  18. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. Since changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  19. Verification of the NIKE3D structural analysis code by comparison against the analytic solution for a spherical cavity under a far-field uniaxial stress

    SciTech Connect

    Kansa, E.J.

    1989-01-01

    The original scope of this task was to simulate the stresses and displacements of a hard rock tunnel experimental design using a suitable three-dimensional finite element code. NIKE3D was selected as a suitable code for performing these primarily approximate linearly elastic 3D analyses, but it required modifications to include initial stress, shear traction boundary condition and excavation options. During the summer of 1988, such capabilities were installed in a special version of NIKE3D. Subsequently, we verified both the LLNL's commonly used version of NIKE3D and our private modified version against the analytic solution for a spherical cavity in an elastic material deforming under a far-field uniaxial stress. We find the results produced by the unmodified and modified versions of NIKE3D to be in good agreement with the analytic solutions, except near the cavity, where the errors in the stress field are large. As can be expected from a code based on a displacement finite element formulation, the displacements are much more accurate than the stresses calculated from the 8-noded brick elements. To reduce these errors to acceptable levels, the grid must be refined further near the cavity wall. The level of grid refinement required to simulate accurately tunneling problems that do not have spatial symmetry in three dimensions using the current NIKE3D code is likely to exceed the memory capacity of the largest CRAY 1 computers at LLNL. 8 refs., 121 figs.

  20. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  1. Spatio-temporal correlation-based fast coding unit depth decision for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Chengtao; Zhou, Fan; Chen, Yaowu

    2013-10-01

    The exhaustive block partition search process in high efficiency video coding (HEVC) imposes a very high computational complexity on the HEVC test model encoder (HM). A fast coding unit (CU) depth decision algorithm that uses the spatio-temporal correlation of the depth information to accelerate the search process is proposed. The depth of the coding tree unit (CTU) is predicted first by using the depth information of the spatio-temporal neighbor CTUs. Then, the depth information of the adjacent CU is incorporated to skip some specific depths when encoding the sub-CTU. As compared with the original HM encoder, experimental results show that the proposed algorithm can save more than 20% encoding time on average for intra-only, low-delay, low-delay P slices, and random access cases with almost the same rate-distortion performance.
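
    A hedged sketch of the depth-range prediction step follows; the choice of neighbours and the one-level margin are illustrative, not the paper's exact weighting:

        def predicted_depth_range(left, above, above_right, colocated, max_depth=3):
            # Depths (0..max_depth) of already-coded CTUs: spatial neighbours of
            # the current CTU plus the co-located CTU in the previous frame.
            known = [d for d in (left, above, above_right, colocated) if d is not None]
            if not known:
                return 0, max_depth                  # no information: test all depths
            lo = max(0, min(known) - 1)              # allow one level coarser
            hi = min(max_depth, max(known) + 1)      # and one level finer
            return lo, hi

        # only depths inside [lo, hi] are evaluated when splitting the current CTU
        print(predicted_depth_range(left=1, above=2, above_right=None, colocated=1))  # (0, 3)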

  2. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC) developed by joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth map is an extension of HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency with the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlations between depth map and motion activity in prediction mode where variable size CU and DE are needed, and only in these regions variable size CU and DE are enabled. Experimental results show that the proposed algorithm can save about 43% average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237

  3. Low complexity mode decision for 3D-HEVC.

    PubMed

    Zhang, Qiuwen; Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC) developed by joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth map is an extension of HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency with the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlations between depth map and motion activity in prediction mode where variable size CU and DE are needed, and only in these regions variable size CU and DE are enabled. Experimental results show that the proposed algorithm can save about 43% average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237

  4. Comparison of the LLNL ALE3D and AKTS Thermal Safety Computer Codes for Calculating Times to Explosion in ODTX and STEX Thermal Cookoff Experiments

    SciTech Connect

    Wemhoff, A P; Burnham, A K

    2006-04-05

    Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins code were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and the isoconversional kinetic model gives very similar results as the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.

  5. Particle entry through the sash in the magnetopause with a dawnward IMF as simulated by a 3-D EM particle code

    NASA Astrophysics Data System (ADS)

    Cai, D.; Yan, X.; Lembege, B.; Nishikawa, K.

    2003-12-01

    We report new progress in the long-term effort to represent the global interaction of the solar wind with the Earth's magnetosphere using a three-dimensional electromagnetic particle code, the HPF Tristan code, with improved resolution. After a quasi-steady state is established with an unmagnetized solar wind, we gradually switch on a northward interplanetary magnetic field (IMF), which causes magnetic reconnection at the nightside cusps and the magnetosphere to be depolarized. In the case that the northward IMF is switched gradually to dawnward, there is no signature of reconnection in the near-Earth magnetotail as in the case with the southward turning. On the contrary, analysis of magnetic fields at the magnetopause confirms a signature of magnetic reconnection at both the dawnside and duskside. The plasma sheet in the near-Earth magnetotail clearly thins, as in the case of southward turning. Arrival of the dawnward IMF at the magnetopause creates a reconnection groove, which causes particle entry into the deep region of the magnetosphere via field lines that pass near the magnetopause. This deep connection is more fully recognized tailward of Earth. The flank weak-field fan joins onto the plasma sheet and the current sheet to form a geometrical feature called the cross-tail S that structurally integrates the magnetopause and the tail interior. This structure contributes to direct plasma entry from the magnetosheath to the inner magnetosphere and plasma sheet, and the entry process heats the magnetosheath plasma to plasma sheet temperatures. These phenomena have been found in Cluster observations. Further investigation with Cluster observations will provide new insights into unsolved problems such as hot flow anomalies (HFAs), substorms, and the storm-substorm relationship. 3-D movies of the sash structure will be presented at the meeting.

  6. Recent Improvement of Measurement Instrumentation to Supervise Nuclear Operations and to Contribute Input Data to 3D Simulation Code - 13289

    SciTech Connect

    Mahe, Charly; Chabal, Caroline

    2013-07-01

    The CEA has developed many compact characterization tools to follow sensitive operations in a nuclear environment. Usually, these devices are made to carry out radiological inventories, to prepare nuclear interventions or to supervise some special operations. These in situ measurement techniques mainly take place at different stages of clean-up operations and decommissioning projects, but they are also in use to supervise sensitive operations when the nuclear plant is still operating. In addition to this, such tools are often associated with robots to access very highly radioactive areas, and thus can be used in accident situations. Last but not least, the radiological data collected can be entered in 3D calculation codes used to simulate the doses absorbed by workers in real time during operations in a nuclear environment. Faced with these ever-greater needs, nuclear measurement instrumentation always has to involve on-going improvement processes. Firstly, this paper will describe the latest developments and results obtained in both gamma and alpha imaging techniques. The gamma camera has been used by the CEA since the 1990's and several changes have made this device more sensitive, more compact and more competitive for nuclear plant operations. It is used to quickly identify hot spots, locating irradiating sources from 50 keV to 1500 keV. Several examples from a wide field of applications will be presented, together with the very latest developments. The alpha camera is a new camera used to see invisible alpha contamination on several kinds of surfaces. The latest results obtained allow real time supervision of a glove box cleaning operation (for {sup 241}Am contamination). The detection principle as well as the main trials and results obtained will be presented. Secondly, this paper will focus on in situ gamma spectrometry methods developed by the CEA with compact gamma spectrometry probes (CdZnTe, LaBr{sub 3}, NaI, etc.). The radiological data collected is used

  7. The Intercomparison of 3D Radiation Codes (I3RC): Showcasing Mathematical and Computational Physics in a Critical Atmospheric Application

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Cahalan, R. F.

    2001-05-01

    The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in the routine of cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, computational efficiency is the main concern and not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of "cases." However, it is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation) and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering the present authors have organized a systematic outreach towards

  8. Application of the Finite Orbit Width Version of the CQL3D Code to NBI +RF Heating of NSTX Plasma

    NASA Astrophysics Data System (ADS)

    Petrov, Yu. V.; Harvey, R. W.

    2015-11-01

    The CQL3D bounce-averaged Fokker-Planck (FP) code has been upgraded to include Finite-Orbit-Width (FOW) effects. The calculations can be done either with a fast Hybrid-FOW option or with a slower but neoclassically complete full-FOW option. The banana regime neoclassical radial transport appears naturally in the full-FOW version by averaging the local collision coefficients along guiding center orbits, with a proper transformation matrix from local (R, Z) coordinates to the midplane computational coordinates, where the FP equation is solved. In a similar way, the local quasilinear rf diffusion terms give rise to additional radial transport of orbits. The full-FOW version is applied to simulation of ion heating in NSTX plasma. It is demonstrated that it can describe the physics of transport phenomena in plasma with auxiliary heating, in particular, the enhancement of the radial transport of ions by RF heating and the occurrence of the bootstrap current. Because of the bounce-averaging on the FPE, the results are obtained in a relatively short computational time. A typical full-FOW run time is 30 min using 140 MPI cores. Due to an implicit solver, calculations with a large time step (tested up to dt = 0.5 sec) remain stable. Supported by USDOE grants SC0006614, ER54744, and ER44649.

  9. DYNA3D: A nonlinear, explicit, three-dimensional finite element code for solid and structural mechanics, User manual. Revision 1

    SciTech Connect

    Whirley, R.G.; Engelmann, B.E.

    1993-11-01

    This report is the User Manual for the 1993 version of DYNA3D, and also serves as a User Guide. DYNA3D is a nonlinear, explicit, finite element code for analyzing the transient dynamic response of three-dimensional solids and structures. The code is fully vectorized and is available on several computer platforms. DYNA3D includes solid, shell, beam, and truss elements to allow maximum flexibility in modeling physical problems. Many material models are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects, and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding and single surface contact. Rigid materials provide added modeling flexibility. A material model driver with interactive graphics display is incorporated into DYNA3D to permit accurate modeling of complex material response based on experimental data. Along with the DYNA3D Example Problem Manual, this document provides the information necessary to apply DYNA3D to solve a wide range of engineering analysis problems.

  10. 3D-Reconstruction of recent volcanic activity from ROV-video, Charles Darwin Seamounts, Cape Verdes

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, T.; Hansteen, T. H.; Kutterolf, S.; Freundt, A.; Devey, C. W.

    2011-12-01

    As well as providing well-localized samples, Remotely Operated Vehicles (ROVs) produce huge quantities of visual data whose potential for geological data mining has seldom if ever been fully realized. We present a new workflow to derive essential results of field geology such as quantitative stratigraphy and tectonic surveying from ROV-based photo and video material. We demonstrate the procedure on the Charles Darwin Seamounts, a field of small hot spot volcanoes recently identified at a depth of ca. 3500m southwest of the island of Santo Antao in the Cape Verdes. The Charles Darwin Seamounts feature a wide spectrum of volcanic edifices with forms suggestive of scoria cones, lava domes, tuff rings and maar-type depressions, all of comparable dimensions. These forms, coupled with the highly fragmented volcaniclastic samples recovered by dredging, motivated surveying parts of some edifices down to centimeter scale. ROV-based surveys yielded volcaniclastic samples of key structures linked by extensive coverage of stereoscopic photographs and high-resolution video. Based upon the latter, we present our workflow to derive three-dimensional models of outcrops from a single-camera video sequence, allowing quantitative measurements of fault orientation, bedding structure, grain size distribution and photo mosaicking within a geo-referenced framework. With this information we can identify episodes of repetitive eruptive activity at individual volcanic centers and see changes in eruptive style over time, which, despite their proximity to each other, is highly variable.

  11. The MiRa/THESIS3D-code package for resonator design and modeling of millimeter-wave material processing

    SciTech Connect

    Feher, L.; Link, G.; Thumm, M.

    1996-12-31

    Precise knowledge of millimeter-wave oven properties and reliable design studies have to be obtained from 3D numerical field calculations. A simulation code solving the electromagnetic field problem based on a covariant raytracing scheme (MiRa-Code) has been developed. Time-dependent electromagnetic field-material interactions during sintering, as well as the heat transfer processes within the samples, have been investigated. A numerical code solving the nonlinear heat transfer problem due to millimeter-wave heating has been developed (THESIS3D-Code). For a self-consistent sintering simulation, a zip interface between the two codes, exchanging the time-advancing fields and material parameters, is implemented. Recent results and progress on calculations of field distributions in large overmoded resonators, as well as results on modeling the heating of materials with millimeter waves, are presented in this paper. The calculations are compared to experiments.

  12. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.

  13. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor, and the video is encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated and found to be more efficient than software coding. Running tests showed that hardware video coding can markedly reduce system cost and yield smoother video display. The system can be widely applied to security supervision [1].

  14. Test Problems for Reactive Flow HE Model in the ALE3D Code and Limited Sensitivity Study

    SciTech Connect

    Gerassimenko, M.

    2000-03-01

    We document quick-running test problems for a reactive flow model of HE initiation incorporated into ALE3D. A quarter-percent change in projectile velocity changes the outcome from detonation to an HE burn that dies down. We study the sensitivity of calculated HE behavior to several parameters of practical interest when modeling HE initiation with ALE3D.

  15. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between face standard landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position the real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation the extracted foreground is placed in front of the background image that was captured at the initial position. So the constructed full view of the initial position combined with the view of the secondary (current) position, form the complete binocular pairs during real-time video shooting. The subjective evaluation results present a competent depth perception quality through the proposed system.

  16. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
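
    The context-tree weighting technique referenced above ultimately rests on count-based probability estimates maintained per context; as a hedged, much-simplified illustration (a single binary context, no tree, no weighting, and no claim that this matches the authors' integration into CABAC), the Krichevsky-Trofimov estimator looks like this:

        class KTEstimator:
            # Krichevsky-Trofimov probability estimate for one binary context,
            # updated after every coded bin.
            def __init__(self):
                self.counts = [0, 0]          # how many 0s and 1s seen so far

            def prob_of(self, bit):
                return (self.counts[bit] + 0.5) / (sum(self.counts) + 1.0)

            def update(self, bit):
                self.counts[bit] += 1

        est = KTEstimator()
        for b in (1, 1, 0, 1, 1, 1):
            est.update(b)
        print(round(est.prob_of(1), 3))   # 0.786, versus 0.5 before any observations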

  17. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents.

    PubMed

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C M E; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11-15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the "at-risk" cut-off on the Spence Children Anxiety Survey were eligible. Adolescents' anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents' anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants' expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues. PMID:26816292

  18. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents.

    PubMed

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C M E; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11-15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the "at-risk" cut-off on the Spence Children Anxiety Survey were eligible. Adolescents' anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents' anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants' expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues.

  19. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents

    PubMed Central

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C. M. E.; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11–15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the “at-risk” cut-off on the Spence Children Anxiety Survey were eligible. Adolescents’ anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents’ anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants’ expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues. PMID:26816292

  20. A multiblock/multizone code (PAB 3D-v2) for the three-dimensional Navier-Stokes equations: Preliminary applications

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    1990-01-01

    The development and applications of multiblock/multizone and adaptive grid methodologies for solving the three-dimensional simplified Navier-Stokes equations are described. Adaptive grid and multiblock/multizone approaches are introduced and applied to external and internal flow problems. These new implementations increase the capabilities and flexibility of the PAB3D code in solving flow problems associated with complex geometry.

  1. Numerical model of water flow and solute accumulation in vertisols using HYDRUS 2D/3D code

    NASA Astrophysics Data System (ADS)

    Weiss, Tomáš; Dahan, Ofer; Turkeltub, Tuvia

    2015-04-01

    boundary to the wall of the crack (so that the solute can accumulate due to evaporation on the crack block wall, and infiltrating fresh water can push the solute further down); in order to do so, the HYDRUS 2D/3D code had to be modified by its developers. Unconventionally, the main fitting parameters were the parameters a and n of the soil water retention curve and the saturated hydraulic conductivity. The amount of infiltrated water (within a reasonable range), the infiltration function in the crack, and the actual evaporation from the crack were also used as secondary fitting parameters. The model supports the previous findings that a significant amount (~90%) of water from rain events must infiltrate through the crack. It was also noted that infiltration from the crack has to increase with depth and that the highest infiltration rate should occur somewhere between 1-3 m. This paper suggests a new way to model vertisols in semi-arid regions. It also supports previous findings about vertisols, in particular the utmost importance of soil cracks as preferential pathways for water and contaminants and as deep evaporators.
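
    The "parameters a and n of the soil water retention curve" mentioned above most likely refer to the van Genuchten parameters used by HYDRUS; for reference, the standard van Genuchten-Mualem relations (textbook form, not a result of this study) are

        \theta(h) = \theta_r + \frac{\theta_s - \theta_r}{\bigl[\,1 + |\alpha h|^{n}\bigr]^{m}} \ \ (h < 0),
        \qquad \theta(h) = \theta_s \ \ (h \ge 0), \qquad m = 1 - 1/n,

        K(\theta) = K_s\, S_e^{1/2}\bigl[\,1 - \bigl(1 - S_e^{1/m}\bigr)^{m}\bigr]^{2},
        \qquad S_e = \frac{\theta - \theta_r}{\theta_s - \theta_r},

    where \theta_r and \theta_s are the residual and saturated water contents, \alpha (the "a" above) and n are the fitted shape parameters, and K_s is the saturated hydraulic conductivity that was also fitted.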

  2. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST based transform scheme that applies either the conventional DCT or type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that employs transforms along the horizontal and vertical direction separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes. The optimal choice of using DCT or DST is based on intra-prediction modes and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST, and simply always use DST for the 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results conducted for the DCT/DST algorithm are shown in the reference software for the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT based scheme for intra prediction in video sequences.
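
    For reference, the type-7 DST selected for certain intra residuals can be written down directly; the sketch below builds the floating-point N-point DST-VII matrix and checks its orthonormality (this is the analytic form, not the integer approximation adopted in HEVC):

        import numpy as np

        def dst7_matrix(N):
            # Rows are DST-VII basis functions of increasing frequency.
            k = np.arange(N).reshape(-1, 1)    # basis index
            n = np.arange(N).reshape(1, -1)    # sample index
            return np.sqrt(4.0 / (2 * N + 1)) * np.sin(
                np.pi * (2 * n + 1) * (k + 1) / (2 * N + 1))

        S = dst7_matrix(4)
        print(np.allclose(S @ S.T, np.eye(4)))      # True: orthonormal
        residual = np.array([1.0, 3.0, 6.0, 10.0])  # ramp-like intra residual
        print(S @ residual)                          # energy piles into few coefficients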

  3. Chroma sampling and modulation techniques in high dynamic range video coding

    NASA Astrophysics Data System (ADS)

    Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj

    2015-09-01

    High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or the functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal, and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding, and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.

  4. NIKE3D: an implicit, finite-deformation, finite element code for analyzing the static and dynamic response of three-dimensional solids

    SciTech Connect

    Hallquist, J.O.

    1981-01-01

    A user's manual is provided for NIKE3D, a fully implicit three-dimensional finite element code for analyzing the large deformation static and dynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node constant pressure solid elements. Bandwidth minimization is optional. Post-processors for NIKE3D include GRAPE for plotting deformed shapes and stress contours and DYNAP for plotting time histories.

  5. Joint source coding, transport processing, and error concealment for H.323-based packet video

    NASA Astrophysics Data System (ADS)

    Zhu, Qin-Fan; Kerofsky, Louis

    1998-12-01

    In this paper, we investigate how to adapt different parameters in H.263 source coding, transport processing and error concealment to optimize end-to-end video quality at different bitrates and packet loss rates for H.323-based packet video. First different intra coding patterns are compared and we show that the contiguous rectangle or square block pattern offers the best performance in terms of video quality in the presence of packet loss. Second, the optimal intra coding frequency is found for different bitrates and packet loss rates. The optimal number of GOB headers to be inserted in the source coding is then determined. The effect of transport processing strategies such as packetization and retransmission is also examined. For packetization, the impact of packet size and the effect of macroblock segmentation to picture quality are investigated. Finally, we show that the dejitter buffering delay can be used to the advantage for packet loss recovery with video retransmission without incurring any extra delay.

  6. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and the physical layers with automatic repeat request and rate compatible punctured convolutional code over additive white gaussian noise channel as well as channel times allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied for the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, end-to-end distortion of utility functions can be significantly reduced with the optimal slices of a picture especially at low signal-to-noise ratio.

  7. Texture video-assisted motion vector predictor for depth map coding

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoxian; Chang, Yilin; Li, Zhibin; Huo, Junyan

    2011-08-01

    A texture video-assisted motion vector predictor for depth map coding is proposed in this letter. Based on an analysis of the motion similarity between texture videos and their corresponding depth maps, the proposed approach uses the motion vectors of texture videos and the median predictor jointly to determine the optimal predicted motion vector for depth map coding by employing a rate-distortion (R-D) criterion. Experimental results demonstrate that, compared with the median predictor used in H.264/AVC, the proposed method achieves maximum and average bit rate savings of 4.89% and 3.68%, respectively, while preserving the quality of synthesized virtual views.
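
    As a hedged illustration of the kind of R-D selection described (not the authors' exact model), the sketch below chooses between a texture-derived predictor and a median predictor by minimizing a Lagrangian cost; the rate proxy, distortion term, and lambda are placeholders.

        # Pick the MV predictor with the lower R-D cost for one depth block.
        def mv_bits(mv, pred):
            """Very rough rate proxy: bits to code the MV difference."""
            dx, dy = mv[0] - pred[0], mv[1] - pred[1]
            return sum(2 * abs(v).bit_length() + 1 for v in (dx, dy))

        def rd_cost(sad, mv, pred, lam=4.0):
            return sad + lam * mv_bits(mv, pred)

        def choose_predictor(depth_mv, sad, texture_mv_pred, median_pred, lam=4.0):
            """Return whichever candidate predictor gives the lower R-D cost."""
            cands = {"texture": texture_mv_pred, "median": median_pred}
            return min(cands, key=lambda k: rd_cost(sad, depth_mv, cands[k], lam))

        print(choose_predictor(depth_mv=(5, -2), sad=120,
                               texture_mv_pred=(4, -2), median_pred=(0, 0)))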

  8. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photodiode electronically, without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos.
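
    To make the sampling idea concrete, the sketch below simulates pixel-wise coded exposure in NumPy: each pixel integrates the scene over its own randomly placed exposure window within a group of frames, so one coded readout summarizes many frames. Window length and frame counts are arbitrary illustrative choices, not the sensor's actual parameters.

        # Simulate one coded image produced from T frames of a toy scene.
        import numpy as np

        rng = np.random.default_rng(0)
        T, H, W = 20, 90, 127          # frames per coded image, sensor size
        video = rng.random((T, H, W))  # toy high-frame-rate scene

        exposure_len = 5
        start = rng.integers(0, T - exposure_len + 1, size=(H, W))
        t = np.arange(T)[:, None, None]
        mask = (t >= start) & (t < start + exposure_len)   # per-pixel exposure window
        coded_image = (video * mask).sum(axis=0)           # single readout per T frames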

  9. Sliding-window raptor codes for efficient scalable wireless video broadcasting with unequal loss protection.

    PubMed

    Cataldi, Pasquale; Grangetto, Marco; Tillo, Tammam; Magli, Enrico; Olmo, Gabriella

    2010-06-01

    Digital fountain codes have emerged as a low-complexity alternative to Reed-Solomon codes for erasure correction. These codes are especially relevant in the field of wireless video, where low encoding and decoding complexity is crucial. In this paper, we introduce a new class of digital fountain codes based on a sliding-window approach applied to Raptor codes. These codes have several properties useful for video applications and provide better performance than classical digital fountains. We then propose an application of sliding-window Raptor codes to wireless video broadcasting using scalable video coding. The rates of the base and enhancement layers, as well as the number of coded packets generated for each layer, are optimized so as to yield the best possible expected quality at the receiver side, while providing unequal loss protection to the different layers according to their importance. The proposed system has been validated in a UMTS broadcast scenario, showing that it improves the end-to-end quality and is robust to fluctuations in the packet loss rate.

  10. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the input to the visual perception analysis. First, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Second, the spatial saliency region is detected based on the optimal prediction mode distributions in I-frames and P-frames. The spatiotemporal saliency detection results are then combined to define the video region of interest (VROI). The simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis, performs better in saliency detection for videos, and realizes fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit rates or combined with other algorithms in fast video coding. PMID:24489495

  11. Impact Analysis of Baseband Quantizer on Coding Efficiency for HDR Video

    NASA Astrophysics Data System (ADS)

    Wong, Chau-Wai; Su, Guan-Ming; Wu, Min

    2016-10-01

    Digitally acquired high dynamic range (HDR) video baseband signals can take 10 to 12 bits per color channel. It is economically important to be able to reuse legacy 8- or 10-bit video codecs to efficiently compress HDR video. A linear or nonlinear mapping of the intensity can be applied to the baseband signal to reduce the dynamic range before the signal is sent to the codec, and we refer to this range reduction step as baseband quantization. We show analytically, and verify using test sequences, that the use of the baseband quantizer lowers the coding efficiency. Experiments show that as the baseband quantizer is strengthened by 1.6 bits, the drop of PSNR at a high bit rate is up to 1.60 dB. Our result suggests that in order to achieve high coding efficiency, information reduction of videos in terms of quantization error should be introduced in the video codec instead of on the baseband signal.
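
    As a small, hedged illustration of the range-reduction step discussed (the simplest linear case, not the paper's full analysis), the sketch below maps a 12-bit baseband signal to 10 bits and measures the PSNR ceiling that this baseband quantization alone imposes, before any codec is involved.

        # Linear 12-bit -> 10-bit baseband quantization and its reconstruction error.
        import numpy as np

        rng = np.random.default_rng(1)
        x12 = rng.integers(0, 4096, size=100000)              # 12-bit baseband samples

        x10 = np.round(x12 * 1023 / 4095).astype(np.int64)    # baseband quantization
        x12_hat = np.round(x10 * 4095 / 1023)                 # inverse mapping

        mse = np.mean((x12 - x12_hat) ** 2)
        psnr = 10 * np.log10(4095.0 ** 2 / mse)
        print(f"PSNR ceiling from baseband quantization alone: {psnr:.2f} dB")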

  12. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    SciTech Connect

    S. Ethier; Z. Lin

    2003-09-15

    Several years of optimization on super-scalar architectures have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared to the same test performed on the IBM SP Power 3 and Power 4 machines.

  13. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  14. A multi-grid code for 3-D transonic potential flow about axisymmetric inlets at angle of attack

    NASA Technical Reports Server (NTRS)

    Mccarthy, D. R.; Reyhner, T. A.

    1980-01-01

    In the present work, an existing transonic potential code is adapted to utilize the Multiple Level Adaptive technique proposed by A. Brandt. It is shown that order of magnitude improvements in speed and greatly improved accuracy over the unmodified code are achieved. Consideration is given to the difficulties of multi-grid programming, and possible future applications are surveyed.

  15. Real-time transmission of digital video using variable-length coding

    NASA Astrophysics Data System (ADS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-03-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
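
    To make the coding idea concrete, the sketch below builds a Huffman code for differentially coded pixel values with the standard heap-based construction; the toy symbol probabilities are made up, but they show how the frequent near-zero differences described above receive the shortest codewords.

        # Standard Huffman construction for a small, skewed symbol distribution.
        import heapq
        from itertools import count

        def huffman_code(probs):
            """probs: dict symbol -> probability. Returns dict symbol -> bitstring."""
            tiebreak = count()
            heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                p0, _, c0 = heapq.heappop(heap)
                p1, _, c1 = heapq.heappop(heap)
                merged = {s: "0" + b for s, b in c0.items()}
                merged.update({s: "1" + b for s, b in c1.items()})
                heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
            return heap[0][2]

        # Differences near zero dominate in predictable video content.
        probs = {0: 0.55, 1: 0.15, -1: 0.15, 2: 0.06, -2: 0.06, 3: 0.03}
        print(huffman_code(probs))   # short codewords for the likely symbols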

  16. VTLOGANL: A Computer Program for Coding and Analyzing Data Gathered on Video Tape.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.; And Others

    To code and analyze research data on videotape, a methodology is needed that allows the researcher to code directly and then analyze the degree of intensity of observed events. The establishment of such a methodology is the next logical step in the development of the use of video-recorded data in research. The Technological…

  17. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.

  18. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux Corrected Transport and the code is based on the existing code of Zalesak and Spicer. The flow considered is a shear flow with an incoming flow that perturbs this base flow. Several test cases corresponding to pressure balanced magnetic structures with velocity shear flow and various inflows including Alfven waves are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry. Future versions of solwnd will consider a spherical geometry. Some discussion of this issue is presented.

  19. Layered Wyner-Ziv video coding for transmission over unreliable channels

    NASA Astrophysics Data System (ADS)

    Xu, Qian; Stankovic, Vladimir; Xiong, Zixiang

    2005-07-01

    Based on recent work on Wyner-Ziv coding (or lossy source coding with decoder side information), we consider the case of a noisy channel and address distributed joint source-channel coding, targeting the important application of scalable video transmission over wireless networks. In Wyner-Ziv coding, after quantization, Slepian-Wolf coding (SWC) is used to reduce the rate. SWC is traditionally realized by sending syndromes of a linear channel code. Since syndromes of the channel code can only compress but cannot protect, additional error protection is needed for transmission over noisy channels. However, instead of using one channel code for SWC and one for error protection, our idea is to use a single channel code to achieve both compression and protection. We replace the traditional syndrome-based SWC scheme by a parity-based one, where only parity bits of the Slepian-Wolf channel code are sent. If the amount of transmitted parity bits increases above the Slepian-Wolf limit, the added redundancy is exploited to cope with the noise in the transmission channel. Using IRA codes for practical parity-based SWC, we design a novel layered Wyner-Ziv video coder which is robust to channel failures and thus very suitable for wireless communications. Our simulation results show great advantages of the proposed solution based on joint source-channel coding compared to the traditional approach where source and channel coding are performed separately.

  20. The H.264/AVC advanced video coding standard: overview and introduction to the fidelity range extensions

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay

    2004-11-01

    H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.

  1. Analysis of the beam halo in negative ion sources by using 3D3V PIC code.

    PubMed

    Miyamoto, K; Nishioka, S; Goto, I; Hatayama, A; Hanada, M; Kojima, A; Hiratsuka, J

    2016-02-01

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.

  2. Analysis of the beam halo in negative ion sources by using 3D3V PIC code.

    PubMed

    Miyamoto, K; Nishioka, S; Goto, I; Hatayama, A; Hanada, M; Kojima, A; Hiratsuka, J

    2016-02-01

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result. PMID:26932006

  3. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, where the shape is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  4. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
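
    For reference, the defining relation the report describes is the standard geometric view factor between two diffuse surfaces; in its usual textbook form it reads:

        F_{1 \to 2} = \frac{1}{A_1} \int_{A_1} \int_{A_2}
                      \frac{\cos\theta_1 \, \cos\theta_2}{\pi S^2} \, dA_2 \, dA_1

    where dA_1 and dA_2 are differential areas on the two surfaces, S is the distance between them, and theta_1, theta_2 are the angles between the connecting line and the respective surface normals. FACET evaluates this integral numerically, with the shadowing tests deciding which area pairs actually see each other.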

  5. Development of a 3D FEL code for the simulation of a high-gain harmonic generation experiment.

    SciTech Connect

    Biedron, S. G.

    1999-02-26

    Over the last few years, there has been a growing interest in self-amplified spontaneous emission (SASE) free-electron lasers (FELs) as a means for achieving a fourth-generation light source. In order to correctly and easily simulate the many configurations that have been suggested, such as multi-segmented wigglers and the method of high-gain harmonic generation, we have developed a robust three-dimensional code. The specifics of the code, the comparison to the linear theory as well as future plans will be presented.

  6. Rate quantization modeling for rate control of MPEG video coding and recording

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Liu, Bede

    1995-04-01

    For MPEG video coding and recording applications, it is important to select quantization parameters at the slice and macroblock levels to produce nearly constant-quality images for a given bit budget. A well-designed rate control strategy can improve overall image quality for video transmission over a constant-bit-rate channel and fulfill the editing requirements of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage media using at most the same number of bits. In this paper, we develop a feedback method with a rate-quantization model that can be adapted to changes in picture activity. The model is used for quantization parameter selection at the frame and slice levels. The extra computations needed are modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
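
    As a hedged illustration of model-based rate control in general (not the paper's specific model), the sketch below uses the classical first-order relation R(Q) ~ X/Q and a feedback update of the complexity estimate X from what the encoder actually spent.

        # Simple rate-quantization model with feedback adaptation.
        def select_q(target_bits, x, q_min=2, q_max=31):
            """Pick the quantization parameter predicted to hit the bit budget."""
            q = x / max(target_bits, 1.0)
            return int(min(max(round(q), q_min), q_max))

        def update_model(x, actual_bits, q_used, alpha=0.5):
            """Refresh the complexity estimate X from the encoder's actual spend."""
            return (1 - alpha) * x + alpha * (actual_bits * q_used)

        x = 400_000.0                      # initial complexity estimate (made up)
        for target, spent_scale in [(40_000, 1.1), (40_000, 0.9), (40_000, 1.0)]:
            q = select_q(target, x)
            actual = target * spent_scale  # stand-in for the encoder's real output
            x = update_model(x, actual, q)
            print(f"Q={q:2d}  spent={actual:.0f}  new X={x:.0f}")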

  7. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. Traditional rank-reduction based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data and thus affect subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can help us obtain nearly perfect reconstruction performance even in the case of a low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction based simultaneous denoising and reconstruction algorithm in the form of an open-source Matlab package.
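
    For orientation, the sketch below shows the classical truncated-SVD rank reduction that the paper improves upon, applied to a generic low-rank-plus-noise matrix; the authors' modified TSVD formula and the block Hankel construction used for seismic slices are not reproduced here.

        # Baseline truncated-SVD denoising of a low-rank matrix corrupted by noise.
        import numpy as np

        rng = np.random.default_rng(2)
        rank, m, n = 3, 60, 40
        clean = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
        noisy = clean + 0.5 * rng.standard_normal((m, n))

        U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
        k = rank                                   # assumed number of plane-wave events
        denoised = (U[:, :k] * s[:k]) @ Vt[:k, :]  # keep only the k largest components

        err_before = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
        err_after = np.linalg.norm(denoised - clean) / np.linalg.norm(clean)
        print(f"relative error: {err_before:.3f} -> {err_after:.3f}")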

  8. Assessment of the 3-D Thermal-Hydraulic Nuclear Core Computer Code FLICA-IV on Rod Bundle Experiments

    SciTech Connect

    Bergeron, Andre; Caruge, Daniel; Clement, Philippe

    2001-04-15

    The physical validation of the thermal-hydraulic FLICA-IV nuclear core computer code against hydraulic and two-phase flow experiments, in the case of a pressurized water reactor, is presented. This three-dimensional two-phase flow code is devoted to steady-state and transient thermal-hydraulic analysis of nuclear reactor cores. The four balance equations used by the code and the closure relationships are first presented. Then, the facilities employed for the code validation are described. They use either laser velocimetry techniques, in the case of hydraulic validation, to measure accurately the flow field around rods, or isokinetic sampling, in the case of two-phase flow validation, to measure the qualities and axial mass velocities at the outlet of a rod bundle. Comparisons between experimental and computed values are then presented for the axial flow blockage simulation, inlet assembly flow mixing, axial flow spacer grid disturbance, and an outlet rod bundle map of qualities and axial mass velocities.

  9. Modeling the physical structure of star-forming regions with LIME, a 3D radiative transfer code

    NASA Astrophysics Data System (ADS)

    Quénard, D.; Bottinelli, S.; Caux, E.

    2016-05-01

    The ability to predict line emission is crucial in order to make comparisons with observations. From LTE to full radiative transfer codes, the goal is always to derive as accurately as possible the physical properties of the source. Non-LTE calculations can be very time consuming but are needed in most cases, since many studied regions are far from LTE.

  10. Time-Dependent Distribution Functions in C-Mod Calculated with the CQL3D-Hybrid-FOW, AORSA Full-Wave, and DC Lorentz Codes

    NASA Astrophysics Data System (ADS)

    Harvey, R. W. (Bob); Petrov, Yu. V.; Jaeger, E. F.; Berry, L. A.; Bonoli, P. T.; Bader, A.

    2015-11-01

    A time-dependent simulation of C-Mod pulsed ICRF power is made, calculating minority hydrogen ion distribution functions with the CQL3D-Hybrid-FOW finite-orbit-width Fokker-Planck code. ICRF fields are calculated with the AORSA full wave code, and RF diffusion coefficients are obtained from these fields using the DC Lorentz gyro-orbit code. Prior results with a zero-banana-width simulation using the CQL3D/AORSA/DC time-cycles showed a pronounced enhancement of the H distribution in the perpendicular velocity direction compared to results obtained from Stix's quasilinear theory, in general agreement with experiment. The present study compares the new FOW results, including relevant gyro-radius effects, to determine the importance of these effects on the NPA synthetic diagnostic time-dependence. The new NPA results give increased agreement with experiment, particularly in the ramp-down time after the ICRF pulse. Funded, through subcontract with Massachusetts Institute of Technology, by the USDOE-sponsored SciDAC Center for Simulation of Wave-Plasma Interactions.

  11. Initial Self-Consistent 3D Electron-Cloud Simulations of the LHC Beam with the Code WARP+POSINST

    SciTech Connect

    Vay, J; Furman, M A; Cohen, R H; Friedman, A; Grote, D P

    2005-10-11

    We present initial results for the self-consistent beam-cloud dynamics simulations for a sample LHC beam, using a newly developed set of modeling capability based on a merge [1] of the three-dimensional parallel Particle-In-Cell (PIC) accelerator code WARP [2] and the electron-cloud code POSINST [3]. Although the storage ring model we use as a test bed to contain the beam is much simpler and shorter than the LHC, its lattice elements are realistically modeled, as is the beam and the electron cloud dynamics. The simulated mechanisms for generation and absorption of the electrons at the walls are based on previously validated models available in POSINST [3, 4].

  12. TRAC code assessment using data from SCTF Core-III, a large-scale 2D/3D facility

    SciTech Connect

    Boyack, B.E.; Shire, P.R.; Harmony, S.C.; Rhee, G.

    1988-01-01

    Nine tests from the SCTF Core-III configuration have been analyzed using TRAC-PF1/MOD1. The objectives of these assessment activities were to obtain a better understanding of the phenomena occurring during the refill and reflood phases of a large-break loss-of-coolant accident, to determine the accuracy to which key parameters are calculated, and to identify deficiencies in key code correlations and models that provide closure for the differential equations defining thermal-hydraulic phenomena in pressurized water reactors. Overall, the agreement between calculated and measured values of peak cladding temperature is reasonable. In addition, TRAC adequately predicts many of the trends observed in both the integral effect and separate effect tests conducted in SCTF Core-III. The importance of assessment activities that consider potential contributors to discrepancies between measured and calculated results is described; these contributors arise from three sources: (1) knowledge about the facility configuration and operation, (2) facility modeling for code input, and (3) deficiencies in code correlations and models. An example is provided. 8 refs., 7 figs., 2 tabs.

  13. Instability due to a two recirculation pump trip in a BWR using RAMONA-4B computer code with 3D neutron kinetics

    SciTech Connect

    Cheng, H.S.; Rohatgi, U.S.

    1993-06-01

    An investigation was made of the potential for thermal-hydraulic instabilities coupled to neutronic feedback in a BWR due to a two recirculation pump trip event using the RAMONA-4B computer code with 3D neutron kinetics. It is concluded that a high-power (100%) and low-flow (75%) initial condition would most likely lead to in-phase density wave oscillations after the tripping of both recirculation pumps, and that RAMONA-4B is capable of predicting such thermal-hydraulic instabilities coupled to neutronic feedback in BWR and in SBWR.

  14. Implementation and validation of a Reynolds stress model in the COMMIX-1C/RSM and CAPS-3D/RSM codes

    SciTech Connect

    Chang, F.C.; Bottoni, M.

    1995-08-01

    A Reynolds stress model (RSM) of turbulence, based on seven transport equations, has been linked to the COMMIX-1C/RSM and CAPS-3D/RSM computer codes. Six of the equations model the transport of the components of the Reynolds stress tensor and the seventh models the dissipation of turbulent kinetic energy. When a fluid is heated, four additional transport equations are used: three for the turbulent heat fluxes and one for the variance of temperature fluctuations. All of the analytical and numerical details of the implementation of the new turbulence model are documented. The model was verified by simulation of homogeneous turbulence.

  15. A fully-neoclassical finite-orbit-width version of the CQL3D Fokker-Planck code

    NASA Astrophysics Data System (ADS)

    Petrov, Yu V.; Harvey, R. W.

    2016-11-01

    The time-dependent bounce-averaged CQL3D flux-conservative finite-difference Fokker-Planck equation (FPE) solver has been upgraded to include finite-orbit-width (FOW) capabilities which are necessary for an accurate description of neoclassical transport, losses to the walls, and transfer of particles, momentum, and heat to the scrape-off layer. The FOW modifications are implemented in the formulation of the neutral beam source, collision operator, RF quasilinear diffusion operator, and in synthetic particle diagnostics. The collisional neoclassical radial transport appears naturally in the FOW version due to the orbit-averaging of local collision coefficients coupled with transformation coefficients from local (R, Z) coordinates along each guiding-center orbit to the corresponding midplane computational coordinates, where the FPE is solved. In a similar way, the local quasilinear RF diffusion terms give rise to additional radial transport of orbits. We note that the neoclassical results are obtained for ‘full’ orbits, not dependent on a common small orbit-width approximation. Results of validation tests for the FOW version are also presented.

  16. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  17. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  18. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

    Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC), offers the facility to adapt real-time video streams in response to the dynamic conditions of multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile networks environment. We propose an optimised streaming algorithm with multi-fold technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile networks tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate the intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.

  19. Development of a locally mass flux conservative computer code for calculating 3-D viscous flow in turbomachines

    NASA Technical Reports Server (NTRS)

    Walitt, L.

    1982-01-01

    The VANS successive approximation numerical method was extended to the computation of three dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic, centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was generated in blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed locally mass flux conservative cross-sectional computer code. Both blade-to-blade and cross sectional modes of calculation were implemented for this problem. A triplet point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow; and cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.

  20. Memory bandwidth-scalable motion estimation for mobile video coding

    NASA Astrophysics Data System (ADS)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  1. Investigating the structure preserving encryption of high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Shahid, Zafar; Puech, William

    2013-02-01

    This paper presents a novel method for the real-time protection of the emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from CABAC entropy coding in H.264/AVC. In CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for the binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of bin strings in a context-aware manner. The encrypted bitstream has exactly the same bit rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture, and objects.

  2. Customer oriented SNR scalability scheme for scalable video coding

    NASA Astrophysics Data System (ADS)

    Li, Z. G.; Rahardja, S.

    2005-07-01

    Let the whole region be the whole bit rate range that customers are interested in, and a sub-region be a specific bit rate range. The weighting factor of each sub-region is determined according to customers' interest. A new type of region of interest (ROI) is defined for SNR scalability such that the gap between the coding efficiency of the SNR scalability scheme and that of state-of-the-art single layer coding for a sub-region is a monotonically non-increasing function of its weighting factor. This type of ROI is used as a performance index to design a customer oriented SNR scalability scheme. Our scheme can be used to achieve an optimal customer oriented scalable tradeoff (COST). The profit can thus be maximized.

  3. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bit rate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bit rates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
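
    As a hedged sketch of the bisection idea (the real procedure uses human assessors and its own stopping rules), the code below searches an ordered bitrate ladder for the lowest rate still judged indistinguishable from the anchor, using a stand-in oracle in place of the forced-choice response.

        # Bisection over a bitrate ladder to locate a JND boundary.
        def find_jnd(bitrates, same_quality_as_anchor):
            """bitrates sorted low->high; returns the lowest rate judged same as anchor."""
            lo, hi = 0, len(bitrates) - 1
            answer = hi                      # the anchor itself is trivially indistinguishable
            while lo <= hi:
                mid = (lo + hi) // 2
                if same_quality_as_anchor(bitrates[mid]):
                    answer = mid             # still indistinguishable: try lower rates
                    hi = mid - 1
                else:
                    lo = mid + 1
            return bitrates[answer]

        rates = [200, 400, 800, 1600, 3200, 6400]     # kbps ladder (made up)
        oracle = lambda r: r >= 1500                  # pretend the JND sits near 1.5 Mbps
        print(find_jnd(rates, oracle))                # -> 1600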

  4. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photodiode electronically, without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos. PMID:27137331

  5. Region-of-interest based rate control for UAV video coding

    NASA Astrophysics Data System (ADS)

    Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying

    2016-05-01

    To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAVs) over low-bandwidth links, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to an encoder based on the latest High Efficiency Video Coding (HEVC) standard to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level to avoid the inaccurate bit allocation produced by camera movement. Finally, using a more robust R-λ model, the quantization parameter (QP) for each LCU is calculated. The experimental results show that the proposed RC method achieves a lower bit rate error and higher reconstructed video quality by choosing appropriate pixel weights on the HEVC platform.
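
    For orientation, the sketch below shows the general shape of R-λ rate control with ROI-weighted bit allocation: bits are shared among coding units according to made-up ROI weights, λ is derived from the allocated bits-per-pixel, and a QP follows from the commonly cited HM relation QP ≈ 4.2005·ln(λ) + 13.7122. The weights and model constants here are illustrative, not the paper's tuned values.

        # ROI-weighted bit allocation feeding a hyperbolic R-lambda model.
        import math

        def lambda_from_bpp(bpp, alpha=3.2003, beta=-1.367):
            return alpha * (bpp ** beta)

        def qp_from_lambda(lam):
            return int(round(4.2005 * math.log(lam) + 13.7122))

        def allocate_lcu_bits(frame_bits, roi_weights):
            total = sum(roi_weights)
            return [frame_bits * w / total for w in roi_weights]

        # Four LCUs, the ROI map giving the two centre units triple weight.
        lcu_pixels = 64 * 64
        bits = allocate_lcu_bits(frame_bits=24000, roi_weights=[1, 3, 3, 1])
        for b in bits:
            lam = lambda_from_bpp(b / lcu_pixels)
            print(f"bits={b:7.0f}  lambda={lam:7.2f}  QP={qp_from_lambda(lam)}")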

  6. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  7. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  8. Automatic network-adaptive ultra-low-bit-rate video coding

    NASA Astrophysics Data System (ADS)

    Chien, Wei-Jung; Lam, Tuyet-Trang; Abousleman, Glen P.; Karam, Lina J.

    2006-05-01

    This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbits/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is UDP/IP guaranteed throughput, which is used to transmit the compressed video stream in real time, while the second is TCP/IP guaranteed delivery, which is used for two-way control and compression parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the vast capabilities of the proposed video coding system.
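
    As a hedged sketch of the feedback behavior described (the actual control messages and thresholds are not given in the abstract), the code below shows one plausible decoder-side rule that requests a lower bit rate when the measured packet loss ratio rises and probes upward when the link is clean; all thresholds and step sizes are arbitrary assumptions.

        # Decoder-side bit-rate throttling driven by measured packet loss.
        def next_bit_rate(current_bps, loss_ratio,
                          loss_high=0.05, loss_low=0.01,
                          down_factor=0.7, up_factor=1.1,
                          min_bps=256, max_bps=64000):
            """Return the bit rate requested for the next update interval."""
            if loss_ratio > loss_high:
                rate = current_bps * down_factor      # back off quickly under loss
            elif loss_ratio < loss_low:
                rate = current_bps * up_factor        # probe upward when the link is clean
            else:
                rate = current_bps
            return int(min(max(rate, min_bps), max_bps))

        rate = 8000
        for loss in [0.00, 0.02, 0.12, 0.12, 0.00]:
            rate = next_bit_rate(rate, loss)
            print(f"loss={loss:.2f} -> request {rate} bps")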

  9. Fast intra-prediction algorithms for high efficiency video coding standard

    NASA Astrophysics Data System (ADS)

    Kibeya, Hassan; Belghith, Fatma; Ben Ayed, Mohammed Ali; Masmoudi, Nouri

    2016-01-01

    High efficiency video coding (HEVC) is the latest video compression standard; it provides a significant improvement in compression ratio compared to all existing video coding standards. The intra-prediction procedure plays an important role in the HEVC encoder, providing up to 35 intra modes with larger coding units, and its high computational complexity needs to be alleviated. Toward this end, the paper proposes two fast intra-mode decision algorithms that exploit the features of video sequences. First, an early detection method based on zero transformed and quantized coefficients is applied to generate threshold values employed for early termination of the intra-decision process, which accelerates the encoding procedure. A second fast intra-mode decision algorithm is elaborated that relies on a refinement technique. Based on statistical analyses of frequently chosen modes, only a small subset of the candidate modes is considered for the intra-prediction process, which reduces the complexity of the intra-encoding procedure. The performance of the proposed algorithms is verified through comparative analysis of encoding time, visual image quality, and compression ratio. Compared to HM 10.0, the encoding time reduction can reach 69% with only a slight degradation of image quality and compression ratio.

  10. MNSR transient analyses and thermal hydraulic safety margins for HEU and LEU cores using the RELAP5-3D code

    SciTech Connect

    Dunn, F.E.; Thomas, J.; Liaw, J.; Matos, J.E.

    2008-07-15

    For safety analyses to support conversion of MNSR reactors from HEU fuel to LEU fuel, a RELAP5-3D model was set up to simulate the entire MNSR system. This model includes the core, the beryllium reflectors, the water in the tank and the water in the surrounding pool. The MCNP code was used to obtain the power distributions in the core and to obtain reactivity feedback coefficients for the transient analyses. The RELAP5-3D model was validated by comparing measured and calculated data for the NIRR-1 reactor in Nigeria. Comparisons include normal operation at constant power and a 3.77 mk rod withdrawal transient. Excellent agreement was obtained for core coolant inlet and outlet temperatures for operation at constant power, and for power level, coolant inlet temperature, and coolant outlet temperature for the rod withdrawal transient. In addition to the negative reactivity feedbacks from increasing core moderator and fuel temperatures, it was necessary to calculate and include positive reactivity feedback from temperature changes in the radial beryllium reflector and changes in the temperature and density of the water in the tank above the core and at the side of the core. The validated RELAP5-3D model was then used to analyze 3.77 mk rod withdrawal transients for LEU cores with two UO{sub 2} fuel pin designs. The impact of cracking of oxide LEU fuel is discussed. In addition, steady-state power operation at elevated power levels was evaluated to determine steady-state safety margins for onset of nucleate boiling and for onset of significant voiding. (author)

  11. Modeling the Backscatter and Transmitted Light of High Power Smoothed Beams with pF3D, a Massively Parallel Laser Plasma Interaction Code

    SciTech Connect

    Berger, R.L.; Divol, L.; Glenzer, S.; Hinkel, D.E.; Kirkwood, R.K.; Langdon, A.B.; Moody, J.D.; Still, C.H.; Suter, L.; Williams, E.A.; Young, P.E.

    2000-06-01

    Using the three-dimensional wave propagation code F3D [Berger et al., Phys. Fluids B 5, 2243 (1993); Berger et al., Phys. Plasmas 5, 4337 (1998)] and the massively parallel version pF3D [Still et al., Phys. Plasmas 7 (2000)], we have computed the transmitted and reflected light for laser and plasma conditions in experiments that simulated ignition hohlraum conditions. The frequency spectrum and the wavenumber spectrum of the transmitted light are calculated and used to identify the relative contributions of stimulated forward Brillouin scattering and self-focusing in hydrocarbon-filled balloons, commonly called gasbags. The effect of beam smoothing, smoothing by spectral dispersion (SSD), and polarization smoothing (PS) on the stimulated Brillouin backscatter (SBS) from Scale-1 NOVA hohlraums was simulated with the use of nonlinear saturation models that limit the amplitude of the driven acoustic waves. Other experiments on CO2 gasbags simultaneously measure, at a range of intensities, the SBS reflectivity and the Thomson scatter from the SBS-driven acoustic waves, providing a more detailed test of the modeling. These calculations also predict that the backscattered light will be very nonuniform in the near field (the focusing system optics), which is important for specifying the backscatter intensities to be tolerated by the National Ignition Facility laser system.

  12. Validation of 3D Code KATRIN For Fast Neutron Fluence Calculation of VVER-1000 Reactor Pressure Vessel by Ex-Vessel Measurements and Surveillance Specimens Results

    NASA Astrophysics Data System (ADS)

    Dzhalandinov, A.; Tsofin, V.; Kochkin, V.; Panferov, P.; Timofeev, A.; Reshetnikov, A.; Makhotin, D.; Erak, D.; Voloschenko, A.

    2016-02-01

    Usually, the synthesis of two-dimensional and one-dimensional discrete ordinates calculations is used to evaluate the neutron fluence on the VVER-1000 reactor pressure vessel (RPV) for the prognosis of radiation embrittlement. But there are some cases in which this approach is not applicable. For example, the latest VVER-1000 projects have an upgraded surveillance program: containers with surveillance specimens are located on the inner surface of the RPV, at the fast neutron flux maximum. The synthesis approach is therefore not well suited to calculating the local disturbance of the neutron field at the RPV inner surface behind the surveillance specimens, because of their complicated and heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel heights, and the applicability of the synthesis approach is also ambiguous for these fuel cycles. The synthesis approach is also not sufficiently accurate for estimating the neutron fluence in the RPV area above the core top. For these reasons, only 3D neutron transport codes seem satisfactory for calculating the neutron fluence on the VVER-1000 RPV. Direct 3D calculations are also recommended by modern regulations.

  13. Validation of the RPLUS3D Code for Supersonic Inlet Applications Involving Three-Dimensional Shock Wave-Boundary Layer Interactions

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A three-dimensional computational fluid dynamics code, RPLUS3D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for glancing shock wave-boundary layer interactions. Both laminar and turbulent flows were studied. A supersonic flow over a wedge mounted on a flat plate was numerically simulated. For the laminar case, the static pressure distribution, velocity vectors, and particle traces on the flat plate were obtained. For turbulent flow, both the Baldwin-Lomax and Chien two-equation turbulent models were used. The static pressure distributions, pitot pressure, and yaw angle profiles were computed. In addition, the velocity vectors and particle traces on the flat plate were also obtained from the computed solution. Overall, the computed results for both laminar and turbulent cases compared very well with the experimentally obtained data.

  14. Chemical oxygen-iodine laser (COIL) beam quality predictions using 3D Navier-Stokes (MINT) and wave optics (OCELOT) codes

    NASA Astrophysics Data System (ADS)

    Lampson, Alan I.; Plummer, David N.; Erkkila, John H.; Crowell, Peter G.; Helms, Charles A.

    1998-05-01

    This paper describes a series of analyses using the 3-d MINT Navier-Stokes and OCELOT wave optics codes to calculate beam quality in a COIL laser cavity. To make this analysis tractable, the problem was broken into two contributions to the medium quality; that associated with microscale disturbances primarily from the transverse iodine injectors, and that associated with the macroscale including boundary layers and shock-like effects. Results for both microscale and macroscale medium quality are presented for the baseline layer operating point in terms of single pass wavefront error. These results show that the microscale optical path difference effects are 1D in nature and of low spatial order. The COIL medium quality is shown to be dominated by macroscale effects; primarily pressure waves generated from flow/boundary layer interactions on the cavity shrouds.

  15. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon sub-pixel accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and residual layers, the freedom to use almost any progressive lossy video coding scheme as the first layer, and the added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.

  16. An effective packetization algorithm of LT codes for stable video streaming over wireless network

    NASA Astrophysics Data System (ADS)

    Lee, Dongju; Kim, Wan; Song, Hwangjun

    2011-09-01

    In this work, we propose an effective And-Or tree based packetization algorithm for Luby Transform (LT) codes to provide stable video streaming services by minimizing the deterioration of video streaming service quality caused by lost packets over error-prone wireless networks. To accomplish our goal, the proposed packetization algorithm considers the relationships among encoded symbols of LT codes based on an And-Or tree analysis tool, and then puts these encoded symbols into packets so as to minimize the packet loss effect during transmission and improve the decoding success rate of LT codes by reducing the correlations among packets. We conduct a mathematical analysis to establish the performance of our packetization algorithm for LT codes compared with the conventional packetization algorithm. Finally, the proposed system is fully implemented in Java and C/C++, and widely tested to show that the proposed packetization algorithm works reasonably well. The experimental results demonstrate that the proposed packetization algorithm supports more stable video streaming services with higher peak signal-to-noise ratio (PSNR) than the conventional packetization algorithm under various packet loss patterns, including random and burst packet loss patterns.
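
    For background, the sketch below shows generic LT encoding, the building block that the paper packetizes: each encoded symbol XORs a randomly chosen set of source symbols whose size is drawn from a degree distribution. The degree distribution here is a crude stand-in, and the And-Or-tree-aware grouping of symbols into packets proposed in the paper is not reproduced.

        # Generic LT (fountain) encoding of a few toy source symbols.
        import random

        def lt_encode(source_symbols, n_encoded, seed=0):
            rng = random.Random(seed)
            degrees = [1, 2, 2, 3, 3, 4, 5, 8]        # toy degree distribution
            encoded = []
            for _ in range(n_encoded):
                d = min(rng.choice(degrees), len(source_symbols))
                neighbors = rng.sample(range(len(source_symbols)), d)
                value = 0
                for i in neighbors:
                    value ^= source_symbols[i]        # XOR of the chosen source symbols
                encoded.append((tuple(neighbors), value))
            return encoded

        src = [0x12, 0x34, 0x56, 0x78, 0x9A]          # five toy source symbols (bytes)
        for nbrs, val in lt_encode(src, n_encoded=8):
            print(nbrs, hex(val))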

  17. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.

    PubMed

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2013-11-21

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
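
    As a rough illustration of the voxel-level comparison described above, the short sketch below computes the mean relative difference between two absorbed-dose maps over voxels receiving at least a given fraction of the maximum dose; the array shapes, the threshold and the synthetic data are assumptions made purely for illustration.

        import numpy as np

        def voxel_dose_difference(dose_ref, dose_test, threshold_fraction=0.01):
            """Mean relative difference (%) between two 3D dose maps, restricted to
            voxels receiving at least threshold_fraction of the maximum reference dose."""
            dose_ref = np.asarray(dose_ref, dtype=float)
            dose_test = np.asarray(dose_test, dtype=float)
            mask = dose_ref > threshold_fraction * dose_ref.max()
            rel = (dose_test[mask] - dose_ref[mask]) / dose_ref[mask]
            return 100.0 * np.mean(np.abs(rel))

        # Example: two noisy realisations of the same synthetic dose distribution.
        rng = np.random.default_rng(0)
        reference = rng.random((32, 32, 32))
        test = reference * (1.0 + 0.02 * rng.standard_normal(reference.shape))
        print(f"mean voxel-wise difference: {voxel_dose_difference(reference, test):.2f}%")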

  18. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  19. Impact of event-specific chorus wave realization for modeling the October 8-9, 2012, event using the LANL DREAM3D diffusion code

    NASA Astrophysics Data System (ADS)

    Cunningham, G.; Tu, W.; Chen, Y.; Reeves, G. D.; Henderson, M. G.; Baker, D. N.; Blake, J. B.; Spence, H.

    2013-12-01

    During the interval October 8-9, 2012, the phase-space density (PSD) of high-energy electrons exhibited a dropout preceding an intense enhancement observed by the MagEIS and REPT instruments aboard the Van Allen Probes. The evolution of the PSD suggests heating by chorus waves, which were observed to have high intensities at the time of the enhancement [1]. Although intense chorus waves were also observed during the first Dst dip on October 8, no PSD enhancement was observed at this time. We demonstrate a quantitative reproduction of the entire event that makes use of three recent modifications to the LANL DREAM3D diffusion code: 1) incorporation of a time-dependent, low-energy boundary condition from the MagEIS instrument, 2) use of a time-dependent estimate of the chorus wave intensity derived from observations of POES low-energy electron precipitation, and 3) use of an estimate of the last closed drift shell, beyond which electrons are assumed to have a lifetime that is proportional to their drift period around Earth. The key features of the event are quantitatively reproduced by the simulation, including the dropout on October 8 and a rapid increase in PSD early on October 9, with a peak near L*=4.2. The DREAM3D code predicts the dropout on October 8 because this feature is dominated by magnetospheric compression and outward radial diffusion; the L* of the last closed drift shell reaches a minimum value of 5.33 at 1026 UT on October 8. We find that a 'statistical' wave model based on historical CRRES measurements binned in AE* does not reproduce the enhancement because the peak wave amplitudes are only a few tens of pT, whereas an 'event-specific' model reproduces both the magnitude and timing of the enhancement very well, as shown in the Figure, because the peak wave amplitudes are 10x higher. [1] 'Electron Acceleration in the Heart of the Van Allen Radiation Belts', G. D. Reeves et al., Science 1237743, Published online 25 July 2013 [DOI:10.1126/science
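
    Radial diffusion is one of the transport processes that a code such as DREAM3D models; purely as an illustration, the sketch below advances the 1-D radial diffusion equation df/dt = L^2 d/dL (D_LL L^-2 df/dL) for the phase-space density with an explicit finite-difference step. The grid, the steep L-dependence assumed for D_LL and the fixed boundary values are illustrative assumptions, unrelated to the event-specific wave model discussed above.

        import numpy as np

        def radial_diffusion_step(f, L, D_LL, dt):
            """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 df/dL),
            keeping the boundary values fixed (illustrative boundary treatment)."""
            dL = L[1] - L[0]
            dfdL = np.diff(f) / dL                          # gradient at cell interfaces
            D_half = 0.5 * (D_LL[1:] + D_LL[:-1])
            L_half = 0.5 * (L[1:] + L[:-1])
            flux = D_half / L_half**2 * dfdL
            f_new = f.copy()
            f_new[1:-1] += dt * L[1:-1]**2 * np.diff(flux) / dL
            return f_new

        L = np.linspace(3.0, 6.0, 61)
        f = np.exp(-((L - 4.0) / 0.5)**2)                    # initial phase-space density (assumed)
        D_LL = 1e-3 * (L / 4.0)**10                          # assumed, steeply L-dependent coefficient
        for _ in range(1000):
            f = radial_diffusion_step(f, L, D_LL, dt=1e-3)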

  20. STEALTH: a Lagrange explicit finite-difference code for solid, structural, and thermohydraulic analysis. Volume 8B. STEALTH/WHAMSE: a 3-D fluid-structure interaction code

    SciTech Connect

    Not Available

    1984-10-01

    STEALTH is a family of computer codes that can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The version of STEALTH described in this volume is designed for calculations of fluid-structure interaction. This version of the program consists of a hydrodynamic version of STEALTH which has been coupled to a finite-element code, WHAMSE. STEALTH computes the transient response of the fluid continuum, while WHAMSE computes the transient response of shell and beam structures under external fluid loadings. The coupling between STEALTH and WHAMSE is performed during each cycle or step of a calculation. Separate calculations of fluid response and structure response are avoided, thereby giving a more accurate model of the dynamic coupling between fluid and structure. This volume provides the theoretical background, the finite-difference equations, the finite-element equations, a discussion of several sample problems, a listing of the input decks for the sample problems, a programmer's manual and a description of the input records for the STEALTH/WHAMSE computer program.

  1. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high-performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications on the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high-performance 3D graphics.

  2. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between the codec and the network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  3. Development of the 3D Parallel Particle-In-Cell Code IMPACT to Simulate the Ion Beam Transport System of VENUS (Abstract)

    NASA Astrophysics Data System (ADS)

    Qiang, J.; Leitner, D.; Todd, D. S.; Ryne, R. D.

    2005-03-01

    The superconducting ECR ion source VENUS serves as the prototype injector ion source for the Rare Isotope Accelerator (RIA) driver linac. The RIA driver linac requires a great variety of high charge state ion beams with up to an order of magnitude higher intensity than currently achievable with conventional ECR ion sources. In order to design the beam line optics of the low energy beam line for the RIA front end for the wide parameter range required for the RIA driver accelerator, reliable simulations of the ion beam extraction from the ECR ion source through the ion mass analyzing system are essential. The RIA low energy beam transport line must be able to transport intense beams (up to 10 mA) of light and heavy ions at 30 keV. For this purpose, LBNL is developing the parallel 3D particle-in-cell code IMPACT to simulate the ion beam transport from the ECR extraction aperture through the analyzing section of the low energy transport system. IMPACT, a parallel, particle-in-cell code, is currently used to model the superconducting RF linac section of RIA and is being modified in order to simulate DC beams from the ECR ion source extraction. By using the high performance of parallel supercomputing we will be able to account consistently for the changing space charge in the extraction region and the analyzing section. A progress report and early results in the modeling of the VENUS source will be presented.

  4. Development of the 3D Parallel Particle-In-Cell Code IMPACT to Simulate the Ion Beam Transport System of VENUS (Abstract)

    SciTech Connect

    Qiang, J.; Leitner, D.; Todd, D.S.; Ryne, R.D.

    2005-03-15

    The superconducting ECR ion source VENUS serves as the prototype injector ion source for the Rare Isotope Accelerator (RIA) driver linac. The RIA driver linac requires a great variety of high charge state ion beams with up to an order of magnitude higher intensity than currently achievable with conventional ECR ion sources. In order to design the beam line optics of the low energy beam line for the RIA front end for the wide parameter range required for the RIA driver accelerator, reliable simulations of the ion beam extraction from the ECR ion source through the ion mass analyzing system are essential. The RIA low energy beam transport line must be able to transport intense beams (up to 10 mA) of light and heavy ions at 30 keV. For this purpose, LBNL is developing the parallel 3D particle-in-cell code IMPACT to simulate the ion beam transport from the ECR extraction aperture through the analyzing section of the low energy transport system. IMPACT, a parallel, particle-in-cell code, is currently used to model the superconducting RF linac section of RIA and is being modified in order to simulate DC beams from the ECR ion source extraction. By using the high performance of parallel supercomputing we will be able to account consistently for the changing space charge in the extraction region and the analyzing section. A progress report and early results in the modeling of the VENUS source will be presented.

  5. Implementation of agronomical and geochemical modules into a 3D groundwater code for assessing nitrate storage and transport through unconfined Chalk aquifer

    NASA Astrophysics Data System (ADS)

    Picot-Colbeaux, Géraldine; Devau, Nicolas; Thiéry, Dominique; Pettenati, Marie; Surdyk, Nicolas; Parmentier, Marc; Amraoui, Nadia; Crastes de Paulet, François; André, Laurent

    2016-04-01

    The Chalk aquifer is the main water resource for domestic water supply in many parts of northern France. In some basins, groundwater is frequently affected by quality problems concerning nitrates. Often close to or above the drinking water standards, nitrate concentrations in groundwater are mainly due to historical agricultural practices, combined with leakage and aquifer recharge through the vadose zone. The complexity of the processes occurring in such an environment requires drawing on extensive knowledge of agronomy, geochemistry and hydrogeology in order to understand, model and predict the spatiotemporal evolution of nitrate content and provide a decision support tool for water producers and stakeholders. To succeed in this challenge, conceptual and numerical models representing the specificity of the Chalk aquifer accurately need to be developed. A multidisciplinary approach is developed to simulate storage and transport from the ground surface to the groundwater. This involves a new agronomic module, "NITRATE" (NItrogen TRansfer for Arable soil to groundwaTEr), a soil-crop model that calculates the nitrogen mass balance in arable soil, and the "PHREEQC" numerical code for geochemical calculations, both coupled with the 3D transient groundwater numerical code "MARTHE". In addition, new developments in the MARTHE code allow dual-porosity and dual-permeability calculations, which are needed in the fissured Chalk aquifer context. Integrating these existing multi-disciplinary tools is a real challenge: the number of parameters must be reduced by selecting the relevant equations and simplifying them without altering the signal. The robustness and validity of these numerical developments are tested step by step with several simulations constrained by climate forcing, land use and nitrogen inputs over several decades. First, simulations are performed on a 1D vertical unsaturated soil column representing experimental nitrates

  6. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  7. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    PubMed

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of the energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in terms of R-D performance at higher bit rates due to the side-information overhead. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 under the Common Test Conditions.
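
    The cascaded idea can be sketched in a few lines: a handful of matching-pursuit atoms approximates the structured part of a residual block, and a quantized DCT layer codes whatever remains. The random (untrained) dictionary, block size, atom count and quantizer step below are illustrative assumptions; the paper trains its dictionary and integrates both layers into the HEVC coding loop.

        import numpy as np
        from scipy.fft import dctn, idctn

        def sparse_layer(residual, dictionary, n_atoms):
            """First layer: greedy matching pursuit with a fixed number of atoms."""
            r = residual.ravel().astype(float).copy()
            approx = np.zeros_like(r)
            for _ in range(n_atoms):
                corr = dictionary.T @ r
                k = int(np.argmax(np.abs(corr)))
                approx += corr[k] * dictionary[:, k]
                r -= corr[k] * dictionary[:, k]
            return approx.reshape(residual.shape), r.reshape(residual.shape)

        def dct_layer(remainder, q_step):
            """Second layer: transform the remaining signal and quantize the coefficients."""
            coeffs = np.round(dctn(remainder, norm='ortho') / q_step)
            return idctn(coeffs * q_step, norm='ortho')

        rng = np.random.default_rng(1)
        block = rng.standard_normal((8, 8))                  # stand-in for a prediction residual
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)                       # illustrative, untrained dictionary
        layer1, remainder = sparse_layer(block, D, n_atoms=4)
        reconstruction = layer1 + dct_layer(remainder, q_step=0.5)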

  8. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
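
    A minimal sketch of the MST step on a 3D point catalog, using SciPy rather than the authors' pipeline: build the tree on the pairwise-distance graph, then cut long edges to expose groups. The synthetic coordinates and the cut threshold are assumptions for illustration only.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

        # Illustrative 3D "catalog": random positions standing in for galaxy coordinates.
        rng = np.random.default_rng(42)
        positions = rng.random((200, 3)) * 100.0

        # Minimum spanning tree of the complete pairwise-distance graph.
        mst = minimum_spanning_tree(squareform(pdist(positions))).toarray()

        # Simple grouping step: cut MST edges longer than an assumed threshold and
        # label the connected components that remain.
        threshold = np.percentile(mst[mst > 0], 90)
        pruned = np.where(mst > threshold, 0.0, mst)
        n_groups, labels = connected_components(pruned, directed=False)
        edges = np.transpose(np.nonzero(pruned))             # (i, j) pairs, e.g. to render in Blender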

  9. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    NASA Astrophysics Data System (ADS)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been proved in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided in unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of
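
    For readers unfamiliar with the baseline that the dynamic procedures generalize, the sketch below evaluates the standard Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 |S| with central differences on a uniform grid. The fixed Cs, the grid spacing and the synthetic velocity field are assumptions; the SDDM instead determines a scale-dependent coefficient dynamically.

        import numpy as np

        def smagorinsky_viscosity(u, v, w, dx, Cs=0.16):
            """Eddy viscosity nu_t = (Cs*dx)^2 * |S|, with |S| = sqrt(2 S_ij S_ij)
            computed from central-difference gradients on a uniform grid."""
            grads = [np.gradient(comp, dx) for comp in (u, v, w)]   # grads[i][j] = d u_i / d x_j
            S2 = np.zeros_like(u)
            for i in range(3):
                for j in range(3):
                    S_ij = 0.5 * (grads[i][j] + grads[j][i])
                    S2 += 2.0 * S_ij * S_ij
            return (Cs * dx) ** 2 * np.sqrt(S2)

        # Synthetic periodic velocity field on a small box (illustrative only).
        n = 32
        dx = 1.0 / n
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
        u, v, w = np.sin(2 * np.pi * Y), np.sin(2 * np.pi * Z), np.sin(2 * np.pi * X)
        nu_t = smagorinsky_viscosity(u, v, w, dx)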

  10. Comet assay in reconstructed 3D human epidermal skin models—investigation of intra- and inter-laboratory reproducibility with coded chemicals

    PubMed Central

    Pfuhler, Stefan

    2013-01-01

    Reconstructed 3D human epidermal skin models are being used increasingly for safety testing of chemicals. Based on EpiDerm™ tissues, an assay was developed in which the tissues were topically exposed to test chemicals for 3 h followed by cell isolation and assessment of DNA damage using the comet assay. Inter-laboratory reproducibility of the 3D skin comet assay was initially demonstrated using two model genotoxic carcinogens, methyl methane sulfonate (MMS) and 4-nitroquinoline-N-oxide, and the results showed good concordance among three different laboratories and with in vivo data. In Phase 2 of the project, intra- and inter-laboratory reproducibility was investigated with five coded compounds with different genotoxicity liability tested at three different laboratories. For the genotoxic carcinogens MMS and N-ethyl-N-nitrosourea, all laboratories reported a dose-related and statistically significant increase (P < 0.05) in DNA damage in every experiment. For the genotoxic carcinogen, 2,4-diaminotoluene, the overall result from all laboratories showed a smaller, but significant genotoxic response (P < 0.05). For cyclohexanone (CHN) (non-genotoxic in vitro and in vivo, and non-carcinogenic), an increase compared to the solvent control acetone was observed only in one laboratory. However, the response was not dose-related and CHN was judged negative overall, as was p-nitrophenol (p-NP) (genotoxic in vitro but not in vivo and non-carcinogenic), which was the only compound showing clear cytotoxic effects. For p-NP, significant DNA damage generally occurred only at doses that were substantially cytotoxic (>30% cell loss), and the overall response was comparable in all laboratories despite some differences in doses tested. The results of the collaborative study for the coded compounds were generally reproducible among the laboratories involved and intra-laboratory reproducibility was also good. These data indicate that the comet assay in EpiDerm™ skin models is a

  11. In-loop atom modulus quantization for matching pursuit and its application to video coding.

    PubMed

    De Vleeschouwer, Christophe; Zakhor, Avideh

    2003-01-01

    This paper provides a precise analytical study of the selection and modulus quantization of matching pursuit (MP) coefficients. We demonstrate that an optimal rate-distortion trade-off is achieved by selecting the atoms up to a quality-dependent threshold, and by defining the modulus quantizer in terms of that threshold. In doing so, we take into account quantization error re-injection resulting from inserting the modulus quantizer inside the MP atom computation loop. In-loop quantization not only improves coding performance, but also affects the optimal quantizer design for both uniform and nonuniform quantization. We measure the impact of our work in the context of video coding. For both uniform and nonuniform quantization, the precise understanding of the relation between atom selection and quantization results in significant improvements in terms of coding efficiency. At high bitrates, the proposed nonuniform quantization scheme results in 0.5 to 2 dB improvement over the previous method.
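
    The in-loop idea can be sketched as follows: the modulus quantizer sits inside the pursuit loop, so the quantization error is re-injected into the residual and can be absorbed by later atoms, and atom selection stops once the best correlation falls below a quality-dependent threshold. The random dictionary, the uniform quantizer and the particular threshold are illustrative assumptions, not the paper's optimized design.

        import numpy as np

        def in_loop_quantized_mp(signal, dictionary, threshold, q_step):
            """Matching pursuit with the modulus quantizer inside the loop: atoms are
            selected while |<r, g_k>| >= threshold, and the quantized (not exact)
            coefficient is subtracted, re-injecting the quantization error into r."""
            r = signal.astype(float).copy()
            atoms = []                                  # (atom index, quantized modulus, sign)
            while True:
                corr = dictionary.T @ r
                k = int(np.argmax(np.abs(corr)))
                c = corr[k]
                if abs(c) < threshold:
                    break
                q = q_step * np.round(abs(c) / q_step)  # uniform modulus quantizer (assumed)
                if q == 0.0:
                    break
                r -= np.sign(c) * q * dictionary[:, k]
                atoms.append((k, q, np.sign(c)))
            return atoms, r

        rng = np.random.default_rng(3)
        D = rng.standard_normal((64, 512))
        D /= np.linalg.norm(D, axis=0)
        x = 3.0 * D[:, 5] + 1.5 * D[:, 17] + 0.05 * rng.standard_normal(64)
        atoms, residual = in_loop_quantized_mp(x, D, threshold=0.4, q_step=0.25)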

  12. A workflow for handling heterogeneous 3D models with the TOUGH2 family of codes: Applications to numerical modeling of CO 2 geological storage

    NASA Astrophysics Data System (ADS)

    Audigane, Pascal; Chiaberge, Christophe; Mathurin, Frédéric; Lions, Julie; Picot-Colbeaux, Géraldine

    2011-04-01

    This paper is addressed to the TOUGH2 user community. It presents a new tool for handling simulations run with the TOUGH2 code with specific application to CO2 geological storage. This tool is composed of separate FORTRAN subroutines (or modules) that can be run independently, using input and output files in ASCII format for TOUGH2. These modules have been developed specifically for modeling of carbon dioxide geological storage, and their use with TOUGH2 and the Equation of State module ECO2N, dedicated to CO2-water-salt mixture systems, with TOUGHREACT, which is an adaptation of TOUGH2 with ECO2N and geochemical fluid-rock interactions, and with TOUGH2 and the EOS7C module dedicated to CO2-CH4 gas mixtures is described. The objective is to save time in the pre-processing, execution and visualization of complex geometry for geological system representation. The workflow is rapid and user-friendly, and future implementation to other TOUGH2 EOS modules for other contexts (e.g. nuclear waste disposal, geothermal production) is straightforward. Three examples are shown for validation: (i) leakage of CO2 up through an abandoned well; (ii) 3D reactive transport modeling of CO2 in a sandy aquifer formation in the Sleipner gas field (North Sea, Norway); and (iii) an estimation of enhanced gas recovery technology using CO2 as the injected and stored gas to produce methane in the K12B gas field (North Sea, Denmark).

  13. A Novel Motion Field Anchoring Paradigm for Highly Scalable Wavelet-Based Video Coding.

    PubMed

    Rufenacht, Dominic; Mathew, Reji; Taubman, David

    2016-01-01

    Existing video coders anchor motion fields at the frames that are to be predicted. In this paper, we demonstrate how changing the anchoring of motion fields to reference frames has some important advantages over conventional anchoring. We work with piecewise-smooth motion fields, and use breakpoints to signal discontinuities at moving object boundaries. We show how this discontinuity information can be used to resolve double mappings that arise when motion is warped from reference to target frames. We present an analytical model that allows us to determine weights for texture, motion, and breakpoints to guide the rate allocation for scalable encoding. Compared with the conventional way of anchoring motion fields, the proposed scheme requires fewer bits for the coding of motion; furthermore, the reconstructed video frames contain fewer ghosting artefacts. The experimental results show superior performance compared with traditional anchoring, and demonstrate the high scalability of the proposed method.

  14. Protection of HEVC Video Delivery in Vehicular Networks with RaptorQ Codes

    PubMed Central

    Martínez-Rach, Miguel; López, Otoniel; Malumbres, Manuel Pérez

    2014-01-01

    With future vehicles equipped with processing capability, storage, and communications, vehicular networks will become a reality. A vast number of applications will arise that will make use of this connectivity. Some of them will be based on video streaming. In this paper we focus on HEVC video coding standard streaming in vehicular networks and how it deals with packet losses with the aid of RaptorQ, a Forward Error Correction scheme. As vehicular networks are packet loss prone networks, protection mechanisms are necessary if we want to guarantee a minimum level of quality of experience to the final user. We have run simulations to evaluate which configurations fit better in this type of scenarios. PMID:25136675

  15. 3D printing of soft and wet systems benefit from hard-to-soft transition of transparent shape memory gels (presentation video)

    NASA Astrophysics Data System (ADS)

    Furukawa, Hidemitsu; Gong, Jin; Makino, Masato; Kabir, Md. Hasnat

    2014-04-01

    Recently we successfully developed novel transparent shape memory gels (SMG). The SMG memorize their original shapes during the gelation process. At room temperature, the SMG are elastic and show plasticity (yielding) under deformation. However, when heated above about 50 °C, the SMG undergo a hard-to-soft transition and return to their original shapes automatically. We focus on new soft and wet systems made of the SMG by 3-D printing technology.

  16. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    PubMed Central

    Li, Xin; Guo, Rui; Chen, Chao

    2014-01-01

    Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216

  17. Minimum distortion quantizer for fixed-rate 64-subband video coding

    NASA Astrophysics Data System (ADS)

    Alparone, Luciano; Andreadis, Alessandro; Argenti, Fabrizio; Benelli, Giuliano; Garzelli, Andrea; Tarchi, A.

    1995-02-01

    A motion-compensated sub-band coding (SBC) scheme for video signals, featuring a fixed rate and optimum quantizers, is presented. A block matching algorithm provides a suitable inter-frame prediction, and a 64 sub-band decomposition allows a high decorrelation of the motion-compensated difference field. The main drawback is that sub-bands containing sparse data with different statistics are produced, thus requiring run-length (RL) and variable-length coding (VLC) for best performance. However, most digital communication channels operate at a constant bit rate (BR); hence, fixed-rate video coding is the main goal, in order to reduce buffering delays. The approach followed in this work is to model the subbands as independent memoryless sources with generalized Gaussian PDFs and to design optimum uniform quantizers with the goal of minimizing distortion once a BR value has been specified, also accounting for the entropy of the RLs of zero/nonzero coefficients. The problem is stated in terms of entropy allocation among sub-bands that minimizes the overall distortion, analogously to optimal distortion allocation when a fixed quality is requested. The constrained minimum is found by means of Lagrange multipliers, once the parametric PDFs have been assessed from true TV sequences. This procedure provides the optimum step for uniform quantization of each sub-band, thus leading to discarding some of the least significant ones.
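
    The paper's allocation is over generalized-Gaussian subbands and also accounts for the run-length entropy; as a simpler illustration of the same Lagrangian idea, the sketch below performs classic reverse water-filling over independent Gaussian subbands, choosing a common "water level" theta by bisection so that the total rate meets a target. The subband variances and the rate target are assumptions.

        import numpy as np

        def allocate_rates(variances, total_rate, tol=1e-9):
            """Reverse water-filling: find theta such that the rates
            R_i = max(0, 0.5*log2(var_i/theta)) sum to total_rate."""
            variances = np.asarray(variances, dtype=float)

            def spent(theta):
                return np.sum(np.maximum(0.0, 0.5 * np.log2(variances / theta)))

            lo, hi = 1e-12, float(variances.max())
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if spent(mid) > total_rate:
                    lo = mid        # spending too many bits: raise the water level
                else:
                    hi = mid
            theta = 0.5 * (lo + hi)
            rates = np.maximum(0.0, 0.5 * np.log2(variances / theta))
            distortions = np.minimum(theta, variances)
            return rates, distortions

        # 64 subbands with decaying variances, one bit per sample on average (assumed).
        variances = np.exp(-0.1 * np.arange(64))
        rates, distortions = allocate_rates(variances, total_rate=64.0)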

  18. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low-complexity encoders supported by high-complexity decoders. A typical real-world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames, called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.

  19. Evaluating the effectiveness of SW-only video coding for real-time video transmission over low-rate wireless networks

    NASA Astrophysics Data System (ADS)

    Bartolini, Franco; Pasquini, Cristina; Piva, Alessandro

    2001-04-01

    The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H.263-coded video sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standardization Institute, developed for professional mobile radio users and providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the SW-only implementation of the decoder and by the low channel rate, a video compression technique such as that complying with the H.263 standard is still preferable to a simpler but less effective frame-based compression system.

  20. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  1. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast services is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
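
    A brute-force stand-in for the ILP makes the allocation problem concrete: enumerate MCS assignments for the SVC layers, discard those exceeding the airtime budget, and keep the assignment delivering the most layers across users, where a user decodes a layer only if its channel supports that layer's MCS and all lower layers were decoded. All numbers below are illustrative assumptions, and realistic instances would be handed to an ILP solver rather than enumerated.

        from itertools import product

        # Illustrative inputs (assumptions): per-MCS rate, the highest MCS each user's
        # channel supports, the normalized size of each SVC layer, and the airtime budget.
        mcs_rate = [1.0, 1.5, 2.0, 3.0, 4.5]
        user_best_mcs = [4, 4, 3, 2, 2, 1, 0]
        layer_bits = [1.0, 0.8, 0.6]            # base layer plus two enhancement layers
        time_budget = 1.2

        def layers_decoded(assignment, best_mcs):
            """A user decodes layer l only if it can also decode every lower layer."""
            n = 0
            for mcs in assignment:
                if best_mcs >= mcs:
                    n += 1
                else:
                    break
            return n

        best = (-1, None)
        for assignment in product(range(len(mcs_rate)), repeat=len(layer_bits)):
            airtime = sum(bits / mcs_rate[m] for bits, m in zip(layer_bits, assignment))
            if airtime > time_budget:
                continue
            total = sum(layers_decoded(assignment, u) for u in user_best_mcs)
            best = max(best, (total, assignment))

        print("layers delivered in total:", best[0], "with per-layer MCS:", best[1])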

  2. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast services is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  3. Tissue-plastinated vs. celloidin-embedded large serial sections in video, analog and digital photographic on-screen reproduction: a preliminary step to exact virtual 3D modelling, exemplified in the normal midface and cleft-lip and palate

    PubMed Central

    Landes, Constantin A; Weichert, Frank; Geis, Philipp; Wernstedt, Katrin; Wilde, Anja; Fritsch, Helga; Wagner, Mathias

    2005-01-01

    This study analyses tissue-plastinated vs. celloidin-embedded large serial sections, their inherent artefacts and their suitability for common video, analog or digital photographic on-screen reproduction. Subsequent virtual 3D microanatomical reconstruction will increase our knowledge of normal and pathological microanatomy for cleft-lip-palate (clp) reconstructive surgery. Of 18 fetal (six clp, 12 control) specimens, six randomized specimens (two clp) were BiodurE12-plastinated, sawn, burnished 90 µm thick transversely (five) or frontally (one), stained with azureII/methylene blue, and counterstained with basic-fuchsin (TP-AMF). Twelve remaining specimens (four clp) were celloidin-embedded, microtome-sectioned 75 µm thick transversely (ten) or frontally (two), and stained with haematoxylin–eosin (CE-HE). Computed planimetry gauged artefacts; structure differentiation was compared with light microscopy on video, analog and digital photography. Total artefact was 0.9% (TP-AMF) and 2.1% (CE-HE); TP-AMF showed higher colour contrast, gamut and luminance, and CE-HE more red contrast, saturation and hue (P < 0.4). All (100%) structures of interest were light microscopically discerned, 83% on video, 76% on analog photography and 98% in digital photography. Computed image analysis assessed the greatest colour contrast, gamut, luminance and saturation on video; the most detailed, colour-balanced and sharpest images were obtained with digital photography (P < 0.02). TP-AMF retained spatial oversight, covered the entire area of interest and should be combined in different specimens with CE-HE, which enables more refined muscle fibre reproduction. Digital photography is preferred for on-screen analysis. PMID:16050904

  4. Low-cost multi-hypothesis motion compensation for video coding

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Dong, Shengfu; Wang, Ronggang; Wang, Zhenyu; Ma, Siwei; Wang, Wenmin; Gao, Wen

    2014-02-01

    In conventional motion compensation, a prediction block is associated with only one motion vector in a P frame. Multi-hypothesis motion compensation (MHMC) was proposed to improve the prediction performance of conventional motion compensation. However, multiple motion vectors have to be searched and coded for MHMC. In this paper, we propose a new low-cost multi-hypothesis motion compensation (LMHMC) scheme. In LMHMC, a block can be predicted from multiple hypotheses with only one motion vector searched and coded into the bit-stream; the other motion vectors are predicted from the motion vectors of neighboring blocks, so both the encoding complexity and the bit-rate overhead of MHMC are reduced by the proposed LMHMC. By adding LMHMC as an additional mode in the MPEG internet video coding (IVC) platform, the B-D rate saving is up to 10%, and the average B-D rate saving is close to 5% in the Low Delay configuration. We also compare the performance of MHMC and LMHMC in the IVC platform: LMHMC improves on MHMC by about 2% on average.
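
    A hedged sketch of the central idea: the second hypothesis is fetched with a motion vector predicted from neighboring blocks (here a component-wise median) rather than searched and coded, and the two hypotheses are simply averaged. The median predictor, the equal weights and the block size are assumptions for illustration, not the exact rules of the IVC mode.

        import numpy as np

        def fetch_block(ref, y, x, mv, size=8):
            dy, dx = mv
            return ref[y + dy:y + dy + size, x + dx:x + dx + size]

        def lmhmc_prediction(ref, y, x, searched_mv, neighbor_mvs, size=8):
            """Two-hypothesis prediction with a single coded motion vector: hypothesis 1
            uses the searched MV, hypothesis 2 uses a MV predicted from the neighbors."""
            predicted_mv = tuple(int(np.median([mv[i] for mv in neighbor_mvs])) for i in range(2))
            h1 = fetch_block(ref, y, x, searched_mv, size).astype(float)
            h2 = fetch_block(ref, y, x, predicted_mv, size).astype(float)
            return 0.5 * (h1 + h2)                       # equal weights assumed

        ref_frame = np.random.default_rng(4).integers(0, 256, size=(64, 64))
        pred = lmhmc_prediction(ref_frame, y=16, x=16, searched_mv=(2, -1),
                                neighbor_mvs=[(1, -1), (2, 0), (3, -2)])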

  5. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years, digital imaging devices have become an integral part of our daily lives due to advancements in imaging, storage and wireless communication technologies. Power-Rate-Distortion efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error-resilient source coding techniques should be considered in conjunction with P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, the probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error-resilient source coding and resource allocation. Experimental results show that our optimal framework provides a consistent rate-distortion performance gain under different power constraints.

  6. Pixel-level Matching Based Multi-hypothesis Error Concealment Modes for Wireless 3D H.264/MVC Communication

    NASA Astrophysics Data System (ADS)

    El-Shafai, Walid

    2015-09-01

    3D multi-view video (MVV) consists of multiple video streams shot simultaneously by several cameras around a single scene. It is therefore an urgent task to achieve high 3D MVV compression to meet future bandwidth constraints while maintaining a high reception quality. 3D MVV coded bit-streams transmitted over wireless networks can suffer from error propagation in the space, time and view domains. Error concealment (EC) algorithms have the advantage of improving the received 3D video quality without any modifications to the transmission rate or to the encoder hardware or software. To improve the quality of the reconstructed 3D MVV, we propose an efficient adaptive EC algorithm with multi-hypothesis modes to conceal the erroneous Macro-Blocks (MBs) of intra-coded and inter-coded frames by exploiting the spatial, temporal and inter-view correlations between frames and views. Our proposed algorithm adapts to 3D MVV motion features and to the error locations. The lost MBs are optimally recovered by utilizing motion and disparity matching between frames and views on a pixel-by-pixel basis. Our simulation results show that the proposed adaptive multi-hypothesis EC algorithm can significantly improve the objective and subjective 3D MVV quality.
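
    One common ingredient of such EC schemes can be sketched simply: form a temporal candidate and an inter-view candidate for a lost macro-block and keep the one whose outer pixels best match the correctly received pixels bordering the loss (boundary matching). The candidates, block size and selection rule below are illustrative assumptions, not the proposed algorithm's pixel-level matching procedure.

        import numpy as np

        def boundary_error(frame, y, x, candidate, size=16):
            """Sum of absolute differences between the candidate's outer rows/columns
            and the received pixels that border the lost block."""
            top = np.abs(candidate[0, :] - frame[y - 1, x:x + size]).sum()
            left = np.abs(candidate[:, 0] - frame[y:y + size, x - 1]).sum()
            bottom = np.abs(candidate[-1, :] - frame[y + size, x:x + size]).sum()
            right = np.abs(candidate[:, -1] - frame[y:y + size, x + size]).sum()
            return top + left + bottom + right

        def conceal(frame, y, x, temporal_cand, interview_cand, size=16):
            """Pick, per lost block, the hypothesis (temporal or inter-view) whose
            borders best match the surrounding correctly received pixels."""
            errors = [boundary_error(frame, y, x, c, size) for c in (temporal_cand, interview_cand)]
            return (temporal_cand, interview_cand)[int(np.argmin(errors))]

        rng = np.random.default_rng(5)
        frame = rng.integers(0, 256, (64, 64)).astype(float)
        temporal = frame[16:32, 16:32] + rng.normal(0, 2, (16, 16))    # stand-in co-located block
        interview = frame[16:32, 16:32] + rng.normal(0, 8, (16, 16))   # stand-in disparity-shifted block
        frame[16:32, 16:32] = conceal(frame, 16, 16, temporal, interview)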

  7. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  8. Status of ITU and ISO/MPEG4 video coding standards at very low bit-rates

    NASA Astrophysics Data System (ADS)

    Schaphorst, Richard; Reader, Cliff

    1994-05-01

    The goal of the ISO project, designated MPEG4, is to develop a generic video coding syntax suitable for a wide range of applications such as videophone via the PSTN and mobile radio, security systems, mobile experts, emergency monitoring, educational networks, and networked games. It is anticipated that the coding algorithm will be a significant advancement relative to the basic interframe predictive 8 X 8 DCT design which is used in most digital TV standards today. Examples of advanced coding techniques being considered include fractals, analysis/synthesis, knowledge-based, and semantic coding.

  9. A parallel algorithm for motion estimation in video coding using the bilinear transformation.

    PubMed

    Konstantopoulos, Charalampos

    2015-01-01

    Accurate motion estimation between frames is important for drastically reducing data redundancy in video coding. However, advanced motion estimation methods are computationally intensive and their execution in real time usually requires a parallel implementation. In this paper, we investigate the parallel implementation of such a motion estimation technique. Specifically, we present a parallel algorithm for motion estimation based on the bilinear transformation on the well-known parallel model of the hypercube network and formally prove the time and the space complexity of the proposed algorithm. We also show that the parallel algorithm can also run on other hypercubic networks, such as butterfly, cube-connected-cycles, shuffle-exchange or de Bruijn network with only constant slowdown.

  10. Validation of a new method for finding the rotational axes of the knee using both marker-based roentgen stereophotogrammetric analysis and 3D video-based motion analysis for kinematic measurements.

    PubMed

    Roland, Michelle; Hull, M L; Howell, S M

    2011-05-01

    In a previous paper, we reported the virtual axis finder, a new method for finding the rotational axes of the knee. The virtual axis finder was validated through simulations that were subject to limitations. Hence, the objective of the present study was to perform a mechanical validation with two measurement modalities: 3D video-based motion analysis and marker-based roentgen stereophotogrammetric analysis (RSA). A two-rotational-axis mechanism was developed, which simulated internal-external (or longitudinal) and flexion-extension (FE) rotations. The actual axes of rotation were known with respect to the motion analysis and RSA markers to within ±0.0006 deg and ±0.036 mm, and ±0.0001 deg and ±0.016 mm, respectively. The orientation and position root mean squared errors for identifying the longitudinal rotation (LR) and FE axes with video-based motion analysis (0.26 deg, 0.28 mm, 0.36 deg, and 0.25 mm, respectively) were smaller than with RSA (1.04 deg, 0.84 mm, 0.82 deg, and 0.32 mm, respectively). The random error, or precision, in the orientation and position was significantly better (p=0.01 and p=0.02, respectively) in identifying the LR axis with video-based motion analysis (0.23 deg and 0.24 mm) than with RSA (0.95 deg and 0.76 mm). There was no significant difference in the bias errors between measurement modalities. In comparing the mechanical validations to the virtual validations, the virtual validations produced errors comparable to those of the mechanical validation. The only significant difference between the errors of the mechanical and virtual validations was the precision in the position of the LR axis while simulating video-based motion analysis (0.24 mm and 0.78 mm, p=0.019). These results indicate that video-based motion analysis with the equipment used in this study is the superior measurement modality for use with the virtual axis finder, but both measurement modalities produce satisfactory results. The lack of significant differences between

  11. Video Traffic Characteristics of Modern Encoding Standards: H.264/AVC with SVC and MVC Extensions and H.265/HEVC

    PubMed Central

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC. PMID:24701145

  12. Reliability of Pre-Service Physical Education Teachers' Coding of Teaching Videos Using Studiocode[R] Analysis Software

    ERIC Educational Resources Information Center

    Prusak, Keven; Dye, Brigham; Graham, Charles; Graser, Susan

    2010-01-01

    This study examines the coding reliability and accuracy of pre-service teachers in a teaching methods class using digital video (DV)-based teaching episodes and Studiocode analysis software. Student self-analysis of DV footage may offer a high tech solution to common shortfalls of traditional systematic observation and reflection practices by…

  13. Time-dependent distribution functions and resulting synthetic NPA spectra in C-Mod calculated with the CQL3D-Hybrid-FOW, AORSA full-wave, and DC Lorentz codes

    NASA Astrophysics Data System (ADS)

    Harvey, R. W.; Petrov, Yu.; Jaeger, E. F.; Berry, L. A.; Bonoli, P. T.; Bader, A.

    2015-12-01

    A time-dependent simulation of C-Mod pulsed TCRF power is made, obtaining minority hydrogen ion distributions with the CQL3D-Hybrid-FOW finite-orbit-width Fokker-Planck code. Cyclotron-resonant TCRF fields are calculated with the AORSA full-wave code. The RF diffusion coefficients used in CQL3D are obtained with the DC Lorentz gyro-orbit code for perturbed particle trajectories in the combined equilibrium and TCRF electromagnetic fields. Prior results with a zero-banana-width simulation using the CQL3D/AORSA/DC time-cycles showed a pronounced enhancement of the H distribution in the perpendicular velocity direction compared to results obtained from Stix's quasilinear theory, and this substantially increased the ramp-up rate of the observed vertically-viewed neutral particle analyzer (NPA) flux, in general agreement with experiment. However, the ramp-down of the NPA flux after the pulse remained long compared to the experiment. The present study compares the new FOW results, including relevant gyro-radius effects, to determine the importance of these new effects on the NPA time-dependence.

  14. Parallel tree code for large N-body simulation: Dynamic load balance and data distribution on a CRAY T3D system

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Ansaloni, R.; Antonuccio-Delogu, V.; Erbacci, G.; Gambera, M.; Pagliaro, A.

    1997-10-01

    N-body algorithms for long-range unscreened interactions like gravity belong to a class of highly irregular problems whose optimal solution is a challenging task for present-day massively parallel computers. In this paper we describe a strategy for optimal memory and work distribution which we have applied to our parallel implementation of the Barnes & Hut (1986) recursive tree scheme on a Cray T3D using the CRAFT programming environment. We have performed a series of tests to find an optimal data distribution in the T3D memory, and to identify a strategy for Dynamic Load Balance in order to obtain good performance when running large simulations (more than 10 million particles). The results of the tests show that the step duration depends on two main factors: data locality and T3D network contention. By increasing data locality we are able to minimize the step duration when the closest bodies (direct interactions) tend to be located in the same PE's local memory (contiguous block subdivision, high granularity), whereas the tree properties have a fine-grained distribution. In very large simulations, network contention gives rise to an unbalanced load. To remedy this we have devised an automatic work redistribution mechanism which provides good Dynamic Load Balance at the price of an insignificant overhead.

  15. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  16. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  17. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The second is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  18. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  19. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory or parabolized stability equations based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.
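    The N-factor analysis mentioned above amounts to integrating the local spatial amplification rate (the negative imaginary part of the streamwise wavenumber) along the marching path. A hedged toy example, with an entirely synthetic growth-rate curve standing in for LST/PSE output, follows:

      import numpy as np

      def n_factor(s, alpha_i):
          """Integrate the spatial growth rate -alpha_i over arc length s (trapezoid rule)."""
          growth = np.maximum(-np.asarray(alpha_i), 0.0)   # only amplified regions contribute
          return np.concatenate(([0.0],
                                 np.cumsum(0.5 * (growth[1:] + growth[:-1]) * np.diff(s))))

      s = np.linspace(0.0, 1.0, 101)                        # marching coordinate (toy units)
      alpha_i = -8.0 * np.exp(-((s - 0.5) / 0.15) ** 2)     # synthetic amplified-mode growth rate
      print(n_factor(s, alpha_i)[-1])                       # peak N-factor of this toy mode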

  20. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features.

    PubMed

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

    The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency compared to its predecessor H.264. The encoding time complexity has also increased multiple times, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a human visual attention modelling-based saliency feature and phase correlation-based motion features. The features are combined through a fusion process by developing a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences. PMID:26963813
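    The essence of the mode-selection step can be illustrated with a toy version of the pattern matching: fuse per-region saliency and motion energy with a content-adaptive weight, binarise the fused map, and keep only the inter-prediction modes whose partition template lies closest in Hamming distance. The weights, thresholds and 2x2 template codebook below are illustrative assumptions, not the values used in the paper.

      import numpy as np

      TEMPLATES = {                      # toy stand-ins for HEVC partition-aligned patterns
          'SKIP/2Nx2N': np.zeros((2, 2), dtype=int),
          '2NxN':       np.array([[1, 1], [0, 0]]),
          'Nx2N':       np.array([[1, 0], [1, 0]]),
          'NxN':        np.array([[1, 0], [0, 1]]),
      }

      def dominated_region_pattern(saliency, motion, w=0.5, thresh=0.5):
          """Fuse per-quadrant saliency and motion energy and binarise the result."""
          fused = w * saliency + (1.0 - w) * motion           # content-adaptive weighting
          return (fused > thresh * fused.max()).astype(int)

      def candidate_modes(pattern):
          """Return the template(s) closest to the block pattern (smallest Hamming distance)."""
          dist = {name: int(np.sum(tpl != pattern)) for name, tpl in TEMPLATES.items()}
          best = min(dist.values())
          return [name for name, d in dist.items() if d == best]

      saliency = np.array([[0.9, 0.8], [0.1, 0.2]])           # per-quadrant visual attention
      motion   = np.array([[0.7, 0.6], [0.2, 0.1]])           # per-quadrant motion energy
      print(candidate_modes(dominated_region_pattern(saliency, motion)))   # -> ['2NxN']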

  1. Real-time high-resolution downsampling algorithm on many-core processor for spatially scalable video coding

    NASA Astrophysics Data System (ADS)

    Buhari, Adamu Muhammad; Ling, Huo-Chong; Baskaran, Vishnu Monn; Wong, KokSheik

    2015-01-01

    The progression toward spatially scalable video coding (SVC) solutions for ubiquitous endpoint systems introduces challenges in sustaining real-time frame rates when downsampling high-resolution videos into multiple layers. In addressing these challenges, we put forward a hardware-accelerated downsampling algorithm on a parallel computing platform. First, we investigate the principal architecture of the serial downsampling algorithm in the Joint Scalable Video Model reference software to identify its performance limitations for spatially scalable coding. Then, a parallel multicore-based downsampling algorithm is studied as a benchmark. Experimental results for this algorithm using an 8-core processor exhibit a performance speedup of 5.25× against the serial algorithm in downsampling quad extended graphics array (1536p) video into three lower-resolution layers (i.e., Full-HD at 1080p, HD at 720p, and Quarter-HD at 540p). However, the achieved speedup does not translate into the minimum required frame rate of 15 frames per second (fps) for real-time video processing. To improve the speedup, a many-core-based downsampling algorithm using the compute unified device architecture (CUDA) parallel computing platform is proposed. The proposed algorithm increases the performance speedup to 26.14× against the serial algorithm. Crucially, the proposed algorithm exceeds the target frame rate of 15 fps, which in turn is advantageous to the overall performance of the video encoding process.
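    A minimal CPU-side sketch of the layered downsampling being parallelised is given below: each frame is low-pass filtered and decimated into successively smaller layers, and frames are farmed out to worker processes. The 2x2 box filter, layer count and frame sizes are illustrative assumptions; the paper's CUDA implementation maps the same per-pixel work onto GPU threads rather than processes.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def downsample_once(frame):
          """Box-filter (2x2) and decimate one greyscale frame; a crude anti-alias filter."""
          h, w = (frame.shape[0] // 2) * 2, (frame.shape[1] // 2) * 2
          f = frame[:h, :w].astype(np.float32)
          return 0.25 * (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2])

      def spatial_layers(frame, n_layers=3):
          """Return the pyramid of lower-resolution layers for one frame."""
          layers = []
          for _ in range(n_layers):
              frame = downsample_once(frame)
              layers.append(frame)
          return layers

      if __name__ == '__main__':
          frames = [np.random.randint(0, 255, (1536, 2048)) for _ in range(8)]  # QXGA-sized toy frames
          with ProcessPoolExecutor() as pool:             # one frame per worker process
              pyramids = list(pool.map(spatial_layers, frames))
          print([layer.shape for layer in pyramids[0]])   # (768, 1024), (384, 512), (192, 256)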

  2. Subgraphs Matching-Based Side Information Generation for Distributed Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Xiong, Hongkai; Lv, Hui; Zhang, Yongsheng; Song, Li; He, Zhihai; Chen, Tsuhan

    2010-12-01

    We adopt constrained relaxation for distributed multiview video coding (DMVC). The novel framework integrates graph-based segmentation and matching to generate inter-view correlated side information without knowing the camera parameters, inspired by subgraph semantics and sparse decomposition of high-dimensional scale-invariant feature data. The sparse data, as a good hypothesis space, aim for a best-matching optimization of inter-view side information with compact syndromes from the inferred relaxed coset. The plausible filling-in from a priori feature constraints between neighboring views can reinforce the compensation of inter-view side-information generation for joint multiview decoding. The graph-based representations of multiview images are adopted as the constrained relaxation, which assists the inter-view correlation matching for subgraph semantics of the original Wyner-Ziv image by means of graph-based image segmentation and the associated scale-invariant feature detector MSER (maximally stable extremal regions) and descriptor SIFT (scale-invariant feature transform). In order to find a distinctive feature matching with a more stable approximation, linear (PCA-SIFT) and nonlinear (locally linear embedding) projections are adopted to reduce the dimension of the SIFT descriptors, and a TPS (thin plate spline) warping model is used to capture a more accurate inter-view motion model. The experimental results validate the high estimation precision and the rate-distortion improvements.

  3. Reconstruction for distributed video coding: a Markov random field approach with context-adaptive smoothness prior

    NASA Astrophysics Data System (ADS)

    Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu

    2010-07-01

    An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames with decoded bit-planes. So far, there are two major approaches: the Maximum a Posteriori (MAP) reconstruction and the Minimum Mean Square Error (MMSE) reconstruction algorithms. However, these approaches do not exploit smoothness constraints in natural images. In this paper, we model a Wyner-Ziv frame by Markov random fields (MRFs), and produce reconstruction results by finding an MAP estimation of the MRF model. In the MRF model, the energy function consists of two terms: a data term, MSE distortion metric in this paper, measuring the statistical correlation between side-information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at decoders. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension to the MAP and MMSE approaches by exploiting the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.
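    The energy being minimised can be written, for a pixel labelling x given side information y, as E(x) = sum_i (x_i - y_i)^2 + sum_i lambda_i sum_{j in N(i)} |x_i - x_j|, where lambda_i is the context-adaptive smoothness weight. The sketch below minimises such an energy with a few sweeps of iterated conditional modes (ICM); the candidate value set and the weights are illustrative assumptions, and the paper's actual optimisation and weight-estimation procedures may differ.

      import numpy as np

      def reconstruct(side_info, candidates, lam, n_sweeps=5):
          """MAP-style reconstruction by ICM.
          side_info: HxW side-information frame; candidates: admissible pixel values
          (e.g. quantisation-bin representatives); lam: HxW per-pixel smoothness weights."""
          x = side_info.astype(np.float32).copy()
          H, W = x.shape
          for _ in range(n_sweeps):
              for i in range(H):
                  for j in range(W):
                      nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                              if 0 <= a < H and 0 <= b < W]
                      costs = [(c - side_info[i, j]) ** 2                   # data term (MSE)
                               + lam[i, j] * sum(abs(c - n) for n in nbrs)  # smoothness term
                               for c in candidates]
                      x[i, j] = candidates[int(np.argmin(costs))]
          return x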

  4. Using game theory for perceptual tuned rate control algorithm in video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game-theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
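    For intuition, a toy version of the macroblock-level game is sketched below. Each macroblock has a disagreement point (its minimum acceptable bits) and a bargaining weight standing in for the complexity and perceptual terms of the actual scheme; with linear utilities, the (asymmetric) Nash Bargaining Solution splits the surplus budget in proportion to the weights. All names and numbers here are illustrative.

      def nash_bit_allocation(frame_budget, min_bits, weights):
          """Split frame_budget so that the weighted Nash product of surpluses is maximised."""
          surplus = frame_budget - sum(min_bits)
          assert surplus >= 0, "budget must cover every macroblock's disagreement point"
          total_w = sum(weights)
          return [m + surplus * w / total_w for m, w in zip(min_bits, weights)]

      # Three macroblocks competing for a 1200-bit frame budget.
      print(nash_bit_allocation(1200, min_bits=[100, 150, 200], weights=[0.5, 0.3, 0.2]))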

  5. Bit allocation algorithm with novel view synthesis distortion model for multiview video plus depth coding.

    PubMed

    Chung, Tae-Young; Sim, Jae-Young; Kim, Chang-Su

    2014-08-01

    An efficient bit allocation algorithm based on a novel view synthesis distortion model is proposed for the rate-distortion optimized coding of multiview video plus depth sequences in this paper. We decompose an input frame into nonedge blocks and edge blocks. For each nonedge block, we linearly approximate its texture and disparity values, and derive a view synthesis distortion model, which quantifies the impacts of the texture and depth distortions on the qualities of synthesized virtual views. On the other hand, for each edge block, we use its texture and disparity gradients for the distortion model. In addition, we formulate a bit-rate allocation problem in terms of the quantization parameters for texture and depth data. By solving the problem, we can optimally divide a limited bit budget between the texture and depth data, in order to maximize the qualities of synthesized virtual views, as well as those of encoded real views. Experimental results demonstrate that the proposed algorithm yields the average PSNR gains of 1.98 and 2.04 dB in two-view and three-view scenarios, respectively, as compared with a benchmark conventional algorithm.
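    Conceptually, the optimisation searches over texture/depth quantiser pairs for the split that minimises the modelled synthesis distortion within the bit budget. The brute-force sketch below uses toy exponential rate and distortion curves and a linearised two-term view-synthesis distortion model; these stand-ins only illustrate the search structure, not the paper's derived model.

      import itertools

      def toy_rate(qp):           # bits fall roughly exponentially with QP
          return 1000.0 * 2 ** (-(qp - 22) / 6.0)

      def toy_distortion(qp):     # MSE rises roughly exponentially with QP
          return 2 ** ((qp - 22) / 3.0)

      def split_qp(bit_budget, a=0.7, b=0.3, qps=range(22, 47)):
          """Return (model distortion, texture QP, depth QP) for the best feasible pair."""
          best = None
          for qt, qd in itertools.product(qps, qps):
              if toy_rate(qt) + toy_rate(qd) > bit_budget:
                  continue
              d_view = a * toy_distortion(qt) + b * toy_distortion(qd)  # linearised VSD model
              if best is None or d_view < best[0]:
                  best = (d_view, qt, qd)
          return best

      print(split_qp(bit_budget=900))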

  6. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  7. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  8. GPM 3D Flyby Video of Lester

    NASA Video Gallery

    On Aug. 25, GPM found rain was falling at a rate of over 54 mm (2.1 inches) per hour in rain bands east of Lester's center. Cloud top heights were reaching about 12km (7.4 miles) in the tallest sto...

  9. Benchmarking of calculated projectile fragmentation cross-sections using the 3-D, MC codes PHITS, FLUKA, HETC-HEDS, MCNPX_HI, and NUCFRG2

    NASA Astrophysics Data System (ADS)

    Sihver, L.; Mancusi, D.; Niita, K.; Sato, T.; Townsend, L.; Farmer, C.; Pinsky, L.; Ferrari, A.; Cerutti, F.; Gomes, I.

    Particles and heavy ions are used in various fields of nuclear physics, medical physics, and materials science, and their interactions with different media, including human tissue and critical organs, have therefore been carefully investigated both experimentally and theoretically since the 1930s. However, heavy-ion transport involves many complex processes, and measurements for all possible systems, including critical organs, would be impractical or too expensive; e.g., direct measurements of dose equivalents to critical organs in humans cannot be performed. A reliable and accurate particle and heavy-ion transport code is therefore an essential tool in the design study of accelerator facilities as well as for various other applications. Recently, new applications have also arisen within transmutation and reactor science, space and medicine, especially radiotherapy, and several accelerator facilities are operating or planned for construction. Accurate knowledge of the physics of interaction of particles and heavy ions is also necessary for estimating radiation damage to equipment used on space vehicles, for calculating the transport of heavy ions in the galactic cosmic rays (GCR) through the interstellar medium, and for the evolution of the heavier elements after the Big Bang. Concerns about the biological effects of space radiation and space dosimetry are increasing rapidly due to the prospect of long-duration astronaut missions, both in relation to the International Space Station and to manned interplanetary missions in the near future. Radiation protection studies for crews of international flights at high altitude have also received considerable attention in recent years. There is therefore a need to develop accurate and reliable particle and heavy-ion transport codes. To be able to calculate complex geometries, including production and transport of protons, neutrons, and alpha particles, 3-dimensional transport using the Monte Carlo (MC) technique must be used. Today

  10. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations describing reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in the groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code, and the code can then be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported into GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials described below provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  11. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to the Gauss-Jordan elimination method employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficients entry, instead of n, into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay. PMID:23940530
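    As background, conventional RNC (the baseline MATIN improves on) mixes the n source blocks of a segment with random coefficients, ships the coefficient vector in the packet header, and decodes by Gauss-Jordan elimination once n linearly independent packets arrive. The toy below works over the prime field GF(257) instead of the usual GF(2^8) to keep the arithmetic obvious; it is a sketch of the baseline, not of MATIN's coefficient-matrix construction.

      import random
      P = 257  # prime field modulus (toy substitute for GF(2^8) arithmetic)

      def encode(blocks):
          """Return (coefficient vector, coded block) for one outgoing packet."""
          coeffs = [random.randrange(P) for _ in blocks]
          coded = [sum(c * b for c, b in zip(coeffs, col)) % P for col in zip(*blocks)]
          return coeffs, coded

      def decode(packets, n):
          """Gauss-Jordan elimination over GF(P); packets = [(coeffs, coded), ...]."""
          rows = [list(c) + list(d) for c, d in packets[:n]]
          for i in range(n):
              piv = next(r for r in range(i, n) if rows[r][i])  # assumes full rank
              rows[i], rows[piv] = rows[piv], rows[i]
              inv = pow(rows[i][i], P - 2, P)                   # modular inverse (Fermat)
              rows[i] = [x * inv % P for x in rows[i]]
              for r in range(n):
                  if r != i and rows[r][i]:
                      f = rows[r][i]
                      rows[r] = [(x - f * y) % P for x, y in zip(rows[r], rows[i])]
          return [row[n:] for row in rows]

      blocks = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]       # three source blocks
      packets = [encode(blocks) for _ in range(3)]
      print(decode(packets, 3))                                 # recovers the blocks (w.h.p.)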

  12. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How should we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  13. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    SciTech Connect

    Mirin, A.A.

    1988-07-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction.

  14. Full vector (3-D) inflow simulation in natural and wind farm environments using an expanded version of the SNLWIND (Veers) turbulence code

    SciTech Connect

    Kelley, N.D.

    1992-11-01

    We have recently expanded the numerical turbulence simulation (SNLWIND) developed by Veers [1] to include all three components of the turbulent wind vector. We have also configured the code to simulate the characteristics of turbulent wind fields upwind and downwind of a large wind farm, as well as over uniform, flat terrain. Veers's original method only simulates the longitudinal component of the wind in neutral flow. This paper overviews the development of the spectral distribution, spatial coherence, and cross-correlation models used to expand the SNLWIND code to include the three components of the turbulent wind over a range of atmospheric stabilities. These models are based on extensive measurements of the turbulence characteristics immediately upwind and downwind of a large wind farm in San Gorgonio Pass, California.

  15. Full vector (3-D) inflow simulation in natural and wind farm environments using an expanded version of the SNLWIND (Veers) turbulence code

    NASA Astrophysics Data System (ADS)

    Kelley, N. D.

    1992-11-01

    We have recently expanded the numerical turbulence simulation (SNLWIND) developed by Veers to include all three components of the turbulent wind vector. We have also configured the code to simulate the characteristics of turbulent wind fields upwind and downwind of a large wind farm, as well as over uniform, flat terrain. Veers's original method only simulates the longitudinal component of the wind in neutral flow. This paper overviews the development of the spectral distribution, spatial coherence, and cross-correlation models used to expand the SNLWIND code to include the three components of the turbulent wind over a range of atmospheric stabilities. These models are based on extensive measurements of the turbulence characteristics immediately upwind and downwind of a large wind farm in San Gorgonio Pass, California.

  16. Comparison of the 3-D Deterministic Neutron Transport Code Attila® to Measured Data, MCNP and MCNPX for the Advanced Test Reactor

    SciTech Connect

    D. Scott Lucas; D. S. Lucas

    2005-09-01

    An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to apply the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the development of Attila models for ATR, capabilities of Attila, the generation and use of different cross-section libraries, and comparisons to ATR data, MCNP, MCNPX and future applications.

  17. RELAP5-3D User Problems

    SciTech Connect

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) [1] is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics [2] and 3D neutron kinetics [3,4]. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature [5-10]. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users' seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  18. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  19. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  20. SHAPEMOL: a 3D code for calculating CO line emission in planetary and protoplanetary nebulae. Detailed model-fitting of the complex nebula NGC 6302

    NASA Astrophysics Data System (ADS)

    Santander-García, M.; Bujarrabal, V.; Koning, N.; Steffen, W.

    2015-01-01

    Context. Modern instrumentation in radioastronomy constitutes a valuable tool for studying the Universe: ALMA has reached unprecedented sensitivities and spatial resolution, while Herschel/HIFI has opened a new window (most of the sub-mm and far-infrared ranges are only accessible from space) for probing molecular warm gas (~50-1000 K). On the other hand, the software SHAPE has emerged in the past few years as a standard tool for determining the morphology and velocity field of different kinds of gaseous emission nebulae via spatio-kinematical modelling. Standard SHAPE implements radiative transfer solving, but it is only available for atomic species and not for molecules. Aims: Being aware of the growing importance of the development of tools for simplifying the analyses of molecular data from new-era observatories, we introduce the computer code shapemol, a complement to SHAPE, with which we intend to fill the so-far under-developed molecular niche. Methods: shapemol enables user-friendly, spatio-kinematic modelling with accurate non-LTE calculations of excitation and radiative transfer in CO lines. Currently, it allows radiative transfer solving in the 12CO and 13CO J = 1-0 to J = 17-16 lines, but its implementation permits easy extension of the code to different transitions and other molecular species, either by the code developers or by the user. Used alongside SHAPE, shapemol allows one to easily generate synthetic maps to test against interferometric observations, as well as synthetic line profiles to match single-dish observations. Results: We give a full description of how shapemol works, and we discuss its limitations and the sources of uncertainty to be expected in the final synthetic profiles or maps. As an example of the power and versatility of shapemol, we build a model of the molecular envelope of the planetary nebula NGC 6302 and compare it with 12CO and 13CO J = 2-1 interferometric maps from SMA and high-J transitions from Herschel/HIFI. We find the

  1. HST3D; a computer code for simulation of heat and solute transport in three-dimensional ground-water flow systems

    USGS Publications Warehouse

    Kipp, K.L.

    1987-01-01

    The Heat- and Solute-Transport Program (HST3D) simulates groundwater flow and associated heat and solute transport in three dimensions. The three governing equations are coupled through the interstitial pore velocity, the dependence of the fluid density on pressure, temperature, and the solute-mass fraction, and the dependence of the fluid viscosity on temperature and solute-mass fraction. The solute-transport equation is for only a single solute species with possible linear equilibrium sorption and linear decay. Finite-difference techniques are used to discretize the governing equations using a point-distributed grid. The flow-, heat-, and solute-transport equations are solved, in turn, after a partial Gauss-reduction scheme is used to modify them. The modified equations are more tightly coupled and have better stability for the numerical solutions. The basic source-sink term represents wells. A complex well-flow model may be used to simulate specified flow rate and pressure conditions at the land surface or within the aquifer, with or without pressure and flow-rate constraints. Boundary condition types offered include specified value, specified flux, leakage, heat conduction, an approximate free surface, and two types of aquifer influence functions. All boundary conditions can be functions of time. Two techniques are available for solution of the finite-difference matrix equations. One technique is a direct-elimination solver, using equations reordered by alternating diagonal planes. The other technique is an iterative solver, using two-line successive over-relaxation. A restart option is available for storing intermediate results and restarting the simulation at an intermediate time with modified boundary conditions. This feature also can be used as protection against computer system failure. Data input and output may be in metric (SI) units or inch-pound units. Output may include tables of dependent variables and parameters, zoned-contour maps, and plots of the

  2. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.

  3. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation. PMID:25805426

  4. Static & Dynamic Response of 3D Solids

    1996-07-15

    NIKE3D is a large deformations 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature dependent constitutive models are available.

  5. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  6. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    SciTech Connect

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-07-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or in the case of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We will first recall those results, and we will then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the various physical problems that we can treat. Finally we will present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
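    The Shannon-entropy diagnostic that the Boltzmann criterion generalizes is easy to state: bin the fission source sites of each cycle on a coarse spatial mesh and track the entropy of the binned distribution from cycle to cycle; the source is judged stationary once the entropy fluctuates about a constant value. The snippet below is a generic illustration with synthetic site data, not Tripoli-4 code.

      import numpy as np

      def shannon_entropy(sites, edges):
          """Shannon entropy (bits) of fission source sites binned on a spatial mesh."""
          counts, _ = np.histogramdd(sites, bins=edges)
          p = counts.ravel() / counts.sum()
          p = p[p > 0]
          return float(-np.sum(p * np.log2(p)))

      edges = [np.linspace(0.0, 1.0, 9)] * 3          # 8x8x8 mesh over a unit cube
      rng = np.random.default_rng(0)
      for cycle in range(5):
          sites = rng.random((10000, 3))              # stand-in for one cycle's source sites
          print(cycle, round(shannon_entropy(sites, edges), 3))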

  7. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  8. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

    Google started an open-source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious effort to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec VP9. Although the project is still in its early stages, a set of new experimental coding tools has already been added to baseline VP9 to achieve modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.

  9. Suppression of SRS induced crosstalk in RF-video overlay TWDM-PON system using dicode coding.

    PubMed

    Li, Jun; Bi, Meihua; He, Hao; Hu, Weisheng

    2014-09-01

    In this paper, we investigate the nonlinear Raman crosstalk in an RF-video overlay time- and wavelength-division multiplexed passive optical network (TWDM-PON), and propose a novel spectrum-reshaping method based on dicode coding to mitigate this crosstalk. Dicode coding features ultra-low power spectral density in the low-frequency region, which can effectively reduce the nonlinear Raman crosstalk on the RF-video signal. Experimental results show that, compared with traditional non-return-to-zero on-off keying (NRZ-OOK) signals, the crosstalk on the RF-video signal can be reduced by 10-14 dB when the launch power per TWDM-PON channel varies from 10 dBm to 15 dBm. The transmission of a 10-Gb/s dicode signal over 20 km of standard single-mode fiber (SSMF) is also demonstrated, with a receiver sensitivity of -31 dBm at a bit error ratio (BER) of 3.8e-3.
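    The reason dicode coding reshapes the spectrum is that each transmitted symbol is the difference of successive (precoded) data symbols, i.e. a 1-D partial response, which places a spectral null at DC and thins out the low-frequency band where the Raman crosstalk onto the RF-video overlay is strongest. The snippet below uses the textbook 1/(1+D) precoder and a crude periodogram purely to illustrate that reshaping; it does not model the optical link itself.

      import numpy as np

      def dicode_encode(bits):
          """Precode with 1/(1+D) (mod 2), map to +/-1, then take successive differences (1-D)."""
          pre = np.zeros(len(bits) + 1, dtype=int)
          for n, b in enumerate(bits):
              pre[n + 1] = b ^ pre[n]
          sym = 2 * pre - 1
          return sym[1:] - sym[:-1]                    # ternary dicode sequence {-2, 0, +2}

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 1 << 14)
      nrz = 2 * bits - 1
      for name, x in (("NRZ", nrz), ("dicode", dicode_encode(bits))):
          psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
          low = psd[1:len(psd) // 16].mean()           # average power in the lowest ~6% of band
          print(f"{name:6s} low-frequency power ~ {low:.2f}")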

  10. Explicit 3-D Hydrodynamic FEM Program

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continua. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  11. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  12. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  13. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in tokamaks and modern high-performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  14. Regional bit allocation and rate distortion optimization for multiview depth video coding with view synthesis distortion model.

    PubMed

    Zhang, Yun; Kwong, Sam; Xu, Long; Hu, Sudeng; Jiang, Gangyi; Kuo, C-C Jay

    2013-09-01

    In this paper, we propose a view synthesis distortion model (VSDM) that establishes the relationship between depth distortion and view synthesis distortion for regions with different characteristics: the color texture area corresponding depth (CTAD) region and the color smooth area corresponding depth (CSAD) region, respectively. With this VSDM, we propose regional bit allocation (RBA) and rate distortion optimization (RDO) algorithms for multiview depth video coding (MDVC) that allocate more bits to CTAD regions for rendering quality and fewer bits to CSAD regions for compression efficiency. Experimental results show that the proposed VSDM-based RBA and RDO can improve the coding efficiency significantly for the test sequences. In addition, the proposed overall MDVC algorithm that integrates VSDM-based RBA and RDO achieves 9.99% and 14.51% bit rate reduction on average at high and low bit rates, respectively. It can improve virtual view image quality by 0.22 and 0.24 dB on average at high and low bit rates, respectively, when compared with the original joint multiview video coding model. The RD performance comparisons using five different metrics also validate the effectiveness of the proposed overall algorithm. In addition, the proposed algorithms can be applied to both INTRA and INTER frames.
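    A toy version of the regional split driving this allocation is sketched below: a depth block is labelled CTAD when the co-located colour (luma) block is textured, measured by gradient energy, and CSAD otherwise, and CSAD blocks are then coded with a coarser depth quantiser. The block size, gradient threshold and QP offset are illustrative assumptions.

      import numpy as np

      def classify_blocks(luma, block=16, thresh=100.0):
          """Boolean map per block: True = CTAD (textured colour area), False = CSAD (smooth)."""
          gy, gx = np.gradient(luma.astype(np.float32))
          energy = gx ** 2 + gy ** 2
          h, w = luma.shape
          labels = np.zeros((h // block, w // block), dtype=bool)
          for i in range(labels.shape[0]):
              for j in range(labels.shape[1]):
                  tile = energy[i * block:(i + 1) * block, j * block:(j + 1) * block]
                  labels[i, j] = tile.mean() > thresh
          return labels

      def depth_qp_map(labels, base_qp=32, csad_offset=4):
          """Spend fewer bits (larger QP) on depth blocks whose colour area is smooth."""
          return np.where(labels, base_qp, base_qp + csad_offset)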

  15. Molecular evolution of VP3, VP1, 3C(pro) and 3D(pol) coding regions in coxsackievirus group A type 24 variant isolates from acute hemorrhagic conjunctivitis in 2011 in Okinawa, Japan.

    PubMed

    Nidaira, Minoru; Kuba, Yumani; Saitoh, Mika; Taira, Katsuya; Maeshiro, Noriyuki; Mahoe, Yoko; Kyan, Hisako; Takara, Taketoshi; Okano, Sho; Kudaka, Jun; Yoshida, Hiromu; Oishi, Kazunori; Kimura, Hirokazu

    2014-04-01

    A large acute hemorrhagic conjunctivitis (AHC) outbreak occurred in 2011 in Okinawa Prefecture in Japan. Ten strains of coxsackievirus group A type 24 variant (CA24v) were isolated from patients with AHC, and full sequence analysis of the VP3, VP1, 3C(pro) and 3D(pol) coding regions was performed. To assess time-scale evolution, phylogenetic analysis was performed using the Bayesian Markov chain Monte Carlo method. In addition, similarity plots were constructed, and pairwise distance (p-distance) and positive selection analyses were performed. A phylogenetic tree based on the VP1 coding region showed that the present strains belong to genotype 4 (G4). In addition, the present strains appear to have diverged around 2010 from the same lineages detected in other countries such as China, India and Australia. The mean rates of molecular evolution of the four coding regions were estimated at about 6.15 to 7.86 × 10^-3 substitutions/site/year. Similarity plot analyses suggested that nucleotide similarities between the present strains and a prototype strain (EH24/70) were 0.77-0.94. The p-distance among the present strains was relatively short (<0.01). Only one positively selected site (L25H) was identified in the VP1 protein. These findings suggest that the present CA24v strains causing AHC are genetically related to other AHC strains, evolved rapidly, and emerged around 2010.
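    For reference, the p-distance quoted above is simply the proportion of nucleotide sites that differ between two aligned sequences of equal length; the tiny helper below illustrates the computation (gaps and ambiguous bases would need extra handling).

      def p_distance(seq_a, seq_b):
          """Proportion of differing sites between two aligned, equal-length sequences."""
          assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
          diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
          return diffs / len(seq_a)

      print(p_distance("ATGGCACGT", "ATGACACGA"))   # 2 of 9 sites differ -> ~0.222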

  16. Evaluation of vision training using 3D play game

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the vision-training effect of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are measures of human visual performance, are very important factors for leading a comfortable and easy life. This study was conducted on 30 participants in their 20s and 30s (19 males and 11 females, aged 24.53 ± 2.94 years) who were able to watch 3D video images and play the 3D game. Their accommodative and vergence facilities were measured before and after they played the 2D and 3D games. Accommodative facility improved after both the 2D and the 3D game, and improved more immediately after the 3D game than after the 2D game. Likewise, vergence facility improved after both the 2D and the 3D game, and improved more soon after the 3D game than after the 2D game. In addition, accommodative facility improved to a greater extent than vergence facility. While studies so far have focused, from a human-factors perspective, on the adverse effects of 3D content on the imbalance between visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D content by utilizing its visual benefit for vision training.

  17. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding feature extraction and the integration of RGB and depth video information. The paper mainly focuses on background subtraction on RGB and depth video sequences of behaviors, extracting and integrating historical images of the behavior outlines, feature extraction, and classification. The new method of 3D human behavior recognition achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method has good robustness to different environmental colors, lightings and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition.
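    The texture-edge descriptor named above builds on the uniform local binary pattern: each pixel's 8 neighbours are thresholded against the centre, and the resulting 8-bit code is kept as a distinct bin only if it contains at most two 0/1 transitions (all other codes share one bin). The sketch below shows that per-pixel coding step only; the histogram building and RGB/depth fusion of the full method are omitted, and the sample image is arbitrary.

      import numpy as np

      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

      def ulbp_code(img, i, j):
          """Uniform LBP code of pixel (i, j); non-uniform patterns map to a shared bin (256)."""
          bits = [int(img[i + di, j + dj] >= img[i, j]) for di, dj in OFFSETS]
          transitions = sum(bits[k] != bits[(k + 1) % 8] for k in range(8))
          code = sum(b << k for k, b in enumerate(bits))
          return code if transitions <= 2 else 256

      img = np.array([[52, 60, 61],
                      [55, 58, 70],
                      [57, 54, 90]])
      print(ulbp_code(img, 1, 1))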

  18. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition

    PubMed Central

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of information from RGB and depth videos. The paper mainly focuses on background subtraction on RGB and depth video sequences of behaviors, extraction and integration of historical images of the behavior outlines, feature extraction, and classification. The new 3D human behavior recognition method achieves rapid and efficient recognition of behavior videos. Extensive experiments show that the proposed method is faster and achieves a higher recognition rate, and that it is robust to different environmental colors, lighting conditions and other factors. Moreover, the hybrid texture-edge uniform local binary pattern feature can be applied to most 3D behavior recognition tasks. PMID:25942404
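
    For orientation, the texture descriptor named in this record, the uniform local binary pattern (LBP), can be sketched in a few lines. The code below implements only the classic 8-neighbour uniform LBP on a grayscale image; it is not the authors' hybrid texture-edge descriptor or their RGB-D fusion, and the input image is hypothetical.

```python
import numpy as np

def uniform_lbp(img: np.ndarray) -> np.ndarray:
    """Classic 8-neighbour uniform LBP map (illustrative, single scale)."""
    # Neighbours enumerated circularly, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    bits = np.stack(
        [(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center) for dy, dx in offsets],
        axis=-1).astype(np.uint8)                          # (h-2, w-2, 8) binary pattern
    transitions = np.sum(bits != np.roll(bits, 1, axis=-1), axis=-1)
    codes = np.sum(bits << np.arange(8), axis=-1)          # plain 8-bit LBP code
    # "Uniform" patterns have at most two 0/1 transitions; the rest share label 255.
    return np.where(transitions <= 2, codes, 255).astype(np.uint8)

lbp_map = uniform_lbp(np.random.randint(0, 256, (120, 160), dtype=np.uint8))
```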

  19. T-HEMP3D user manual

    SciTech Connect

    Turner, D.

    1983-08-01

    The T-HEMP3D (Transportable HEMP3D) computer program is a derivative of the STEALTH three-dimensional thermodynamics code developed by Science Applications, Inc., under the direction of Ron Hofmann. STEALTH, in turn, is based entirely on the original HEMP3D code written at Lawrence Livermore National Laboratory. The primary advantage STEALTH has over its predecessors is that it was designed using modern structured design techniques, with rigorous programming standards enforced. This yields two benefits. First, the code is easily changeable; this is a necessity for a physics code used for research. The second benefit is that the code is easily transportable between different types of computers. The STEALTH program was transferred to LLNL under a cooperative development agreement. Changes were made primarily in three areas: material specification, coordinate generation, and the addition of sliding surface boundary conditions. The code was renamed T-HEMP3D to avoid confusion with other versions of STEALTH. This document summarizes the input to T-HEMP3D, as used at LLNL. It does not describe the physics simulated by the program, nor the numerical techniques employed. Furthermore, it does not describe the separate job steps of coordinate generation and post-processing, including graphical display of results. (WHK)

  20. NUBEAM developments and 3d halo modeling

    NASA Astrophysics Data System (ADS)

    Gorelenkova, M. V.; Medley, S. S.; Kaye, S. M.

    2012-10-01

    Recent developments related to the 3D halo model in the NUBEAM code are described. To provide a reliable halo neutral source for diagnostic simulation, the TRANSP/NUBEAM code has been enhanced with a full implementation of ADAS atomic physics ground-state and excited-state data for hydrogenic beams and mixed-species plasma targets. The ADAS codes and database provide the density and temperature dependence of the atomic data and capture the collective nature of the state excitation process. To populate the 3D halo output with sufficient statistical resolution, the capability to control the statistics of fast-ion CX modeling and of the thermal halo launch has been added to NUBEAM. The 3D halo neutral model is based on a modification and extension of the "beam in box" aligned 3D Cartesian grid that includes the neutral beam itself, 3D fast neutral densities due to CX of partially slowed-down fast ions in the beam halo region, 3D thermal neutral densities due to CX deposition, and a fast neutral recapture source. More details of the 3D halo simulation design will be presented.

  1. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential to provide a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  2. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics are presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  3. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  4. 3-D Finite Element Code Postprocessor

    1996-07-15

    TAURUS is an interactive post-processing application supporting visualization of finite element analysis results on unstructured grids. TAURUS provides the ability to display deformed geometries and contours or fringes of a large number of derived results on meshes consisting of beam, plate, shell, and solid type finite elements. Time history plotting is also available.

  5. JAR3D Webserver: Scoring and aligning RNA loop sequences to known 3D motifs

    PubMed Central

    Roll, James; Zirbel, Craig L.; Sweeney, Blake; Petrov, Anton I.; Leontis, Neocles

    2016-01-01

    Many non-coding RNAs have been identified and may function by forming 2D and 3D structures. RNA hairpin and internal loops are often represented as unstructured on secondary structure diagrams, but RNA 3D structures show that most such loops are structured by non-Watson–Crick basepairs and base stacking. Moreover, different RNA sequences can form the same RNA 3D motif. JAR3D finds possible 3D geometries for hairpin and internal loops by matching loop sequences to motif groups from the RNA 3D Motif Atlas, by exact sequence match when possible, and by probabilistic scoring and edit distance for novel sequences. The scoring gauges the ability of the sequences to form the same pattern of interactions observed in 3D structures of the motif. The JAR3D webserver at http://rna.bgsu.edu/jar3d/ takes one or many sequences of a single loop as input, or else one or many sequences of longer RNAs with multiple loops. Each sequence is scored against all current motif groups. The output shows the ten best-matching motif groups. Users can align input sequences to each of the motif groups found by JAR3D. JAR3D will be updated with every release of the RNA 3D Motif Atlas, and so its performance is expected to improve over time. PMID:27235417
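
    One ingredient mentioned above, the edit distance used to relate a novel loop sequence to sequences already observed in a motif group, is the standard Levenshtein distance. The sketch below is a generic implementation for illustration; the example sequences are hypothetical and the webserver's actual scoring also includes a probabilistic model.

```python
# Generic Levenshtein edit distance (illustrative; not the JAR3D source code).
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution or match
        prev = curr
    return prev[-1]

print(edit_distance("CUAAG", "CUGAG"))  # 1: a single substitution
```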

  6. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  7. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  8. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
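
    The rate-distortion optimization referred to at the end of this abstract is, in its generic form, a Lagrangian cost comparison across candidate prediction modes. The sketch below shows only that generic criterion with made-up candidate modes and costs; it is not the M3N model or the HEVC reference encoder.

```python
# Generic Lagrangian mode decision J = D + lambda * R (illustrative only).
def select_mode(candidates, lam):
    """candidates: iterable of (mode_name, distortion, rate_in_bits)."""
    best_mode, best_cost = None, float("inf")
    for mode, distortion, rate in candidates:
        cost = distortion + lam * rate
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Hypothetical numbers: a mode with slightly higher distortion can still win
# if it spends enough fewer bits at the chosen lambda.
modes = [("planar", 1200.0, 96), ("dc", 1350.0, 80), ("angular_10", 1100.0, 140)]
print(select_mode(modes, lam=5.0))  # -> ('planar', 1680.0)
```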

  9. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  10. Perception of detail in 3D images

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; Kaptein, Ronald

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads to blurring or ghosting, and therefore to a decrease in perceived sharpness. However, people watching stereoscopic videos have reported that the 3D scene contained more details, compared to the 2D scene with identical spatial resolution. This is an interesting notion, that has never been tested in a systematic and quantitative way. To investigate this effect, we had people compare the amount of detail ("detailedness") in pairs of 2D and 3D images. A blur filter was applied to one of the two images, and the blur level was varied using an adaptive staircase procedure. In this way, the blur threshold for which the 2D and 3D image contained perceptually the same amount of detail could be found. Our results show that the 3D image needed to be blurred more than the 2D image. This confirms the earlier qualitative findings that 3D images contain perceptually more details than 2D images with the same spatial resolution.
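
    The adaptive staircase procedure mentioned here is a standard psychophysical method: the blur level is raised while the 3D image still looks more detailed and lowered once the 2D image takes over, and the threshold is estimated from the reversal points. The sketch below is a minimal 1-up/1-down variant for illustration, not the authors' exact protocol; the observer response function is a hypothetical stand-in.

```python
# Minimal 1-up/1-down staircase (illustrative). `more_detailed_3d(blur)` is a
# hypothetical observer response: True if the blurred 3D image still appears
# more detailed than the unblurred 2D reference.
def staircase(more_detailed_3d, start_blur=0.0, step=0.5, n_reversals=8):
    blur, direction, reversal_levels = start_blur, +1, []
    while len(reversal_levels) < n_reversals:
        new_direction = +1 if more_detailed_3d(blur) else -1   # raise or lower the blur
        if new_direction != direction:
            reversal_levels.append(blur)                       # record each reversal point
        direction = new_direction
        blur = max(0.0, blur + direction * step)
    return sum(reversal_levels) / len(reversal_levels)         # threshold estimate

# Hypothetical observer whose true threshold is a blur level of 2.0.
print(staircase(lambda blur: blur < 2.0))   # converges close to 2.0
```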

  11. Spatial watermarking of 3D triangle meshes

    NASA Astrophysics Data System (ADS)

    Cayre, Francois; Macq, Benoit M. M.

    2001-12-01

    Although it is obvious that watermarking has become of great interest for protecting audio, video, and still pictures, little work has been done on 3D meshes. We propose a new method for watermarking 3D triangle meshes. This method embeds the watermark as triangle deformations. The list of watermarked triangles is obtained in a way similar to that used in the TSPS (Triangle Strip Peeling Sequence) method. Unlike TSPS, our method is automatic and more secure. We also show that it is reversible.

  12. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution, in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has long been used in photogrammetry, but it will be shown that, with carefully designed image processing algorithms, the calculations can now be done in real time on, e.g., a PC. In order to obtain high resolution and quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig or, in the case of a moving camera, the scene itself can be used for calibration of most of the parameters. After calibration, an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper presents results from high resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
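
    For rectified stereo, the passive ranging described here reduces, per pixel, to the standard relation depth = focal_length × baseline / disparity. The sketch below shows only that relation with hypothetical calibration numbers; it is not the authors' calibrated camera model or their stitching pipeline.

```python
import numpy as np

# Rectified-stereo depth from disparity (illustrative; hypothetical calibration values).
def disparity_to_depth(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    depth = np.full(disparity_px.shape, np.inf)   # inf where no match was found
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

disparity = np.array([[32.0, 16.0], [8.0, 0.0]])  # pixels; 0 marks an unmatched pixel
print(disparity_to_depth(disparity, focal_px=1000.0, baseline_m=0.5))
# 1000 * 0.5 / 32 ≈ 15.6 m, / 16 ≈ 31.3 m, / 8 = 62.5 m, inf where unmatched
```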

  13. Shim3d Helmholtz Solution Package

    2009-01-29

    This suite of codes solves the Helmholtz Equation for the steady-state propagation of single-frequency electromagnetic radiation in an arbitrary 2D or 3D dielectric medium. Materials can be either transparent or absorptive (including metals) and are described entirely by their shape and complex dielectric constant. Dielectric boundaries are assumed to always fall on grid boundaries and the material within a single grid cell is considered to be uniform. Input to the problem is in the form of a Dirichlet boundary condition on a single boundary, and may be either analytic (Gaussian) in shape, or a mode shape computed using a separate code (such as the included eigenmode solver vwave20), and written to a file. Solution is via the finite difference method using Jacobi iteration for 3D problems or direct matrix inversion for 2D problems. Note that 3D problems that include metals will require different iteration parameters than described in the above reference. For structures with curved boundaries not easily modeled on a rectangular grid, the auxiliary codes helmholtz11 (2D), helm3d (semivectoral), and helmv3d (full vectoral) are provided. For these codes the finite difference equations are specified on a topologically regular triangular grid and solved using Jacobi iteration or direct matrix inversion as before. An automatic grid generator is supplied.
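
    For readers unfamiliar with the solution method named here, a Jacobi sweep for the scalar Helmholtz equation on a uniform grid takes only a few lines. The sketch below is a 2D toy version with Dirichlet boundaries, assuming k·h well below 2 so the stencil denominator stays positive; it is not the Shim3d package itself.

```python
import numpy as np

# Toy Jacobi iteration for the 2D scalar Helmholtz equation
# d2u/dx2 + d2u/dy2 + k^2 u = 0 with fixed (Dirichlet) boundary values.
def jacobi_helmholtz_2d(u0: np.ndarray, k: float, h: float, n_iter: int = 2000) -> np.ndarray:
    u = u0.copy()
    denom = 4.0 - (k * h) ** 2                       # from the 5-point Laplacian stencil
    for _ in range(n_iter):
        u_new = u.copy()                             # keeps the boundary values fixed
        u_new[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                             u[1:-1, :-2] + u[1:-1, 2:]) / denom
        u = u_new
    return u

# Hypothetical input: a Gaussian profile imposed on the left boundary.
n, h, k = 64, 0.1, 1.0
u0 = np.zeros((n, n))
u0[:, 0] = np.exp(-np.linspace(-3.0, 3.0, n) ** 2)
field = jacobi_helmholtz_2d(u0, k, h)
```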

  14. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. A single high-resolution projection panel and a single projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  15. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one of the advanced display types, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED BLU (backlight unit) since Samsung launched the world's first 3D TV based on PDP. However, the panel's data scanning and the liquid crystal response characteristics of LCD TVs cause interference among frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk by means of LCD driving and backlight control in FHD 3D LCD TVs.

  16. Real-time depth map manipulation for 3D visualization

    NASA Astrophysics Data System (ADS)

    Ideses, Ianir; Fishbain, Barak; Yaroslavsky, Leonid

    2009-02-01

    One of the key aspects of 3D visualization is the computation of depth maps. Depth maps enable the synthesis of 3D video from 2D video and the use of multi-view displays. Depth maps can be acquired in several ways. One method is to measure the real 3D properties of the scene objects. Other methods rely on using two cameras and computing the correspondence for each pixel. Once a depth map is acquired for every frame, it can be used to construct its artificial stereo pair. There are many known methods for computing the optical flow between adjacent video frames. The drawback of these methods is that they require extensive computational power and are not well suited to high-quality real-time 3D rendering. One efficient method for computing depth maps is the extraction of motion vector information from standard video encoders. In this paper we present methods to improve the quality of 3D visualization acquired from compression codecs by spatial/temporal and logical operations and manipulations. We show how an efficient real-time implementation of spatial-temporal local order statistics, such as the median, and local adaptive filtering in the 3D-DCT domain can substantially improve the quality of depth maps, and consequently of the 3D video, while retaining real-time rendering. Real-time performance is achieved by utilizing multi-core technology using standard parallelization algorithms and libraries (OpenMP, IPP).
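
    As an illustration of the local order-statistic filtering mentioned above, a spatio-temporal median over a small neighbourhood of frames is enough to suppress speckle and frame-to-frame flicker in per-frame depth maps while roughly preserving depth edges. The sketch below is generic and assumes SciPy is available; the depth stack is hypothetical, and this is not the authors' full pipeline (which also uses logical operations and 3D-DCT domain filtering).

```python
import numpy as np
from scipy.ndimage import median_filter

# Spatio-temporal median filtering of a (frames, height, width) depth-map stack
# (illustrative sketch; the window sizes are arbitrary choices).
def smooth_depth_maps(depth_stack: np.ndarray, temporal: int = 3, spatial: int = 5) -> np.ndarray:
    return median_filter(depth_stack, size=(temporal, spatial, spatial))

depth_stack = np.random.rand(8, 120, 160).astype(np.float32)   # hypothetical depth maps
smoothed = smooth_depth_maps(depth_stack)
```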

  17. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    PubMed

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  18. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    PubMed

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.

  19. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    PubMed Central

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842
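
    The inter-rater statistic reported in these records, Cohen's kappa, corrects the observed agreement between the two coders for the agreement expected by chance. The sketch below computes it for a single categorical item with hypothetical ratings; it is not the authors' analysis code.

```python
from collections import Counter

# Cohen's kappa for two coders rating the same items on one categorical
# characteristic (illustrative; the ratings below are hypothetical).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: both raters independently pick the same category.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]
print(round(cohens_kappa(a, b), 2))   # ≈ 0.47 for this made-up example
```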

  20. Restructuring of RELAP5-3D

    SciTech Connect

    George Mesina; Joshua Hykes

    2005-09-01

    The RELAP5-3D source code is unstructured with many interwoven logic flow paths. By restructuring the code, it becomes easier to read and understand, which reduces the time and money required for code development, debugging, and maintenance. A structured program is comprised of blocks of code with one entry and exit point and downward logic flow. IF tests and DO loops inherently create structured code, while GOTO statements are the main cause of unstructured code. FOR_STRUCT is a commercial software package that converts unstructured FORTRAN into structured programming; it was used to restructure individual subroutines. Primarily it transforms GOTO statements, ARITHMETIC IF statements, and COMPUTED GOTO statements into IF-ELSEIF-ELSE tests and DO loops. The complexity of RELAP5-3D complicated the task. First, FOR_STRUCT cannot completely restructure all the complex coding contained in RELAP5-3D. An iterative approach of multiple FOR_STRUCT applications gave some additional improvements. Second, FOR_STRUCT cannot restructure FORTRAN 90 coding, and RELAP5-3D is partially written in FORTRAN 90. Unix scripts for pre-processing subroutines into coding that FOR_STRUCT could handle and post-processing it back into FORTRAN 90 were written. Finally, FOR_STRUCT does not have the ability to restructure the RELAP5-3D code which contains pre-compiler directives. Variations of a file were processed with different pre-compiler options switched on or off, ensuring that every block of code was restructured. Then the variations were recombined to create a completely restructured source file. Unix scripts were written to perform these tasks, as well as to make some minor formatting improvements. In total, 447 files comprising some 180,000 lines of FORTRAN code were restructured. These showed significant reduction in the number of logic jumps contained as measured by reduction in the number of GOTO statements and line labels. The average number of GOTO statements per subroutine

  1. Video coding with fixed-length packetization for a tandem channel.

    PubMed

    Shen, Yushi; Cosman, Pamela C; Milstein, Laurence B

    2006-02-01

    A robust scheme is presented for the efficient transmission of packet video over a tandem wireless Internet channel. This channel is assumed to have bit errors (due to noise and fading on the wireless portion of the channel) and packet erasures (due to congestion on the wired portion). First, we propose an algorithm to optimally switch between intracoding and intercoding for a video coder that operates on a packet-switched network with fixed-length packets. Different re-synchronization schemes are considered and compared. This optimal mode selection algorithm is integrated with an efficient channel encoder, a cyclic redundancy check outer coder concatenated with an inner rate-compatible punctured convolutional coder. The system performance is both analyzed and simulated. Last, the framework is extended to operate on a time-varying wireless Internet channel with feedback information from the receiver. Both instantaneous feedback and delayed feedback are evaluated, and an improved method of refined distortion estimation for encoding is presented and simulated for the case of delayed feedback.
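
    The mode-switching idea at the heart of this record can be stated compactly: for each block, compare the expected end-to-end distortion of intra and inter coding given the channel's loss probability, where a lost inter block is penalized more heavily because errors propagate from missing references. The sketch below illustrates only that generic comparison with hypothetical distortion estimates; it is not the paper's optimal algorithm, channel code, or feedback scheme.

```python
# Generic expected-distortion mode switch (illustrative, hypothetical numbers).
def choose_mode(d_intra, d_inter, d_conceal_intra, d_conceal_inter, p_loss):
    # Expected distortion = (1 - p) * decoded distortion + p * concealment distortion.
    e_intra = (1 - p_loss) * d_intra + p_loss * d_conceal_intra
    e_inter = (1 - p_loss) * d_inter + p_loss * d_conceal_inter
    return "intra" if e_intra <= e_inter else "inter"

# Inter coding wins on a clean channel, but intra becomes preferable as the loss
# rate grows, because a lost inter block drags propagated reference errors with it.
for p in (0.0, 0.05, 0.2):
    print(p, choose_mode(300, 150, 900, 2000, p_loss=p))
```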

  2. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on them were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has become widely used. This format has constantly evolved over time. It is now possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (from version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from multiple objects, each of which can be viewed from all directions on its own. In addition, it is possible to create movable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate its use.

  3. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,C), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,C) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  4. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  5. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  6. Distributed video coding for arrays of remote sensing nodes : final report.

    SciTech Connect

    Mecimore, Ivan; Creusere, Chuck D.; Merchant, Bion John

    2010-06-01

    This document is the final report for the Sandia National Laboratory funded Student Fellowship position at New Mexico State University (NMSU) from 2008 to 2010. Ivan Mecimore, the PhD student in Electrical Engineering at NMSU, was conducting research into image and video processing techniques to identify features and correlations within images without requiring the decoding of the data compression. Such an analysis technique would operate on the encoded bit stream, potentially saving considerable processing time when operating on a platform that has limited computational resources. Unfortunately, the student has elected in mid-year not to continue with his research or the fellowship position. The student is unavailable to provide any details of his research for inclusion in this final report. As such, this final report serves solely to document the information provided in the previous end of year summary.

  7. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10^-3 to 10^-6 G), so one must be able to generate simulations that replicate those microgravity effects on simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions that allow microgravity 3D/VR simulations to be completed successfully without program code modifications.

  8. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting small synthetic text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.
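
    For context, the 4:4:4 to 4:2:0 preprocessing criticized above keeps luma at full resolution while low-pass filtering and decimating each chroma plane, which is exactly what blurs small, high-contrast colored text. The sketch below shows a simple 2x2-averaging version of that step on hypothetical planes; it is not the proposed contour phase predictive coder.

```python
import numpy as np

# Simple YCbCr 4:4:4 -> 4:2:0 subsampling: keep Y, average each chroma plane over
# 2x2 blocks and decimate (illustrative; real encoders may use other filters).
def subsample_420(y: np.ndarray, cb: np.ndarray, cr: np.ndarray):
    def down2x2(c: np.ndarray) -> np.ndarray:
        h, w = c.shape
        c = c[:h - h % 2, :w - w % 2]                       # crop to even dimensions
        return c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))
    return y, down2x2(cb), down2x2(cr)

y, cb, cr = (np.random.rand(480, 640) for _ in range(3))    # hypothetical planes
y444, cb420, cr420 = subsample_420(y, cb, cr)               # chroma becomes 240 x 320
```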

  9. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields, respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
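
    The discretization order quoted here (2nd order in time, 4th order in space) is easiest to see in a scalar 1D analogue. The sketch below shows one leapfrog time step of that scheme for illustration only; the actual code is 3D and elastic, and dt is assumed to satisfy the usual CFL stability condition.

```python
import numpy as np

# One time step of a 1D scalar wave equation, 4th-order in space, 2nd-order in time
# (scalar toy analogue of the discretization described above, not the 3D elastic code).
def step_wave_1d(u_prev: np.ndarray, u: np.ndarray, c: float, dt: float, dx: float) -> np.ndarray:
    lap = np.zeros_like(u)
    # 4th-order accurate second spatial derivative (interior points only).
    lap[2:-2] = (-u[4:] + 16 * u[3:-1] - 30 * u[2:-2] + 16 * u[1:-3] - u[:-4]) / (12 * dx * dx)
    # 2nd-order accurate leapfrog update in time.
    return 2 * u - u_prev + (c * dt) ** 2 * lap
```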

  10. Sparse approximation using M-term pursuit and application in image and video coding.

    PubMed

    Rahmoune, Adel; Vandergheynst, Pierre; Frossard, Pascal

    2012-04-01

    This paper introduces a novel algorithm for sparse approximation in redundant dictionaries called the M-term pursuit (MTP). This algorithm decomposes a signal into a linear combination of atoms that are selected in order to represent the main signal components. The MTP algorithm provides an adaptive representation for signals in any complete dictionary. The basic idea behind the MTP is to partition the dictionary into L quasi-disjoint subdictionaries. A k-term signal approximation is then iteratively computed, where each iteration leads to the selection of M ≤ L atoms based on thresholding. The MTP algorithm is shown to achieve competitive performance with the matching pursuit (MP) algorithm that greedily selects atoms one by one. This is due to efficient partitioning of the dictionary. At the same time, the computational complexity is dramatically reduced compared to MP due to the batch selection of atoms. We finally illustrate the performance of MTP in image and video compression applications, where we show that the suboptimal atom selection of MTP is largely compensated by the reduction in complexity compared with MP.
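
    The batch-selection idea behind the MTP can be illustrated with a toy greedy loop: split the dictionary columns into L sub-dictionaries and, in each iteration, accept at most one atom per sub-dictionary whose correlation with the residual passes a threshold relative to the best atom found. The sketch below is such an illustration with hypothetical parameters, assuming unit-norm atoms; it is not the published algorithm or its analysis.

```python
import numpy as np

# Toy MTP-style pursuit over a partitioned dictionary (illustrative only).
def mtp_like(signal, dictionary, partitions, k, thresh=0.5):
    residual = signal.astype(float).copy()
    selected, coeffs = [], []
    while len(selected) < k:
        corr = dictionary.T @ residual                      # correlation with every atom
        best = np.max(np.abs(corr))
        picked = []
        for part in partitions:                             # at most one atom per sub-dictionary
            j = int(part[np.argmax(np.abs(corr[part]))])
            if np.abs(corr[j]) >= thresh * best and j not in selected:
                picked.append(j)
        if not picked:
            break
        for j in picked[: k - len(selected)]:
            c = dictionary[:, j] @ residual
            residual = residual - c * dictionary[:, j]      # atoms assumed unit-norm
            selected.append(j)
            coeffs.append(c)
    return selected, coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                              # unit-norm atoms
parts = np.array_split(np.arange(256), 8)                   # L = 8 sub-dictionaries
x = 2.0 * D[:, 3] + 1.5 * D[:, 100]                         # hypothetical 2-atom signal
atoms, amps, res = mtp_like(x, D, parts, k=4)
```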

  11. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D/3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, is safely protected against counterfeiting, and is designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  12. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  13. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  14. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  15. ON THE RELIABILITY OF ZEUS-3D

    SciTech Connect

    Clarke, David A.

    2010-03-01

    Recent and not-so-recent critiques of the widely used magnetohydrodynamics (MHD) code ZEUS-3D challenge its reliability and efficiency, suggesting that its MHD algorithm is capable of 'significant errors' in some simple one-dimensional shock-tube problems. I show that these concerns are either inapplicable in multi-dimensional astrophysical applications, or result from a misuse of the code rather than from 'flaws' in its design. I also describe a few multi-dimensional test problems, including one for super-Alfvénic turbulence, and highlight some recent innovations and improvements to the code now available online.

  16. BEAMS3D Neutral Beam Injection Model

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Lazerson, Samuel A.

    2014-09-01

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous slowing down, and pitch angle scattering are modeled with the ADAS atomic physics database. Elementary benchmark calculations are presented to verify the collisionless particle orbits, NBI model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields. Notice: this manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  17. Superplastic forming using NIKE3D

    SciTech Connect

    Puso, M.

    1996-12-04

    The superplastic forming process requires careful control of strain rates in order to avoid strain localizations. A load scheduler was developed and implemented into the nonlinear finite element code NIKE3D to provide strain rate control during forming simulation and process schedule output. Often the sheets being formed in SPF are very thin such that less expensive membrane elements can be used as opposed to shell elements. A large strain membrane element was implemented into NIKE3D to assist in SPF process modeling.