Science.gov

Sample records for 3d video content

  1. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and understanding it is therefore important for the creation of 3D stereoscopic content. Most studies of visual attention have focused on still images or 2D video; only a very few have investigated eye-movement patterns in 3D stereoscopic moving sequences and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment in which an eye-tracking apparatus recorded observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task; each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that the extent of areas of interest is not necessarily wider in 3D. We found a very strong content dependency in the difference in density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and fixation durations overall shorter when observers viewed the 3D stereoscopic version.

  2. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach to 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min), and it is tedious to browse the data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. It relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant: it allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge of the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences and can be applied to their content-based description and summarization. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos, and we showcase an application for 3D video progressive summarization using the topology dictionary. PMID:22745004
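The Markov motion graph mentioned above can be sketched as transition statistics over per-frame topology-class labels; the class names and function below are illustrative, not the paper's actual dictionary entries.

```python
from collections import defaultdict

def markov_motion_graph(labels):
    """Estimate a Markov motion graph from a sequence of per-frame topology
    class labels: count label-to-label transitions and normalize each row
    into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    return {s: {t: c / sum(d.values()) for t, c in d.items()}
            for s, d in counts.items()}

# Hypothetical topology classes recognized from Reeb graphs of 3D frames.
graph = markov_motion_graph(["stand", "walk", "walk", "stand"])
```

In the paper's setting the labels would come from Reeb-graph classification of each 3D frame; the resulting transition probabilities are what enables event recognition and summarization.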

  3. Stereoscopic 3D video games and their effects on engagement

    NASA Astrophysics Data System (ADS)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, many questions about its effects on the viewer are yet to be answered. The effects of stereoscopic display on passive film viewers are known; video games, however, are fundamentally different, since the viewer/player is actively (rather than passively) engaged in the content. How stereoscopic viewing affects interaction mechanics has previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects of stereoscopic 3D on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  4. Multirate 3-D subband coding of video.

    PubMed

    Taubman, D; Zakhor, A

    1994-01-01

    We propose a full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates. An experimental implementation of our algorithm produces a single bit stream, from which suitable subsets are extracted to be compatible with many decoder frame sizes and frame rates and to satisfy transmission bandwidth constraints ranging from several tens of kilobits per second to several megabits per second. Reconstructed video quality from any of these bit stream subsets is often found to exceed that obtained from an MPEG-1 implementation, operated with equivalent bit rate constraints, in both perceptual quality and mean squared error. In addition, when restricted to 2-D, the algorithm produces some of the best results available in still image compression. PMID:18291953
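The subband idea behind the embedded bit stream can be illustrated with a one-level 3-D Haar decomposition; the paper's actual filters and camera-pan compensation are more elaborate, so this is only a sketch under simplified assumptions.

```python
import numpy as np

def haar_split(x, axis):
    """One-level Haar analysis along an axis: returns (low, high) subbands."""
    x = np.moveaxis(x, axis, 0)
    lo = (x[0::2] + x[1::2]) / 2.0          # average -> low-pass subband
    hi = (x[0::2] - x[1::2]) / 2.0          # difference -> high-pass subband
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def subband_3d(video):
    """One 3-D analysis stage: temporal split, then a spatial split of the
    temporal low band. Dropping the high bands yields a lower-rate,
    lower-resolution decodable subset -- the idea behind an embedded stream."""
    t_lo, t_hi = haar_split(video, axis=0)  # temporal axis
    v_lo, v_hi = haar_split(t_lo, axis=1)   # vertical
    ll, lh = haar_split(v_lo, axis=2)       # horizontal
    return {"LLL": ll, "LLH": lh, "LH": v_hi, "H": t_hi}

video = np.random.rand(8, 16, 16)           # frames x height x width
bands = subband_3d(video)
```

Keeping only the `LLL` band reconstructs a half-frame-rate, half-resolution preview, which is how a single embedded stream can serve multiple decoder display formats and bit rates.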

  5. View synthesis techniques for 3D video

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Lai, Po-Lin; Lopez, Patrick; Gomila, Cristina

    2009-08-01

    To facilitate new video applications such as three-dimensional video (3DV) and free-viewpoint video (FVV), the multiview plus depth (MVD) format, which consists of video views and the corresponding per-pixel depth images, is being investigated. Virtual views can be generated using depth image based rendering (DIBR), which takes video and the corresponding depth images as input. This paper discusses view synthesis techniques based on DIBR, including forward warping, blending and hole filling. In particular, we emphasize the techniques incorporated into the MPEG view synthesis reference software (VSRS). Unlike in the field of computer graphics, ground-truth depth images for natural content are very difficult to obtain, and the estimated depth images used for view synthesis typically contain various types of noise. Robust synthesis modes that mitigate these depth errors are also presented in this paper. In addition, we briefly discuss how synthesis techniques with minor modifications can generate the occlusion-layer information for layered depth video (LDV) data, another potential format for 3DV applications.
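A minimal sketch of DIBR forward warping with a z-buffer, assuming a purely horizontal camera shift and a grayscale image; `baseline` and `focal` are illustrative parameters, not VSRS defaults, and the blending and hole-filling stages are omitted.

```python
import numpy as np

def forward_warp(image, depth, baseline=0.05, focal=500.0):
    """Forward-warp each pixel horizontally by a disparity derived from its
    depth; when two pixels land on the same target, the nearer one wins
    (z-buffer). Returns the virtual view and a mask of disocclusion holes."""
    h, w = depth.shape
    virt = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            disp = int(round(baseline * focal / depth[y, x]))  # pixels
            xv = x + disp
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]   # keep the closest surface
                virt[y, xv] = image[y, x]
    holes = np.isinf(zbuf)                  # disocclusions, to be filled later
    return virt, holes
```

The `holes` mask is exactly what the hole-filling stage mentioned in the abstract must fill, typically by propagating background colors.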

  6. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still-photography imaging systems, where a three-dimensional scene is irrevocably projected onto two dimensions. The success of 3D stereoscopic movies has generated huge interest in capturing three-dimensional motion-picture scenes. In this paper, we present a test-bench integral imaging camera system that aims to tailor the methods of light-field imaging towards capturing integral 3D motion-picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The steps involved in calibrating the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  7. Stereoscopic contents authoring system for 3D DMB data service

    NASA Astrophysics Data System (ADS)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of the 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system which provides convenient and straightforward content creation and editing functionalities. For the creation of stereoscopic contents, we focus on two methods: CG (Computer Graphics) based creation and real-image based creation. In the CG-based scenario, data generated with a conventional tool such as MAYA or 3DS MAX is rendered into stereoscopic images by applying suitable disparity and camera parameters; we use the X-file format for direct conversion to stereoscopic objects, so-called 3D DMB objects. In the real-image based scenario, the chroma-key method is applied to real video sequences to acquire alpha-mapped images, which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes a timeline editor for both stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented a content verification module that verifies and modifies the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic content creation for mobile 3D data services, especially targeted at T-DMB, with its capabilities of CG and real-image based content creation, timeline editing and content verification.

  8. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly, and stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 A.D., when Euclid first recognized the concept of depth perception in humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effects of stereoscopic vision on the human body are insufficiently understood. Symptoms such as eye fatigue and 3D sickness are concerns when viewing 3D films for a prolonged period of time; it is therefore important to consider the safety of viewing virtual 3D content. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and we used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements, and the time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicate that after 90 s of continuously viewing 3D images, the accommodative power no longer corresponds to the distance of convergence. Remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS; we should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  9. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  10. 3D video coding: an overview of present and upcoming standards

    NASA Astrophysics Data System (ADS)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various 3D video formats are available, each with individual pros and cons. They can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats, the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, the H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics from video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and the decoupling of content creation and display requirements.

  11. A modular cross-platform GPU-based approach for flexible 3D video playback

    NASA Astrophysics Data System (ADS)

    Olsson, Roger; Andersson, Håkan; Sjöström, Mårten

    2011-03-01

    Different compression formats for stereo and multiview based 3D video are being standardized, and software players capable of decoding and presenting these formats on different display types are a vital part of the commercialization and evolution of 3D video. However, the number of publicly available software players capable of decoding and playing multiview 3D video is still quite limited. This paper describes the design and implementation of a GPU-based real-time 3D video playback solution, built on top of cross-platform, open-source libraries for video decoding and hardware-accelerated graphics. A software architecture is presented that efficiently processes and presents high-definition 3D video in real time and flexibly supports both current 3D video formats and emerging standards. Moreover, a set of bottlenecks in the processing of 3D video content in such a playback solution is identified and discussed.

  12. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The applications range from space shuttle docking and planetary entry, descent and landing to surveillance, autonomous and manned ground-vehicle navigation, and 3D imaging through particle obscurants.

  13. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Stereo content can be generated directly using S3D cameras; however, this approach requires an expensive setup, so converting monoscopic content to S3D becomes a viable alternative. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The method can be applied to arbitrary videos without prior database training. It neither suffers the limitations of single monocular depth cues nor needs to combine multiple cues, thus consuming less processing time without affecting the quality of the 3D video output. The algorithm, though not real-time, is faster than the other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, producing the 3D video output directly without human intervention, and with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
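The bottom-most-pixel cue can be sketched as follows; the labeled foreground mask is assumed to come from an earlier segmentation step not shown here, and the linear mapping from base row to depth is an illustrative choice.

```python
import numpy as np

def depth_from_base_row(fg_labels, height):
    """Assign each foreground object a depth based on the y coordinate of its
    bottom-most pixel: an object whose base sits lower in the frame is assumed
    closer to the camera. fg_labels: integer mask, 0 = background.
    Returns a depth map in [0, 1], with 1 = nearest."""
    depth = np.zeros(fg_labels.shape, dtype=float)
    for label in np.unique(fg_labels):
        if label == 0:
            continue
        ys, _ = np.nonzero(fg_labels == label)
        base_y = ys.max()                    # bottom-most pixel row
        depth[fg_labels == label] = base_y / (height - 1)
    return depth
```

A horizontal disparity proportional to this depth would then shift each object to synthesize the second view.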

  14. 3D video sequence reconstruction algorithms implemented on a DSP

    NASA Astrophysics Data System (ADS)

    Ponomaryov, V. I.; Ramos-Diaz, E.

    2011-03-01

    A novel approach for 3D image and video reconstruction is proposed and implemented. It is based on wavelet atomic functions (WAF), which have demonstrated better approximation properties than classical wavelets in various processing problems. Disparity maps are formed using WAF and then employed to present a 3D visualization using color anaglyphs. Additionally, compression via the Pth law is performed to improve the disparity-map quality. Other approaches, such as optical flow and a stereo matching algorithm, are implemented for comparison. Numerous simulation results justify the efficiency of the novel framework. The implementation of the proposed algorithm on the Texas Instruments TMS320DM642 DSP demonstrates that real-time processing is possible during 3D reconstruction of images and video sequences.
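The anaglyph step of such a pipeline is simple to illustrate: a red-cyan anaglyph takes the red channel from the left view and the green/blue channels from the right view. The constant-disparity `shift_view` helper below is only a crude stand-in for the WAF disparity map.

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan color anaglyph: red channel from the left view, green and
    blue channels from the right view. left/right: HxWx3 RGB arrays."""
    out = right.copy()
    out[..., 0] = left[..., 0]              # left eye -> red channel
    return out

def shift_view(image, disparity):
    """Synthesize a crude second view by shifting every row horizontally by
    a constant disparity (illustrative only; a real disparity map varies
    per pixel)."""
    return np.roll(image, -disparity, axis=1)

left = np.random.randint(0, 256, (4, 8, 3), dtype=np.uint8)
right = shift_view(left, 2)
ana = make_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then receives (approximately) its own view.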

  15. GPU-based 3D lower tree wavelet video encoder

    NASA Astrophysics Data System (ADS)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Drummond, Leroy Anthony; Migallón, Hector

    2013-12-01

    The 3D-DWT is a mathematical tool of increasing importance in applications that require efficient processing of huge amounts of volumetric information. Applications like professional video editing, video surveillance, multi-spectral satellite imaging and HQ video delivery would rather use 3D-DWT encoders to reconstruct a frame as fast as possible. In this article, we introduce a fast GPU-based encoder which uses the 3D-DWT transform and lower trees, and we present an exhaustive analysis of the use of GPU memory. Our proposal shows a good trade-off between R/D performance, coding delay (as fast as MPEG-2 for high definition) and memory requirements (up to 6 times less memory than x264).

  16. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    PubMed Central

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is becoming increasingly popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and a local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674
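A hedged sketch of the depth-map construction: a global bottom-near ramp (one of several templates the paper selects among by image type) blended with an intensity-based local refinement; the `weight` parameter is illustrative. For time coherence, the same global gradient would be reused for all frames within a shot.

```python
import numpy as np

def global_depth_gradient(h, w):
    """A simple bottom-to-top global depth gradient: the bottom row is
    assumed nearest (depth 1), the top row farthest (depth 0). The paper
    selects among several such templates per image type."""
    ramp = np.linspace(0.0, 1.0, h)
    return np.tile(ramp[:, None], (1, w))

def refine_with_color(gradient, luma, weight=0.2):
    """Local refinement: perturb the global gradient with normalized image
    intensity, a stand-in for the paper's color-based refinement."""
    local = (luma - luma.min()) / (np.ptp(luma) + 1e-9)
    return np.clip((1 - weight) * gradient + weight * local, 0.0, 1.0)
```

Reusing `global_depth_gradient` across a shot while recomputing only the refinement is what keeps the resulting depth maps time-coherent.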

  17. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps.

    PubMed

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is becoming increasingly popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and a local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674

  18. Virtual view adaptation for 3D multiview video streaming

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2010-02-01

    Virtual views in 3D-TV and multiview video systems are reconstructed images of the scene generated synthetically from the original views. In this paper, we analyze the performance of streaming virtual views over IP networks with a limited and time-varying available bandwidth. We show that the average video quality perceived by the user can be improved with an adaptive streaming strategy aimed at maximizing the average video quality: our adaptive 3D multiview streaming provides a quality improvement of 2 dB on average over non-adaptive streaming. We also demonstrate that an optimized virtual-view adaptation algorithm needs to be view-dependent, which achieves a further improvement of up to 0.7 dB. We analyze our adaptation strategies under dynamic available bandwidth in the network.
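The view-dependent adaptation idea can be sketched as a greedy rate allocation: repeatedly spend bandwidth on the per-view upgrade with the best quality gain per extra kilobit. The rate-distortion tables and the greedy rule below are illustrative, not the paper's actual optimization.

```python
def adapt_views(rd_tables, budget_kbps):
    """Greedy view-dependent rate adaptation sketch. rd_tables: per view, a
    list of (rate_kbps, quality_db) operating points sorted by rate. Returns
    the chosen operating-point index per view under the bandwidth budget."""
    choice = [0] * len(rd_tables)           # start at the lowest rate per view
    spent = sum(t[0][0] for t in rd_tables)
    while True:
        best, best_gain = None, 0.0
        for v, t in enumerate(rd_tables):
            if choice[v] + 1 < len(t):
                dq = t[choice[v] + 1][1] - t[choice[v]][1]
                dr = t[choice[v] + 1][0] - t[choice[v]][0]
                if spent + dr <= budget_kbps and dq / dr > best_gain:
                    best, best_gain = v, dq / dr
        if best is None:                    # no affordable upgrade remains
            return choice
        spent += rd_tables[best][choice[best] + 1][0] \
               - rd_tables[best][choice[best]][0]
        choice[best] += 1
```

Views whose quality responds more strongly to extra rate (e.g. those used to synthesize the current virtual view) naturally receive more bits, which is the view-dependent behavior the abstract describes.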

  19. Development of 3D mobile receiver for stereoscopic video and data service in T-DMB

    NASA Astrophysics Data System (ADS)

    Lee, Gwangsoon; Lee, Hyun; Yun, Kugjin; Hur, Namho; Lee, Soo In

    2011-02-01

    In this paper, we present the development of a 3D T-DMB (three-dimensional digital multimedia broadcasting) receiver providing 3D video and data services. First, for the 3D video service, the developed receiver is capable of decoding and playing 3D AV content that is encoded by the simulcast method and transmitted via the T-DMB network. Second, the receiver can render stereoscopic multimedia objects delivered using the MPEG-4 BIFS technology that is also employed in T-DMB. Specifically, this paper introduces the hardware and software architecture of the 3D T-DMB receiver and its implementation. Since the receiver generates stereoscopic views on a glasses-free 3D mobile display, we propose parameters for designing the 3D display and evaluate the viewing angle and distance through both computer simulation and actual measurement. Finally, the availability of the 3D video and data services is verified using an experimental system comprising the implemented receiver and a variety of service examples.

  20. The Emerging MVC Standard for 3D Video Services

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Wang, Ye-Kui; Ugur, Kemal; Hannuksela, Miska M.; Lainema, Jani; Gabbouj, Moncef

    2008-12-01

    Multiview video has gained wide interest recently. The huge amount of data that multiview applications must process is a heavy burden for both transmission and decoding. The Joint Video Team has recently devoted part of its effort to extending the widely deployed H.264/AVC standard to handle multiview video coding (MVC). The MVC extension of H.264/AVC includes a number of new techniques for improved coding efficiency, reduced decoding complexity, and new functionalities for multiview operations. MVC takes advantage of some of the interfaces and transport mechanisms introduced for the scalable video coding (SVC) extension of H.264/AVC, but the system-level integration of MVC is conceptually more challenging, as the decoder output may contain more than one view and can consist of any combination of the views at any temporal level. The generation of all the output views also requires careful consideration and control of the available decoder resources. In this paper, multiview applications and solutions to support generic multiview as well as 3D services are introduced. The proposed solutions, which have been adopted into the draft MVC specification, cover a wide range of requirements for 3D video related to the interface, transport of MVC bitstreams, and MVC decoder resource management. The features introduced in MVC to support these solutions include the marking of reference pictures, support for efficient view switching, structuring of the bitstream, and signalling of the view scalability supplemental enhancement information (SEI) and parallel decoding SEI.

  1. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry, but visual fatigue still impedes its development. In this paper we propose several factors affecting human depth perception as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene-movement characteristics, all of which play important roles in the viewer's visual perception: if many objects move at a certain velocity and the scene changes quickly, viewers feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (mean square error) of different blocks, both within a frame and between frames, is taken into consideration for 3D stereoscopic videos. Each depth frame is divided into a number of blocks that overlap and share pixels (by half a block) in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of these block values is then characterized by kurtosis, with regard to the regions the human eye mainly gazes at, and weight values are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when applied between the current and previous frames, the temporal variation and scene-movement variation are obtained. The three factors are combined linearly to yield an objective assessment value for 3D videos, with the coefficients of the three factors estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
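The overlapped-block MSE and kurtosis-based weighting can be sketched as below; the block size, the half-block step and the kurtosis normalization are a hedged reading of the description, not the paper's exact formulas.

```python
import numpy as np

def block_mse(a, b, block=8):
    """MSE of half-overlapping blocks between two depth frames
    (step = block // 2, so adjacent blocks share half their pixels)."""
    step = block // 2
    h, w = a.shape
    vals = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            d = a[y:y+block, x:x+block] - b[y:y+block, x:x+block]
            vals.append(np.mean(d * d))
    return np.array(vals)

def kurtosis_weight(vals):
    """Weight value from the normalized kurtosis of the block-MSE
    distribution; squashed to [0, 1) for use as a linear-combination
    coefficient."""
    m, s = vals.mean(), vals.std()
    if s == 0:
        return 0.0                           # degenerate (uniform) distribution
    k = np.mean(((vals - m) / s) ** 4)
    return k / (k + 1.0)
```

Applying `block_mse` within one depth frame (against a shifted copy) gives the spatial factor; applying it between consecutive frames gives the temporal and scene-movement factors, and the three weighted factors are then combined linearly.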

  2. Saliency detection for videos using 3D FFT local spectra

    NASA Astrophysics Data System (ADS)

    Long, Zhiling; AlRegib, Ghassan

    2015-03-01

    Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.
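The spectral-location decomposition can be sketched as follows: a 3D FFT of a video cube is split into a spatial component (the zero-temporal-frequency plane) and a temporal component (everything else). The energy-difference comparison is a simplified stand-in for the paper's center-surround method.

```python
import numpy as np

def split_spectrum(cube):
    """Split the 3D FFT of a (frames x height x width) cube into a spatial
    component (zero temporal frequency) and a temporal component (all
    remaining frequencies). A static cube has an all-zero temporal part."""
    F = np.fft.fftn(cube)
    spatial = np.zeros_like(F)
    spatial[0] = F[0]                        # temporal-DC plane
    temporal = F - spatial
    return spatial, temporal

def center_surround(center_cube, surround_cube):
    """Crude center-surround comparison: difference of total spectral
    energy between a center region and its surround."""
    return abs(np.abs(np.fft.fftn(center_cube)).sum()
               - np.abs(np.fft.fftn(surround_cube)).sum())
```

Detecting temporal and spatial saliency separately on the two components, then combining the maps (and repeating per YIQ channel), mirrors the pipeline the abstract describes.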

  3. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a promising glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. It is therefore essential to understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (L0: 2D; L1: stereo or multiview; L2: full 3D holoscopic). For this, a simple error concealment algorithm is used, which exploits the inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D-view generation parameters used in lower layers on the performance of this error concealment algorithm is also presented.

  4. Video coding and transmission standards for 3D television — a survey

    NASA Astrophysics Data System (ADS)

    Buchowicz, A.

    2013-03-01

    Emerging 3D television systems require effective techniques for the transmission and storage of data representing a 3D scene. Scene representations based on multiple video sequences, or on multiple views plus depth maps, are especially important since they can be processed with existing video technologies. A review of the relevant video coding and transmission techniques is presented in this paper.

  5. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smart phones, there has been significant growth in mobile TV markets. This rapid growth in technical, economic and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to encounter 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, one important consideration remains for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology further. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines, help protect viewers against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  6. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets can show video content in two modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player or satellite receiver. The stereo pair is split into left and right images that are shown one after another; the viewer sees a different image with each eye through shutter glasses properly synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by imposing an overlay picture on the video content: an on-screen-display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize whether the OSD is 3D compatible and visualize it correctly, either by switching off stereo mode or by continuing to show the stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is to distinguish whether a color difference is due to the OSD's presence or to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has very distinctive geometrical features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD of different colors and transparency levels overlaid on the video content. Detection quality exceeded 99% true answers.
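
    The zero-disparity cue at the heart of the method can be illustrated with a toy sketch: an OSD burned identically into both views is (near-)identical pixel for pixel, while true scene content is shifted by parallax. The threshold and the rectangle extraction below are illustrative choices of ours, not the paper's algorithm:

```python
def osd_candidate_region(left, right, thresh=2):
    """Flag pixels where the two stereo views are (near-)identical.

    A 3D-incompatible OSD burned identically into both views shows up
    as a zero-disparity region, while true scene content differs due
    to horizontal parallax. Returns a binary mask (1 = candidate OSD).
    """
    h, w = len(left), len(left[0])
    return [[1 if abs(left[y][x] - right[y][x]) <= thresh else 0
             for x in range(w)] for y in range(h)]

def bounding_rect(mask):
    """Bounding rectangle (x0, y0, x1, y1) of the mask pixels, or None."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)
```

    A real detector would additionally verify that the rectangle's edges are straight parallel lines, the geometric cue the paper mentions.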

  7. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    The 3D video system using the MVD (multi-view video plus depth) data format is now being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important, and the problems of low temporal consistency and low viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit savings, and it increased further when evaluated on synthesized views at virtual viewpoints.
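
    The core idea of mean-depth compensation can be sketched at toy scale. The offset derivation below is a simplified stand-in for the paper's object-adaptive scheme (which derives the offset so that little extra side information needs to be transmitted):

```python
def block_mean(block):
    """Mean value of a 2D block of depth samples."""
    vals = [v for row in block for v in row]
    return sum(vals) / len(vals)

def depth_compensated_prediction(ref_block, cur_mean):
    """Predict the current depth block from a reference block by
    compensating the mean-depth difference between them: a toy analogue
    of the paper's object-adaptive offset."""
    offset = cur_mean - block_mean(ref_block)
    return [[v + offset for v in row] for row in ref_block]
```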

  8. Holovideo: Real-time 3D range video encoding and decoding on GPU

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime run-length encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1 (Holovideo to OBJ file format), the 3D geometry quality drops only negligibly. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer with an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with the same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
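
    Holovideo's central trick, packing depth into the channels of an ordinary 2D image, can be loosely illustrated as follows. The sinusoidal-plus-stair channel layout here is a simplified analogue of the published codec, not its exact encoding:

```python
import math

def encode_depth(z, z_range=1.0, periods=4):
    """Encode a normalised depth z in [0, 1) as two sinusoidal channels
    plus a coarse 'stair' channel used for phase unwrapping: a toy
    analogue of Holovideo-style depth-to-2D packing (the channel layout
    is ours, not the paper's exact codec)."""
    phase = 2 * math.pi * periods * z / z_range
    stair = int(periods * z / z_range)        # which fringe period z falls in
    return math.sin(phase), math.cos(phase), stair

def decode_depth(s, c, stair, z_range=1.0, periods=4):
    """Invert encode_depth: recover the wrapped phase, then unwrap it
    with the stair channel."""
    phase = math.atan2(s, c) % (2 * math.pi)  # wrapped phase in [0, 2*pi)
    return (stair + phase / (2 * math.pi)) * z_range / periods
```

    The sinusoidal channels survive lossy 2D compression far better than raw depth bits would, which is the property the codec exploits.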

  9. A new video codec based on 3D-DTCWT and vector SPIHT

    NASA Astrophysics Data System (ADS)

    Xu, Ruiping; Li, Huifang; Xie, Sunyun

    2011-10-01

    In this paper, a new video coding system combining the 3-D complex dual-tree discrete wavelet transform (DTCWT) with vector SPIHT and arithmetic coding is proposed and tested on standard video sequences. First, the 3-D DTCWT of each color component is computed for the video sequences. The wavelet coefficients are then grouped to form vectors, and a successive-refinement vector quantization technique is used to quantize the groups. Finally, experimental results are given. They show that the proposed video codec outperforms the 3D-DTCWT and 3D-SPIHT codecs, and that the superior performance of the proposed scheme comes from not having to perform motion compensation.

  10. Coarse integral holography approach for real 3D color video displays.

    PubMed

    Chen, J S; Smithwick, Q Y J; Chu, D P

    2016-03-21

    A colour holographic display is considered the ultimate apparatus to provide the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that then are used to reproduce the optical wavefront. The main challenge at present is for the existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product high-bandwidth spatial light modulator to display dynamic holograms with a large space-bandwidth-product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, and an adequate image size and viewing angle, as well as all relevant 3D cues. Our approach is scalable and the prototype can achieve even better performance with continuing advances in hardware components. PMID:27136858

  11. The future of 3D and video coding in mobile and the internet

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2013-09-01

    The Internet's success has already changed our social and economic world and continues to revolutionize information exchange. The exponential increase in the amount and types of data exchanged on the Internet poses a significant challenge for the design of future architectures and solutions. This paper reviews the current status and trends in the design of solutions and research activities for the future Internet, from the point of view of managing the growth in bandwidth requirements and the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to designing standardized media formats and protocols, while considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the expected near-future arrival of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system, and enabling real-time adaptation to context and content throughout the media processing path, will be paramount in enabling the new media architectures as well as the new applications and services. The common video coding formats will need to be conceptually redesigned to allow implementation of the necessary 3D Media Internet features.

  12. 3-D model-based frame interpolation for distributed video coding of static scenes.

    PubMed

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content. PMID:17491456
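
    The epipolar constraint used by the block-matching interpolation can be sketched directly: given the fundamental matrix F, a point in one key frame maps to a line in the other, and candidate matches are searched only near that line. A minimal sketch (F and the points below are illustrative):

```python
import math

def epipolar_line(F, x):
    """Epipolar line l = F @ x (homogeneous 3-vector) in the second
    image for a homogeneous point x = (u, v, 1) in the first image."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def point_line_distance(l, p):
    """Geometric distance of a homogeneous point p = (u, v, 1) to the
    line l = (a, b, c): |a*u + b*v + c| / sqrt(a^2 + b^2)."""
    a, b, c = l
    return abs(a * p[0] + b * p[1] + c) / math.hypot(a, b)
```

    Restricting block matching to candidates within a small distance of the epipolar line is what makes the geometry-constrained interpolation cheaper and more robust than an unconstrained 2D search.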

  13. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.
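
    Self-similarity compensated prediction exploits the fact that neighbouring micro-images in a holoscopic frame look alike, so an already-coded block elsewhere in the same frame is a good predictor. A toy search, with a simplified notion of the "already-coded" region (everything strictly above the current block row), might look like:

```python
def sad_block(frame, x0, y0, x1, y1, bs):
    """Sum of absolute differences between the bs x bs blocks whose
    top-left corners are (x0, y0) and (x1, y1)."""
    return sum(abs(frame[y0 + dy][x0 + dx] - frame[y1 + dy][x1 + dx])
               for dy in range(bs) for dx in range(bs))

def self_similarity_search(frame, bx, by, bs):
    """Find the best-matching bs x bs block in the simplified
    'already-coded' region (rows above the current block) to use as the
    prediction for the block at (bx, by). Returns (cost, x, y)."""
    best = None
    for y in range(0, by - bs + 1):
        for x in range(0, len(frame[0]) - bs + 1):
            cost = sad_block(frame, x, y, bx, by, bs)
            if best is None or cost < best[0]:
                best = (cost, x, y)
    return best
```

    In a holoscopic frame the micro-image pitch makes such searches succeed at predictable offsets, which is the spatial redundancy the enhancement layer exploits.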

  14. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  15. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format suitable for rendering on Philips multiview auto-stereoscopic displays. The movie industry's recent interest in 3D has significantly increased the availability of stereo material, and in this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood, while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality, image-aligned depth maps proved an excellent match with our 3D displays.
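
    The multiple-footprint idea, one depth candidate per matching-window size with a selection step afterwards, can be sketched on a single scanline. The SAD cost and the simple minimum-cost selection below stand in for the paper's surface-filtering strategy:

```python
def sad(left, right, x, d, radius):
    """Sum of absolute differences between a window centred at x in the
    left row and at x - d in the right row; inf if out of bounds."""
    lo, hi = x - radius, x + radius
    if lo < 0 or hi >= len(left) or lo - d < 0 or hi - d >= len(right):
        return float("inf")
    return sum(abs(left[i] - right[i - d]) for i in range(lo, hi + 1))

def multi_footprint_disparity(left, right, x, max_d, radii=(1, 2, 3)):
    """Generate one disparity candidate per window footprint, then keep
    the candidate with the lowest matching cost (a toy stand-in for the
    paper's surface-filtering selection step)."""
    candidates = []
    for r in radii:
        costs = [(sad(left, right, x, d, r), d) for d in range(max_d + 1)]
        candidates.append(min(costs))
    return min(candidates)[1]
```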

  16. Standards-based approaches to 3D and multiview video coding

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.

    2009-08-01

    The extension of video applications to enable 3D perception, which typically is considered to include a stereo viewing experience, is emerging as a mass market phenomenon, as is evident from the recent prevalence of 3D major cinema title releases. For high quality 3D video to become a commonplace user experience beyond limited cinema distribution, adoption of an interoperable coded 3D digital video format will be needed. Stereo-view video can also be studied as a special case of the more general technologies of multiview and "free-viewpoint" video systems. The history of standardization work on this topic is actually richer than people may typically realize. The ISO/IEC Moving Picture Experts Group (MPEG), in particular, has been developing interoperability standards to specify various such coding schemes since the advent of digital video as we know it. More recently, the ITU-T Visual Coding Experts Group (VCEG) has been involved as well in the Joint Video Team (JVT) work on development of 3D features for H.264/14496-10 Advanced Video Coding, including Multiview Video Coding (MVC) extensions. This paper surveys the prior, ongoing, and anticipated future standardization efforts on this subject to provide an overview and historical perspective on feasible approaches to 3D and multiview video coding.

  17. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The requirements on source data, and on their capture and transfer for creating 3D scenes, have not yet been defined. The accuracy of 3D video scenes used for measuring purposes is hardly ever addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is supported by the capability of 3D video scenes to expand the field of data analysis applications for environmental monitoring, urban planning, and managerial decision problems. A technology for constructing 3D video scenes that meets specified metric requirements is offered. A technique and methodological background are recommended for this technology, which is used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of an accuracy estimation of the 3D video scenes are presented.

  18. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.
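
    The scalable-delivery idea can be reduced to a small sketch: each object component is coded as a base layer plus enhancement layers, and the server sends the longest prefix of layers that fits a terminal's budget. The greedy prefix rule below is our simplification of the adaptation logic, not the paper's exact mechanism:

```python
def adapt_layers(layer_sizes, budget):
    """Scalable-coding adaptation sketch: given per-layer byte sizes
    (base layer first, then enhancement layers), send the longest prefix
    of layers that fits the terminal/network budget. Returns the indices
    of the layers sent and the total bytes used."""
    sent, total = [], 0
    for i, size in enumerate(layer_sizes):
        if total + size > budget:
            break
        sent.append(i)
        total += size
    return sent, total
```

    The same decision can be made per component (geometry, texture, animation), which is what lets one encoded asset serve terminals with very different profiles.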

  19. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
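
    The locally-created-views approach relies on depth-image-based rendering: each pixel is shifted horizontally by a disparity d = f·B/Z derived from its depth. A minimal warp of one image row (nearest-pixel, no hole filling or occlusion handling, and an assumed sign convention) might look like:

```python
def disparity(depth, focal, baseline):
    """Horizontal disparity in pixels for a point at distance `depth`,
    from the standard pinhole relation d = f * B / Z."""
    return focal * baseline / depth

def render_virtual_row(row, depths, focal, baseline):
    """Warp one image row to a virtual viewpoint offset by `baseline`.
    Pixels that warp to the same target or off-image leave holes (None),
    which a real renderer would fill from neighbouring views."""
    out = [None] * len(row)
    for x, (value, z) in enumerate(zip(row, depths)):
        xs = x + int(round(disparity(z, focal, baseline)))
        if 0 <= xs < len(out):
            out[xs] = value
    return out
```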

  20. Transferring of speech movements from video to 3D face space.

    PubMed

    Pei, Yuru; Zha, Hongbin

    2007-01-01

    We present a novel method for transferring speech animation recorded in low quality videos to high resolution 3D face models. The basic idea is to synthesize the animated faces by an interpolation based on a small set of 3D key face shapes which span a 3D face space. The 3D key shapes are extracted by an unsupervised learning process in 2D video space to form a set of 2D visemes which are then mapped to the 3D face space. The learning process consists of two main phases: 1) Isomap-based nonlinear dimensionality reduction to embed the video speech movements into a low-dimensional manifold and 2) K-means clustering in the low-dimensional space to extract 2D key viseme frames. Our main contribution is that we use the Isomap-based learning method to extract intrinsic geometry of the speech video space and thus to make it possible to define the 3D key viseme shapes. To do so, we need only to capture a limited number of 3D key face models by using a general 3D scanner. Moreover, we also develop a skull movement recovery method based on simple anatomical structures to enhance 3D realism in local mouth movements. Experimental results show that our method can achieve realistic 3D animation effects with a small number of 3D key face models. PMID:17093336
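
    The second learning phase, K-means in the low-dimensional embedding space, can be sketched with plain k-means on 2D points; the Isomap reduction that precedes it in the paper is omitted here:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on low-dimensional embeddings: the clustering step
    the paper runs in the Isomap-reduced space to pick key viseme
    frames. Returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster goes empty
                centers[i] = [sum(x) / len(cl) for x in zip(*cl)]
    return centers, clusters
```

    In the paper's pipeline, the frame nearest each cluster centre becomes a 2D key viseme, which is then mapped to a captured 3D key face shape.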

  1. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis, and volleyball, and we show that feature detection performance is above 92% and event detection performance is about 90%.

  2. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    PubMed

    Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram

    2016-01-01

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation. PMID:26978075

  4. 3D reconstruction of rotational video microscope based on patches

    NASA Astrophysics Data System (ADS)

    Ma, Shijie; Qu, Yufu

    2015-11-01

    Due to its small field of view and shallow depth of field, a microscope can only capture 2D images of an object. In order to observe the three-dimensional structure of a micro object, a microscopy image reconstruction algorithm based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two respects: first, it increases the number of propagation directions; second, during expansion, different expansion radii and expansion counts are set according to the angle between the normal vector of the seed patch and the direction vector of the line passing through the seed patch center and the camera center. Compared with PMVS, the new algorithm produces three times as many 3D points, and the holes in the vertical sides are also eliminated.
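
    The angle-dependent expansion rule can be sketched as follows; the angle thresholds and radius multipliers are illustrative values of ours, not the paper's:

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def expansion_radius(normal, view_ray, base=1.0):
    """Pick an expansion radius from the angle between the patch normal
    and the ray through the patch centre and the camera centre: frontal
    patches (small angle) expand further than oblique ones."""
    a = angle_deg(normal, view_ray)
    if a < 30:
        return 2.0 * base
    if a < 60:
        return 1.0 * base
    return 0.5 * base
```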

  5. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly, because 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information, and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~13 ms per 3D video frame).
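
    The first scheme, colour reduction followed by zlib, can be demonstrated at toy scale with only the Python standard library; the quantisation step size and the payload layout below are our assumptions, not TEEVE's exact format:

```python
import zlib

def reduce_colors(pixels, levels=32):
    """Uniform colour quantisation: map 0-255 values onto `levels` bins
    (the first scheme reduces colours before lossless compression)."""
    step = 256 // levels
    return bytes((p // step) * step for p in pixels)

def compress_frame(color, depth, levels=32):
    """Compress one 3D video frame (colour samples plus per-pixel depth)
    with colour reduction followed by zlib, mirroring the first scheme's
    pipeline at toy scale."""
    payload = reduce_colors(color, levels) + bytes(depth)
    return zlib.compress(payload, 9)
```

    Quantisation makes the colour data far more repetitive, which is what lets a generic dictionary coder like zlib reach high ratios without any motion estimation.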

  6. Does training with 3D videos improve decision-making in team invasion sports?

    PubMed

    Hohmann, Tanja; Obelöer, Hilke; Schlapkohl, Nele; Raab, Markus

    2016-04-01

    We examined the effectiveness of video-based decision training in national youth handball teams. Extending previous research, we tested in Study 1 whether a three-dimensional (3D) video training group would outperform a two-dimensional (2D) group. In Study 2, a 3D training group was compared to a control group and a group trained with a traditional tactic board. In both studies, training duration was 6 weeks. Performance was measured in a pre- to post-retention design. The tests consisted of a decision-making task measuring quality of decisions (first and best option) and decision time (time for first and best option). The results of Study 1 showed learning effects and revealed that the 3D video group made faster first-option choices than the 2D group, but differences in the quality of options were not pronounced. The results of Study 2 revealed learning effects for both training groups compared to the control group, and faster choices in the 3D group compared to both other groups. Together, the results show that 3D video training is the most useful tool for improving choices in handball, but only in reference to decision time and not decision quality. We discuss the usefulness of a 3D video tool for training of decision-making skills outside the laboratory or gym. PMID:26207956

  7. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one faces three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validation of the produced models is applied to find the final predicted accuracy and the model's level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums, and low-detail documentation.
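
    Frame selection for blur can be sketched with the common variance-of-Laplacian sharpness score; the score and the keep-ratio policy are standard heuristics, not necessarily the paper's exact criterion:

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian response over the image
    interior: a standard sharpness score where blurred frames give low
    values. Used here as a toy criterion for dropping shaky frames."""
    h, w = len(img), len(img[0])
    resp = [img[y][x + 1] + img[y][x - 1] + img[y + 1][x] + img[y - 1][x]
            - 4 * img[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    m = sum(resp) / len(resp)
    return sum((r - m) ** 2 for r in resp) / len(resp)

def select_sharp_frames(frames, keep_ratio=0.5):
    """Keep the sharpest `keep_ratio` fraction of frames (at least one)."""
    ranked = sorted(frames, key=laplacian_variance, reverse=True)
    return ranked[:max(1, int(len(frames) * keep_ratio))]
```

    A full selection method would combine such a sharpness score with a coverage check so that dropping frames never leaves part of the object unobserved.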

  8. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach that uses depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results obtained under the JCT-3V/MPEG 3DV Common Test Conditions show that the proposed tools reduce the bit rate of the coded video data by 15% on average, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, resulting in even higher compression efficiency of about 20% total delta bit rate reduction for the coded MVD data over the reference MVC+D coding. Considering these significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  9. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing detailed 3D structures of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation, and Poisson surface reconstruction. Before the first step, we partition one video sequence into groups, where each group consists of two successive frames (image pairs), and each pair in each group contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SFM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as seed points, and then obtain the dense point cloud from the sparse point cloud via the depth testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
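    For rigid stitching, the transformation estimated from matched features in the overlapping region is a rotation plus a translation; a standard least-squares estimate from matched 3D points is the Kabsch algorithm. This is a generic sketch of that registration step under a rigidity assumption, not the paper's exact estimator:

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares R, t with dst ≈ src @ R.T + t (Kabsch algorithm)."""
        src_c = src - src.mean(axis=0)          # center both matched point sets
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c                     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t
    ```

    Applying the recovered (R, t) to one group's point cloud aligns it with its neighbour, after which the clouds can be merged.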

  10. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic, and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose, we integrate video surveillance data with a 3D indoor model of the building and develop a single-person moving-path tracking method. We process the surveillance videos to detect single-person moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single-person traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The provided experiments with a single person have verified the effectiveness and robustness of the method.
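    The final step, connecting per-camera traces with the help of the indoor network model's connectivity, can be sketched as greedy chaining of time-stamped segments. The segment structure, room names, and connectivity table below are hypothetical illustrations, not the paper's data model:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Segment:
        camera: str
        room: str          # node in the 3D indoor network model (assumed)
        t_start: float
        t_end: float

    # Hypothetical room connectivity taken from the indoor network model.
    CONNECTED = {("lobby", "hall"), ("hall", "lab")}

    def rooms_connected(a, b):
        return a == b or (a, b) in CONNECTED or (b, a) in CONNECTED

    def chain_segments(segments, max_gap=5.0):
        """Greedily link single-person trace segments into one walking path."""
        path, last = [], None
        for s in sorted(segments, key=lambda s: s.t_start):
            if last is None or (0 <= s.t_start - last.t_end <= max_gap
                                and rooms_connected(last.room, s.room)):
                path.append(s)
                last = s
        return path
    ```

    Segments that start long after the previous one ended, or in a room not reachable from the previous one, are rejected as belonging to a different walk.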

  11. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both the industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is desirable owing to its balance of labour cost and 3D effect. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection that keeps temporal continuity more reliable and reduces the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered colour variation and motion intensity. The key-frame interval is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. The experimental results show that the proposed scheme performs better than an existing 2D-to-3D scheme with a fixed key-frame interval.
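    Adaptive key-frame selection of this kind can be illustrated with a toy scorer that accumulates colour variation and motion intensity between consecutive frames and fires a key-frame once a threshold is reached. The specific proxies and constants here are assumptions for illustration, not the paper's measures:

    ```python
    import numpy as np

    def keyframe_candidates(frames, alpha=0.5, threshold=20.0, min_interval=5):
        """Flag frames where accumulated colour change and motion exceed a threshold."""
        keys = [0]                    # the first frame is always a key-frame
        acc = 0.0
        for i in range(1, len(frames)):
            diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
            color_change = diff.mean()            # proxy for colour variation
            motion = np.percentile(diff, 95)      # proxy for motion intensity
            acc += alpha * color_change + (1 - alpha) * motion
            if acc >= threshold and i - keys[-1] >= min_interval:
                keys.append(i)                    # reset accumulation at each key-frame
                acc = 0.0
        return keys
    ```

    A static clip yields only the initial key-frame, while a scene cut or strong motion inserts a new one, bounding the propagation distance for the depth maps.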

  12. A 3D-Video-Based Computerized Analysis of Social and Sexual Interactions in Rats

    PubMed Central

    Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior. PMID:24205238

  14. 3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading

    PubMed Central

    2011-01-01

    Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. Current solutions are often not very promising for the patient; it would therefore be interesting to measure the dynamic 3D deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that it is possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and the 3D deformation of the pelvis specimen was then computed. The accuracy of the markers' 3D movement was verified against a step-function 3D displacement curve generated by a manually driven 3D micro-motion stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ± 0.036 mm, and ± 0.022 mm if tracked by 6 cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy for the dynamic test setup of ± 0.036 mm. Conclusion This 3D test setup opens new possibilities in the dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations. The resulting 3D deformation dataset can be used for a better estimation of material

  15. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, the software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  16. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students

    PubMed Central

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    Background: There is limited evidence on the distinct effect of 3D oral health education videos over conventional 2D projections in improving oral health knowledge. This randomized controlled trial tested the effect of 3D oral health educational videos among first-year dental students. Materials and Methods: 80 first-year dental students were enrolled and divided into two groups (test and control). The test group was shown 3D animations and the control group regular 2D video projections covering periodontal anatomy, etiology, presenting conditions, preventive measures, and treatment of periodontal problems. The effect of the 3D animation was evaluated using a questionnaire of 10 multiple-choice questions given to all participants at baseline, immediately after, and 1 month after the intervention. Clinical parameters, namely Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S), were measured at baseline and at the 1-month follow-up. Results: A significant difference in post-intervention knowledge scores was found between the groups, as assessed by an unpaired t-test (p<0.001), immediately after and after 1 month. At baseline, all clinical parameters in both groups were similar and showed a significant reduction (p<0.001) after 1 month, whereas no significant post-intervention difference was noticed between the groups. Conclusion: 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The results also indicate better visual comprehension for students and greater health-care outcomes. PMID:26759805

  17. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3D numerical coordinate computation, able to provide precise 3D topographical maps and 3D archival data sufficient to reconstruct a 3D virtual-reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support to create a 3D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3D surface map data will shortly be acquired within the time frame of conventional 2D video. With simple field-capable calibration and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3D data points capable of creating unbiased virtual-reality reconstructions, or actual physical replicas, for investigators, prosecutors, and jury.

  18. Video quality assessment for web content mirroring

    NASA Astrophysics Data System (ADS)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to the increasing user expectation on watching experience, moving web high quality video streaming content from the small screen in mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change for various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
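    The two temporal metrics above, freeze time ratio and rate of freeze events, can be computed directly from the extracted frame timestamps. The stall-detection tolerance below is an assumption; the paper does not state its exact threshold:

    ```python
    def freeze_metrics(timestamps, nominal_dt, tol=1.5):
        """Freeze time ratio and freeze-event rate from decoded-frame timestamps."""
        total = timestamps[-1] - timestamps[0]
        freeze_time, events = 0.0, 0
        for a, b in zip(timestamps, timestamps[1:]):
            gap = b - a
            if gap > tol * nominal_dt:       # playback stalled beyond one frame period
                freeze_time += gap - nominal_dt
                events += 1
        return freeze_time / total, events / total
    ```

    The ratio captures how much of the session was spent frozen, while the event rate distinguishes one long stall from many short ones, which affect user experience differently.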

  19. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative game is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of releasing players from the intractable game controller. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the level of accuracy strongly depends on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip. Here, a column corresponds to a human sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D human motion-capture data matrix does not contain pixel values and is closer to the human level of semantics.
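    The column-selection idea, with each column of the capture matrix holding one coordinate of a sub-body part and each row one time frame, amounts to simple slicing. The column layout below is a made-up example for illustration, not the actual VICON export format:

    ```python
    import numpy as np

    # Hypothetical capture: 120 frames, 3 joints x (x, y, z) = 9 columns.
    JOINT_COLUMNS = {"hip": slice(0, 3), "knee": slice(3, 6), "ankle": slice(6, 9)}

    def joint_trajectory(mocap, joint):
        """Extract one sub-body part's motion by selecting its columns."""
        return mocap[:, JOINT_COLUMNS[joint]]

    mocap = np.zeros((120, 9))                 # stand-in for a real motion clip
    knee = joint_trajectory(mocap, "knee")     # (120, 3) time series of the knee
    ```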

  20. 3D filtering technique in presence of additive noise in color videos implemented on DSP

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Palacios, Alfredo

    2014-05-01

    A filtering method for color videos contaminated by additive noise is presented. The proposed framework employs three filtering stages: spatial similarity filtering, neighboring-frame denoising, and spatial post-processing smoothing. The difference from other state-of-the-art filtering methods is that this approach, based on fuzzy logic, analyses basic and related gradient values between neighboring pixels in a 7 × 7 sliding window in the vicinity of a central pixel in each of the RGB channels. Then, similarity measures between the analogous pixels in the color bands are taken into account during the denoising. Next, two neighboring video frames are analyzed together, estimating local motions between the frames using a block matching procedure. In the final stage, edges and smoothed areas are processed differently in the current frame during the post-processing filtering. Numerous simulation results confirm that this 3D fuzzy filter performs better than other state-of-the-art methods, such as 3D-LLMMSE, WMVCE, RFMDAF, FDARTF G, VBM3D and NLM, in terms of objective criteria (PSNR, MAE, NCD and SSIM) as well as subjective perception via the human visual system on different color videos. An efficiency analysis of the designed and the other mentioned filters has been performed on the DSPs TMS320DM642 and TMS320DM648 by Texas Instruments through MATLAB and the Simulink module, showing that the novel 3D fuzzy filter can be used in real-time processing applications.
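    Two of the objective criteria cited above, PSNR and MAE, are standard and easy to state precisely; a minimal implementation for 8-bit frames:

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between a reference and a filtered frame."""
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def mae(ref, test):
        """Mean absolute error between frames."""
        return float(np.mean(np.abs(ref.astype(float) - test.astype(float))))
    ```

    Higher PSNR and lower MAE indicate a filtered frame closer to the noise-free reference, which is how the paper ranks the competing filters.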

  1. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for the coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high-quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  2. Video reframing relying on panoramic estimation based on a 3D representation of the scene

    NASA Astrophysics Data System (ADS)

    de Simon, Agnes; Figue, Jean; Nicolas, Henri

    2000-05-01

    This paper describes a new method for creating mosaic images from an original video and for computing a new sequence that modifies some camera parameters, such as image size, scale factor, and view angle. A mosaic image is a representation of the full scene observed by a moving camera during its displacement; it provides a wide-angle view of the scene from a sequence of images shot with a narrow-angle camera. This paper proposes a method to create a virtual sequence from a calibrated original video and a rough 3D model of the scene. A 3D relationship between the original and virtual images gives, for a single 3D point of the scene model, the corresponding pixels in the different images. To texture the model with natural textures obtained from the original sequence, a criterion based on constraints related to the temporal variations of the background and on 3D geometric considerations is used. Finally, in the presented method, the textured 3D model is used to recompute a new image sequence with a possibly different point of view and camera aperture angle. The algorithm has been validated on virtual sequences, and the results obtained so far are encouraging.
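    The 3D relationship that maps a scene-model point to its pixel in each view is, under a pinhole camera model, a projection through each camera's intrinsics and pose. This is a generic sketch of that correspondence with made-up calibration values, not the paper's specific calibration:

    ```python
    import numpy as np

    def project(K, R, t, X):
        """Project a 3D scene point X into pixel coordinates for camera (K, R, t)."""
        x = K @ (R @ X + t)        # homogeneous image coordinates
        return x[:2] / x[2]

    # The same 3D model point projected into two calibrated views yields a
    # pixel correspondence between the original and the virtual image.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    X = np.array([0.0, 0.0, 4.0])
    p_original = project(K, np.eye(3), np.zeros(3), X)
    p_virtual = project(K, np.eye(3), np.array([0.5, 0.0, 0.0]), X)
    ```

    Shifting the virtual camera moves the projected pixel, which is exactly the correspondence used to transfer texture from the original sequence onto the model.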

  3. 3D MPEG-2 video transmission over broadband network and broadcast channels

    NASA Astrophysics Data System (ADS)

    Gagnon, Gilles; Subramaniam, Suganthan; Vincent, Andre

    2001-06-01

    This paper explores the transmission of MPEG-2 compressed stereoscopic (3-D) video over broadband networks and digital television (DTV) broadcast channels. A system has been developed to perform 3-D (stereoscopic) MPEG-2 video encoding, transmission and decoding over broadband networks in real-time. Such a system can benefit applications where a depiction of the relative positions of objects in 3-dimensional space is critical, by providing visual cues along the sight axis. Applications such as tele-medicine, remote surveillance, tele-education, entertainment and others could benefit from such a system since it conveys an added viewing experience. For simplicity and cost efficiency the system is kept as simple as possible while offering a certain degree of control over the encoding and decoding platforms. Data exchange is done with TCP/IP for control between the server and client and with UDP/IP for the MPEG-2 transport streams delivered to the client. Parameters such as encoding rate can be set independently for the left and right viewing channels to satisfy network bandwidth restrictions, while maintaining satisfactory quality. Using this system, transmission of stereoscopic MPEG-2 transport streams (video and audio) has been performed over a 155 Mbps ATM network shared with other video transactions between server and clients. Preliminary results have shown that the system is reasonably robust to network impairments making it useable in relatively loaded networks. An innovative technique for broadcasting Standard Definition Television 3-D video using an ATSC compatible encoding and broadcasting system is also presented. This technique requires a simple video multiplexer before the ATSC encoding process, and a slight modification at the receiver after the ATSC decoding.

  4. Content-aware objective video quality assessment

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, Benhur; Niño-Castañeda, Jorge; Platiša, Ljiljana; Philips, Wilfried

    2016-01-01

    Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing the user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features in the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using those as parameters of the anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that when adequately combined with content related indexes, even very simple distortion measures (e.g., peak signal to noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its noncontent-aware baseline.
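    One widely used spatiotemporal-activity index of the kind the abstract mentions is the temporal information (TI) measure from ITU-T P.910. Combining it with a simple distortion measure such as PSNR is the spirit of the content-aware approach, though the weighting below is a toy assumption, not the authors' model:

    ```python
    import numpy as np

    def temporal_information(frames):
        """ITU-T P.910-style TI: max over time of the std of frame differences."""
        return max(float(np.std(b.astype(float) - a.astype(float)))
                   for a, b in zip(frames, frames[1:]))

    def content_aware_score(psnr_db, ti, gamma=0.05):
        """Toy content-aware correction: discount PSNR in high-motion content."""
        return psnr_db - gamma * ti
    ```

    The idea is that the same PSNR is perceived differently in high-motion and static content, so the content index modulates the raw distortion measure.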

  5. A perceptual quality metric for high-definition stereoscopic 3D video

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Stramacci, A.; Boev, A.; Gotchev, A.

    2015-03-01

    The use of 3D video is growing in several fields such as entertainment, military simulations, medical applications. However, the process of recording, transmitting, and processing 3D video is prone to errors thus producing artifacts that may affect the perceived quality. Nowadays a challenging task is the definition of a new metric able to predict the perceived quality with low computational complexity in order to be used in real-time applications. The research in this field is very active due to the complexity of the analysis of the influence of stereoscopic cues. In this paper we present a novel stereoscopic metric based on the combination of relevant features able to predict the subjective quality rating in a more accurate way.

  6. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity. PMID:24991752

  7. Video lensfree microscopy of 2D and 3D culture of cells

    NASA Astrophysics Data System (ADS)

    Allier, C. P.; Vinjimore Kesavan, S.; Coutard, J.-G.; Cioni, O.; Momey, F.; Navarro, F.; Menneteau, M.; Chalmond, B.; Obeid, P.; Haguet, V.; David-Watine, B.; Dubrulle, N.; Shorte, S.; van der Sanden, B.; Di Natale, C.; Hamard, L.; Wion, D.; Dolega, M. E.; Picollet-D'hahan, N.; Gidrol, X.; Dinten, J.-M.

    2014-03-01

    Innovative imaging methods are continuously developed to investigate the function of biological systems at the microscopic scale. As an alternative to advanced cell microscopy techniques, we are developing lensfree video microscopy, which opens new ranges of capabilities, in particular at the mesoscopic level. Lensfree video microscopy allows the observation of a cell culture in an incubator over a very large field of view (24 mm2) for extended periods of time. As a result, a large set of comprehensive data can be gathered with strong statistics, both in space and time. Video lensfree microscopy can capture images of cells cultured in various physical environments. We highlight two case studies: the quantitative analysis of the spontaneous network formation of HUVEC endothelial cells, and, by coupling lensfree microscopy with 3D cell culture, the study of epithelial tissue morphogenesis. In summary, we demonstrate that lensfree video microscopy is a powerful tool to conduct cell assays in 2D and 3D culture experiments. The applications are in the realms of fundamental biology, tissue regeneration, drug development and toxicology studies.

  8. Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Vienne, Cyril; Blondé, Laurent

    2013-03-01

    Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process the information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis of these areas. In the context of media entertainment, the viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow the conventions used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we study specifically the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye-tracker. Our results bring more insight into the deployment of visual attention in stereoscopic 3D content viewing, and provide further understanding of the differences in visual attention behavior between 2D and 3D. Our results show that a traditional 2D scene compositing approach such as the use of foreground blur does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.

  9. Rate-constrained 3D surface estimation from noise-corrupted multiview depth videos.

    PubMed

    Sun, Wenxiu; Cheung, Gene; Chou, Philip A; Florencio, Dinei; Zhang, Cha; Au, Oscar C

    2014-07-01

    Transmitting compactly represented geometry of a dynamic 3D scene from a sender can enable a multitude of imaging functionalities at a receiver, such as synthesis of virtual images at freely chosen viewpoints via depth-image-based rendering. While depth maps (projections of 3D geometry onto 2D image planes at chosen camera viewpoints) can nowadays be readily captured by inexpensive depth sensors, they are often corrupted by non-negligible acquisition noise. Given that depth maps need to be denoised and compressed at the encoder for efficient network transmission to the decoder, in this paper we consider the denoising and compression problems jointly, arguing that doing so results in better overall performance than solving the two problems separately in two stages. Specifically, we formulate a rate-constrained estimation problem where, given a set of observed noise-corrupted depth maps, the most probable (maximum a posteriori, MAP) 3D surface is sought within a search space of surfaces with representation size no larger than a prespecified rate constraint. Our rate-constrained MAP solution reduces to the conventional unconstrained MAP 3D surface reconstruction solution if the rate constraint is loose. To solve the posed rate-constrained estimation problem, we propose an iterative algorithm in which the structure (object boundaries) and the texture (surfaces within the object boundaries) of the depth maps are optimized alternately in each iteration. Using the MVC codec for compression of multiview depth video and MPEG free-viewpoint video sequences as input, experimental results show that rate-constrained estimated 3D surfaces computed by our algorithm can reduce the coding rate of depth maps by up to 32% compared with unconstrained estimated surfaces, for the same quality of synthesized virtual views at the decoder. PMID:24876124
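
    The Lagrangian view of such rate-constrained optimization can be illustrated with a toy bit-allocation sketch in Python; the per-block (rate, distortion) options below are invented numbers, standing in for the much richer depth-map search space the paper actually optimizes:

```python
# Each block can be coded with one of several (rate_bits, distortion)
# options; the numbers are invented for illustration.
blocks = [
    [(8, 1.0), (4, 4.0), (2, 9.0)],   # options for block 0
    [(8, 0.5), (4, 2.0), (2, 16.0)],  # options for block 1
    [(8, 2.0), (4, 3.0), (2, 5.0)],   # options for block 2
]

def allocate(blocks, rate_budget):
    """Lagrangian bit allocation: sweep the multiplier lam and pick,
    per block, the option minimizing distortion + lam * rate; return
    the lowest-distortion allocation whose total rate fits the budget."""
    best = None
    for lam in [x * 0.05 for x in range(200)]:
        choice = [min(opts, key=lambda o: o[1] + lam * o[0]) for opts in blocks]
        rate = sum(r for r, _ in choice)
        dist = sum(d for _, d in choice)
        if rate <= rate_budget and (best is None or dist < best[1]):
            best = (rate, dist, choice)
    return best

# With a loose budget the unconstrained (lowest-distortion) choice
# survives; a tight budget forces coarser options.
loose = allocate(blocks, 24)
tight = allocate(blocks, 12)
```

This mirrors the abstract's observation that the rate-constrained solution reduces to the unconstrained one when the constraint is loose.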

  10. ROI-preserving 3D video compression method utilizing depth information

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan

    2015-09-01

    Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and remains compatible with transmission over existing channels, but can also, to some extent, provide technical support for advanced 3D video compression. This paper proposes an ROI-preserving compression scheme to further improve visual quality at a limited bit rate. Exploiting the connection between the focus of the Human Visual System (HVS) and depth information, regions of interest (ROI) can be selected automatically via depth map processing. The main improvement over common methods is that a mean-shift-based segmentation is applied to the depth map before foreground ROI selection to keep the integrity of the scene. Besides, the sensitive areas along edges are also protected. Spatio-temporal filtering adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is better preserved and more consistent with subjective perception, and that the proposed method keeps the key high-frequency information more effectively while the bit rate is reduced.
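
    The mean-shift step that drives the foreground selection can be sketched in one dimension on raw depth values; the depth samples, bandwidth and the 3 m foreground threshold below are illustrative assumptions, not values from the paper:

```python
def mean_shift_1d(points, bandwidth, iters=30):
    """Flat-kernel mean shift in 1D: repeatedly move each mode
    estimate to the mean of the data points within the bandwidth.
    Points converging to the same mode form one depth segment."""
    modes = list(points)
    for _ in range(iters):
        updated = []
        for m in modes:
            neigh = [p for p in points if abs(p - m) <= bandwidth]
            updated.append(sum(neigh) / len(neigh))
        modes = updated
    return modes

# Depth samples (metres): a near foreground object and a far background.
depths = [1.0, 1.1, 0.9, 1.05, 5.0, 5.2, 4.9, 5.1]
modes = mean_shift_1d(depths, bandwidth=1.0)
# Samples whose mode lies nearest the camera form the foreground ROI.
roi = [i for i, m in enumerate(modes) if m < 3.0]
```

Segmenting the whole depth map before thresholding, as the paper does, keeps an object together even where its depth varies slightly.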

  11. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates this way: Before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method of using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure does not require special training or expensive calibration equipment (other than a camera calibration pattern, i.e., a checkerboard that can be printed on any laser or inkjet printer).

  12. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.

  13. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We present a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronized captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. The depth image is then generated from these four simultaneously captured sub-IR images, so the motion blur problem is eliminated. The resulting performance is very useful in applications of 3D cameras to human-machine interaction devices, such as user interfaces for TVs, monitors, or handheld devices, and motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with depth simultaneously at the 'single shot' frame rate.
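
    The reconstruction behind such cameras is typically the standard four-phase TOF computation, sketched below; the sample intensities and the sign convention are illustrative assumptions and not necessarily the authors' exact pipeline:

```python
import math

C = 299792458.0  # speed of light, m/s

def tof_depth(i0, i1, i2, i3, f_mod):
    """Depth from four IR intensity samples taken at 0, 90, 180 and
    270 degrees of the modulation cycle (standard four-phase TOF)."""
    phase = math.atan2(i3 - i1, i0 - i2)   # recovered modulation phase
    if phase < 0.0:                        # wrap into [0, 2*pi)
        phase += 2.0 * math.pi
    return C * phase / (4.0 * math.pi * f_mod)

# A quarter-cycle delay (phase pi/2) at 20 MHz modulation corresponds
# to one quarter of the unambiguous range c / (2 * f_mod):
d = tof_depth(0.0, -1.0, 0.0, 1.0, 20e6)
```

The 'single shot' architecture captures all four samples simultaneously through the 2x2 lens array, rather than sequentially, which is why the motion drag between samples disappears.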

  14. Depth-based representations: Which coding format for 3D video broadcast applications?

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan

    2011-03-01

    3D Video (3DV) delivery standardization is currently ongoing in MPEG. It is now time to choose the 3DV data representation format. What is at stake is the final quality for end-users, i.e. the visual quality of synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting multiview data into color and depth occlusion layers. We compare final view quality using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to original views and to synthesized references (obtained without compression). Eventually, LDV significantly outperforms MVD when using state-of-the-art reference synthesis algorithms. Managing occlusions before encoding is advantageous in comparison with handling redundant signals at the decoder side. Besides, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.

  15. Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2015-04-01

    The processes during an eruption are very complex, and several parameters are measured to better understand them. One of these parameters is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself. The resulting velocity field provides insights into the dynamics of a vent. Here we test our algorithm for 3-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we acquired videos of lava fountains at the main fissure with two high-speed cameras at small angles between the cameras. Additionally, we test the algorithm on videos of the geyser Strokkur, where we had three cameras and larger angles between the cameras. The velocity is calculated by a correlation in Fourier space of contiguous images. Considering that we only have the velocity field of the surface, smaller angles result in a better resolution of the velocity field in the near field. For general movements, larger angles can also be useful, e.g. to get the direction, height and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications and with different setups, depending on the application.
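
    The correlation in Fourier space that the velocity estimation relies on amounts to FFT-based cross-correlation of consecutive frames; a minimal sketch, assuming numpy is available and using a synthetic pair of frames in place of real footage:

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Integer pixel shift of frame_b relative to frame_a, found as
    the peak of the cross-correlation computed in Fourier space."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    corr = np.fft.ifft2(np.conj(fa) * fb).real  # circular cross-correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past the half-way point are negative (circular wrap-around).
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return int(dy), int(dx)

# A bright feature moves 3 px down and 5 px right between frames:
a = np.zeros((64, 64)); a[10, 12] = 1.0
b = np.zeros((64, 64)); b[13, 17] = 1.0
```

Dividing each frame into interrogation windows and applying this per window yields the dense velocity field; a known scale then converts pixel shifts per frame into physical velocities.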

  16. Analysis of EEG signals regularity in adults during video game play in 2D and 3D.

    PubMed

    Khairuddin, Hamizah R; Malik, Aamir S; Mumtaz, Wajid; Kamel, Nidal; Xia, Likun

    2013-01-01

    Video games have long been part of the entertainment industry. Nonetheless, it is not well known how video games affect us with the advancement of 3D technology. The purpose of this study is to investigate the regularity of EEG signals when playing video games in 2D and 3D modes. A total of 29 healthy subjects (24 male, 5 female) with a mean age of 21.79 (1.63) years participated. Subjects were asked to play a car racing video game in three different modes (2D, 3D passive and 3D active). In the 3D passive mode, subjects wore passive polarized glasses (cinema type), while for 3D active, active shutter glasses were used. Scalp EEG data were recorded during game play using a 19-channel EEG machine with linked ears as reference. After the data were pre-processed, the signal irregularity for all conditions was computed. Two parameters were used to measure signal complexity for time series data: i) Hjorth complexity and ii) the Composite Permutation Entropy Index (CPEI). Based on these two parameters, our results showed that the complexity level increased from the eyes-closed to the eyes-open condition, and increased further in the case of 3D as compared to 2D game play. PMID:24110125
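
    Of the two regularity measures named, Hjorth complexity has a compact closed form; a minimal sketch in pure Python, with an illustrative synthetic signal standing in for real EEG data:

```python
import math

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def mobility(x):
    """Hjorth mobility: sqrt of the variance ratio between the first
    difference of the signal and the signal itself."""
    dx = [b - a for a, b in zip(x, x[1:])]
    return math.sqrt(variance(dx) / variance(x))

def hjorth_complexity(x):
    """Hjorth complexity: mobility of the first difference over the
    mobility of the signal; close to 1 for a pure sinusoid."""
    dx = [b - a for a, b in zip(x, x[1:])]
    return mobility(dx) / mobility(x)

# A pure sine is maximally regular (complexity ~1); mixing in a
# higher-frequency component raises the complexity.
n = 1000
sine = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
mixed = [s + 0.3 * math.sin(2 * math.pi * 50 * t / n)
         for t, s in zip(range(n), sine)]
```

A per-channel, per-condition value of this kind is what allows the eyes-closed, eyes-open, 2D and 3D conditions to be compared.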

  17. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces. PMID:24593372
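
    For an ideal one-dimensional random walk, the mean squared displacement-vs-time analysis reduces to fitting MSD(tau) = 2*D*tau; a toy sketch with a synthetic trajectory (the time step and diffusion coefficient are illustrative values, not parameters from the experiment):

```python
import random

def msd(track, lag):
    """Mean squared displacement of a 1D track at a given frame lag."""
    d = [(track[i + lag] - track[i]) ** 2 for i in range(len(track) - lag)]
    return sum(d) / len(d)

def diffusion_coefficient(track, dt, max_lag=4):
    """Fit MSD(tau) = 2*D*tau through the origin over the first few
    lags (1D, so the slope of the MSD curve is 2*D)."""
    lags = range(1, max_lag + 1)
    num = sum(msd(track, L) * L * dt for L in lags)
    den = sum((L * dt) ** 2 for L in lags)
    return num / den / 2.0

# Synthetic 1D Brownian track: Gaussian steps of variance 2*D*dt.
random.seed(1)
dt, d_true = 0.01, 0.5
track = [0.0]
for _ in range(20000):
    track.append(track[-1] + random.gauss(0.0, (2 * d_true * dt) ** 0.5))

est = diffusion_coefficient(track, dt)
```

The local analysis in the article applies this fit to short windows of a track, so that D, drift and tracking accuracy can vary with position.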

  18. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  19. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  20. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture the rock fall velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They differ mainly in the way the scaling factors between the movie frames and reality are determined. For this purpose, some scale bars and targets with known dimensions were distributed in advance along the slope. The single scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar, then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster is scaled using the four targets (identified using frontal video) nearest to the trajectory to be analyzed; the average of the scaling factors is taken as the final scaling factor. (iii) The image raster is scaled using the four targets nearest to the trajectory to be analyzed; the scaling factor for one trajectory is calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) Same as the previous method, but with scaling factors varying along the trajectory. It was shown that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, lateral deviations of the rock boulder from the fall line are not accounted for, consequently adding error to the analysis. Thus a combination of scaling methods (i) and (iv) is considered to give the best results.
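
    The arithmetic shared by all four scaling approaches, converting a pixel displacement into a physical velocity via a scale bar of known length, can be sketched as follows (the numbers are illustrative, not measurements from the experiments):

```python
def scale_factor(known_length_m, measured_length_px):
    """Metres per pixel, from a scale bar of known physical length."""
    return known_length_m / measured_length_px

def speed_m_per_s(disp_px, metres_per_px, fps):
    """Boulder speed from its per-frame pixel displacement."""
    return disp_px * metres_per_px * fps

# A 2 m scale bar spans 400 px in the frame; the boulder moves 15 px
# between consecutive frames of a 240 fps recording:
s = scale_factor(2.0, 400.0)        # 0.005 m/px
v = speed_m_per_s(15.0, s, 240.0)   # 18.0 m/s
```

The four methods differ only in how this scale factor is chosen, averaged, or allowed to vary along the trajectory, which is exactly where the errors discussed above enter.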

  1. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  2. Depth enhancement of S3D content and the psychological effects

    NASA Astrophysics Data System (ADS)

    Hirahara, Masahiro; Shiraishi, Saki; Kawai, Takashi

    2012-03-01

    Stereoscopic 3D (S3D) imaging technologies have recently become widely used to create content for movies, TV programs, games, etc. Although S3D content differs from 2D content by the use of binocular parallax to induce depth sensation, the relationship between depth control and the user experience remains unclear. In this study, the user experience was subjectively and objectively evaluated in order to determine the effectiveness of depth control, such as an expansion or reduction, or a forward or backward shift, of the range of maximum parallactic angles in the crossed and uncrossed directions (the depth bracket). Four types of S3D content were used in the subjective and objective evaluations. The depth brackets of the comparison stimuli were modified in order to enhance the depth sensation corresponding to the content. Interpretation Based Quality (IBQ) methodology was used for the subjective evaluation, and the heart rate was measured to evaluate the physiological effect. The results of the evaluations suggest the following two points. (1) Expansion/reduction of the depth bracket affects preference and enhances positive emotions towards the S3D content. (2) Expansion/reduction of the depth bracket produces the above-mentioned effects more notably than shifting in the crossed/uncrossed directions.

  3. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  4. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing VideoANT, a tool designed to create text-based annotations integrated within the timeline of a video hosted online. Several…

  5. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  6. High efficient methods of content-based 3D model retrieval

    NASA Astrophysics Data System (ADS)

    Wu, Yuanhao; Tian, Ling; Li, Chenggang

    2013-03-01

    Content-based 3D model retrieval is of great help to facilitate the reuse of existing designs and to inspire designers during conceptual design. However, a gap remains in applying it in industry due to low time efficiency. This paper presents two new methods with high efficiency for building a content-based 3D model retrieval system. First, an improvement is made on the "Shape Distribution (D2)" algorithm, and a new algorithm named "Quick D2" is proposed. Four sample 3D mechanical models are used in an experiment to compare the time cost of the two algorithms. The result indicates that the time cost of Quick D2 is much lower than that of D2, while the descriptors extracted by the two algorithms are almost the same. Second, an expandable 3D model repository index method with high performance, namely the RBK index, is presented. On the basis of the RBK index, the search space is pruned effectively during the search process, leading to a speed-up of the whole system. The factors that influence the values of the key parameters of the RBK index are discussed, and an experimental method to find the optimal values of the key parameters is given. Finally, "3D Searcher", a content-based 3D model retrieval system, is developed. By using the proposed methods, the time cost for the system to respond to one query online is reduced by 75% on average. The system has been implemented in a manufacturing enterprise, and practical query examples from a case study of automobile rear-axle design are also shown. The research method presented shows a new research perspective and can effectively improve content-based 3D model retrieval efficiency.
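
    The baseline D2 descriptor that "Quick D2" improves upon can be sketched as follows; for brevity the sketch samples from a plain point set rather than from a triangulated surface, and all sizes are illustrative assumptions:

```python
import math
import random

def d2_descriptor(points, n_pairs=2000, n_bins=16, max_dist=2.0, seed=0):
    """Shape Distribution D2: a normalized histogram of Euclidean
    distances between randomly sampled pairs of points."""
    rng = random.Random(seed)
    hist = [0] * n_bins
    for _ in range(n_pairs):
        p, q = rng.sample(points, 2)
        bucket = min(int(math.dist(p, q) / max_dist * n_bins), n_bins - 1)
        hist[bucket] += 1
    return [h / n_pairs for h in hist]

def l1(a, b):
    """L1 dissimilarity between two D2 descriptors."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Point samples from a unit cube and from a flat square plate yield
# clearly different distance histograms.
rng = random.Random(42)
cube = [(rng.random(), rng.random(), rng.random()) for _ in range(300)]
plate = [(rng.random(), rng.random(), 0.0) for _ in range(300)]
```

Retrieval then amounts to comparing the query model's histogram against the repository's precomputed histograms, which is where the RBK index prunes the search space.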

  7. An analysis of brightness as a factor in visual discomfort caused by watching stereoscopic 3D video

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Woo; Kang, Hang-Bong

    2015-05-01

    Even though various studies have examined the factors that cause visual discomfort in watching stereoscopic 3D video, the brightness factor has not been dealt with sufficiently. In this paper, we analyze visual discomfort under various illumination conditions by considering eye-blinking rate and saccadic eye movement. In addition, we measure the perceived depth before and after watching 3D stereoscopic video by using our own 3D depth measurement instruments. Our test sequences consist of six illumination conditions for the background. The illumination is changed from bright to dark or vice versa, while the illumination of the foreground object is constant. Our test procedure is as follows: First, the subjects are rested until a baseline of no visual discomfort is established. Then, the subjects answer six questions to check their subjective pre-stimulus discomfort level. Next, we measure perceived depth for each subject, and the subjects watch 30-minute stereoscopic 3D or 2D video clips in random order. We measured eye-blinking and saccadic movements of the subjects using an eye-tracking device. Then, we measured perceived depth for each subject again to detect any changes in depth perception. We also checked the subjects' post-stimulus discomfort level, and measured the perceived depth after a 40-minute post-experiment resting period to measure recovery levels. After 40 minutes, most subjects returned to normal levels of depth perception. From our experiments, we found that eye-blinking rates were higher with a dark-to-light video progression than vice versa. Saccadic eye movements were lower with a dark-to-light video progression than vice versa.

  8. 3D UHDTV contents production with 2/3-inch sensor cameras

    NASA Astrophysics Data System (ADS)

    Hamacher, Alaric; Pardeshi, Sunil; Whangboo, Taeg-Keun; Kim, Sang-Il; Lee, Seung-Hyun

    2015-03-01

    Most UHDTV content is presently created using single large CMOS sensor cameras, as opposed to the 2/3-inch small sensor cameras that are the standard for HD content. The consequence is a technical incompatibility that affects not only the lenses and accessories of these cameras, but also the content creation process in 2D and 3D. While UHDTV is generally acclaimed for its superior image quality, the large sensors have introduced new constraints in the filming process. The camera sizes and lens dimensions have also introduced new obstacles to their use in 3D UHDTV production. The recent availability of UHDTV broadcast cameras with traditional 2/3-inch sensors can ease the transition towards UHDTV content creation. The following article evaluates the differences between large-sensor UHDTV cameras and the 2/3-inch 3-CMOS solution and addresses 3D-specific considerations, such as artifacts like chromatic aberration and diffraction that can occur when mixing HD and UHD equipment. The article further presents a workflow, with solutions, for shooting 3D UHDTV content on the basis of the Grass Valley LDX4K compact camera, the first available UHDTV camera with 2/3-inch UHDTV broadcast technology.

  9. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this progress technically, but the interaction of the third dimension with humans is not yet well understood. Previous research has found that any increased load on the visual system, such as prolonged TV watching, computer work, or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of a third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue caused by watching 2D and S3D content, showing the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave their answers via subjective evaluation. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  10. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV, and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  11. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is made possible by a 3D deformable organ model we have developed. The method consists of three processes in succession. The first is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels, and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory-phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerlines and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm in the image. The 3D model is then deformed according to the respiratory phase and fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred from the respiratory phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time is 5.4 ms during tracking.

  12. Evaluating the Role of Content in Subjective Video Quality Assessment

    PubMed Central

    Vrgovic, Petar

    2014-01-01

    Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It is dependent on many variables, one of them being the content of the video that is being evaluated. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly comprise sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include video content that might affect the judgment of evaluators when perceived video quality is in question. Our findings indicate that considerable differences exist between the two sets on selected factors, which leads us to conclude that videos featuring a different type of content than those currently employed might be more appropriate for VQA. PMID:24523643

  13. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users, supported by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic display for a better understanding of 3D anatomical structures.

  14. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become of interest for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers valuable groundwork for large-area building inspections. However, such thermal textures are useful for further analysis only if they are extracted geometrically correctly. This requires good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  15. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    Jacob, J. Augustin; Kumar, N. Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120
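The separable structure these FPGA designs exploit can be illustrated in software. The sketch below uses the exact floating-point DCT-II rather than the paper's integer approximation (the mapping of the integer set [10, 9, 6, 2, 3, 1, 1] onto the basis matrix is not spelled out in the abstract), but it shows how a 3D DCT factors into three passes of a 1D transform:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct3d(block):
    """Separable 3D DCT of a cubic block: one 1D transform per axis.
    An integer-DCT variant would swap dct_matrix for an integer basis."""
    C = dct_matrix(block.shape[0])
    return np.einsum('ai,bj,ck,ijk->abc', C, C, C, block)
```

The einsum applies the 1D basis along all three axes at once; a hardware pipeline would instead run three sequential 1D passes, which is where shorter integer coefficients save resources.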

  17. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-inputs-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and can thus interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851

  18. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on the screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax, and accommodation that coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  19. 3-D Computer Animation vs. Live-Action Video: Differences in Viewers' Response to Instructional Vignettes

    ERIC Educational Resources Information Center

    Smith, Dennie; McLaughlin, Tim; Brown, Irving

    2012-01-01

    This study explored computer animation vignettes as a replacement for live-action video scenarios of classroom behavior situations previously used as an instructional resource in teacher education courses in classroom management strategies. The focus of the research was to determine if the embedded behavioral information perceived in a live-action…

  20. High Content Imaging (HCI) on Miniaturized Three-Dimensional (3D) Cell Cultures.

    PubMed

    Joshi, Pranav; Lee, Moo-Yeal

    2015-12-01

    High content imaging (HCI) is a multiplexed cell staining assay developed for better understanding of complex biological functions and mechanisms of drug action, and it has become an important tool for toxicity and efficacy screening of drug candidates. Conventional HCI assays have been carried out on two-dimensional (2D) cell monolayer cultures, which in turn limit predictability of drug toxicity/efficacy in vivo; thus, there has been an urgent need to perform HCI assays on three-dimensional (3D) cell cultures. Although 3D cell cultures better mimic in vivo microenvironments of human tissues and provide an in-depth understanding of the morphological and functional features of tissues, they are also limited by relatively low throughput and thus are not amenable to high-throughput screening (HTS). One approach to making 3D cell culture amenable to HTS is to utilize miniaturized cell culture platforms. This review aims to highlight miniaturized 3D cell culture platforms compatible with current HCI technology. PMID:26694477

  2. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336

  3. A 3-D nonlinear recursive digital filter for video image processing

    NASA Technical Reports Server (NTRS)

    Bauer, P. H.; Qian, W.

    1991-01-01

    This paper introduces a recursive 3-D nonlinear digital filter, which is capable of performing noise suppression without degrading important image information such as edges in space or time. It also has the property of unnoticeable bandwidth reduction immediately after a scene change, which makes the filter an attractive preprocessor to many interframe compression algorithms. The filter consists of a nonlinear 2-D spatial subfilter and a 1-D temporal filter. In order to achieve the required computational speed and increase the flexibility of the filter, all of the linear shift-variant filter modules are of the IIR type.
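The abstract does not give the filter equations, but the flavor of a motion-adaptive temporal stage (the 1-D part of the cascade) can be sketched as follows. This is an illustrative stand-in, assuming a first-order recursion whose threshold nonlinearity protects temporal edges, not the paper's exact design:

```python
import numpy as np

def temporal_iir(frames, alpha=0.25, motion_thresh=20.0):
    """First-order temporal IIR smoothing with a simple nonlinearity:
    where a pixel changes strongly between frames (motion or a scene
    change), the recursion is bypassed so moving content is not smeared."""
    out = [frames[0].astype(float)]
    for f in frames[1:]:
        f = f.astype(float)
        prev = out[-1]
        diff = np.abs(f - prev)
        a = np.where(diff > motion_thresh, 1.0, alpha)  # bypass on motion
        out.append(a * f + (1 - a) * prev)
    return out
```

In the cascade described by the paper, a nonlinear 2-D spatial subfilter would run on each frame before this temporal stage.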

  4. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development, the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline-based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, in both Java and OpenGL, optimized for viewing 3D gene expression data will also be demonstrated.
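The projection approach reduces the comparison of expression patterns to comparing textures addressed by normalized UV coordinates. A minimal, hypothetical lookup illustrating why two differently sized embryos can share one mapping:

```python
import numpy as np

def sample_uv(texture, u, v):
    """Nearest-neighbour lookup of a UV-mapped expression texture.
    u, v in [0, 1] address the texture regardless of its pixel size,
    so expression values from models of different shape and resolution
    can be compared at the same normalized surface location."""
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]
```

Comparing two models at a surface point then amounts to calling `sample_uv` on both textures with the same (u, v).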

  5. Best Practices for Producing Video Content for Teacher Education

    ERIC Educational Resources Information Center

    Brunvand, Stein

    2010-01-01

    Through the use of Web 2.0 technologies the production and distribution of professional digital video content for use in teacher education has become more prevalent. As teachers look to learn from and interact with this video content, they need explicit support to help draw their attention to specific pedagogical strategies and reduce cognitive…

  6. Three factors that influence the overall quality of the stereoscopic 3D content: image quality, comfort, and realism

    NASA Astrophysics Data System (ADS)

    Vlad, Raluca; Ladret, Patricia; Guérin, Anne

    2013-01-01

    In today's context, where 3D content is more abundant than ever and its acceptance by the public is probably definitive, there are many discussions on controlling and improving 3D quality. But what does this notion represent precisely? How can it be formalized and standardized? How can it be correctly evaluated? A great number of studies have investigated these matters and many interesting approaches have been proposed. Despite this, no universal 3D quality model has been accepted so far that would allow a uniform across-studies assessment of the overall quality of 3D content as it is perceived by human observers. In this paper, we make a step forward in the development of a 3D quality model by presenting the results of an exploratory study in which we started from the premise that the overall perceived 3D quality is a multidimensional concept that can be explained by the physical characteristics of the 3D content. We investigated the spontaneous impressions of the participants while watching varied 3D content, analyzed the key notions that appeared in their discourse, and identified correlations between their judgments and the characteristics of our database. The test proved to be rich in results. Among its conclusions, we consider of highest importance the fact that we could determine three different perceptual attributes (image quality, comfort, and realism) that could constitute a first simplistic model for assessing perceived 3D quality.

  7. High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2011-03-01

    Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of the target registration error (TRE) in the video overlay compared to conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.

  8. Fast repurposing of high-resolution stereo video content for mobile use

    NASA Astrophysics Data System (ADS)

    Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas

    2012-06-01

    3D video content is captured and created mainly in high resolution, targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real-time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant, and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning a dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparity occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
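The core of the retargeting stage, deriving a scale that fits the source disparity range into the display's comfort zone, can be sketched as follows. This is a simplified illustration under the assumption that uniform rescaling scales screen disparities by the same factor; the paper's joint optimization of cropping and letterboxing is omitted:

```python
def retarget_scale(d_min, d_max, comfort_min, comfort_max):
    """Largest spatial scale that maps the measured disparity range
    [d_min, d_max] (d_min < 0 crossed, d_max > 0 uncrossed) into the
    display comfort zone [comfort_min, comfort_max]."""
    s = 1.0  # keep original size unless a comfort bound forces shrinking
    if d_max > 0:
        s = min(s, comfort_max / d_max)
    if d_min < 0:
        s = min(s, comfort_min / d_min)  # both negative -> positive ratio
    return s
```

For example, a source range of [-30, 50] px against a comfort zone of [-10, 20] px yields a scale of 1/3, constrained by the crossed-disparity bound.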

  9. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed tomography (CT) scans are widely used in the diagnosis of TBI. Nowadays, a large amount of TBI CT data is stacked in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the current study case. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system that works on TBI CT images. In this web-based system, the user queries by uploading CT image slices from one study, and the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
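The Jaccard-Needham measure on bin-based binary vectors is standard; how the per-slice scores combine into the paper's 3D score is not detailed in the abstract, so the best-match averaging below is an assumption for illustration:

```python
def jaccard(a, b):
    """Jaccard similarity of two binary feature vectors (0/1 lists)."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

def series_similarity(query_slices, case_slices):
    """Hypothetical 3D score between two CT series: each query slice is
    matched to its most similar slice in the candidate case, and the
    best-match scores are averaged."""
    return sum(max(jaccard(q, c) for c in case_slices)
               for q in query_slices) / len(query_slices)
```

Ranking candidate cases by `series_similarity` against the query series would then produce the retrieval list described above.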

  10. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance

    NASA Astrophysics Data System (ADS)

    Qiu, Jimmy; Hope, Andrew J.; Cho, B. C. John; Sharpe, Michael B.; Dickie, Colleen I.; DaCosta, Ralph S.; Jaffray, David A.; Weersink, Robert A.

    2012-10-01

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ˜2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. 
With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue
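The colorwash-overlay step described in this record can be sketched as straightforward alpha blending; the function, frame, and dose values below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def dose_colorwash_overlay(frame, dose, alpha_max=0.6):
    """Alpha-blend a dose colorwash onto an RGB endoscope frame.

    frame : (H, W, 3) floats in [0, 1], the real endoscopic image.
    dose  : (H, W) dose values already registered/projected onto the frame.
    """
    span = max(float(np.ptp(dose)), 1e-12)
    d = (dose - dose.min()) / span            # normalize dose to [0, 1]
    color = np.stack([d, np.zeros_like(d), 1.0 - d], axis=-1)  # blue -> red colorwash
    alpha = (alpha_max * d)[..., None]        # higher dose, more opaque overlay
    return (1.0 - alpha) * frame + alpha * color

# Hypothetical grey frame and a left-to-right dose gradient (Gy).
frame = np.full((4, 4, 3), 0.5)
dose = np.tile(np.linspace(0.0, 60.0, 4), (4, 1))
overlay = dose_colorwash_overlay(frame, dose)
```

Zero-dose pixels keep the original image, so anatomy remains visible where no dose information needs to be conveyed.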

  11. Fast phase-added stereogram algorithm for generation of photorealistic 3D content.

    PubMed

    Kang, Hoonjong; Stoykova, Elena; Yoshikawa, Hiroshi

    2016-01-20

    A new phase-added stereogram algorithm for accelerated computation of holograms from a point-cloud model is proposed. The algorithm relies on hologram segmentation, sampling of directional information, and use of the fast Fourier transform on a finer grid in the spatial frequency domain than is provided by the segment size. The algorithm gives improved reconstruction quality due to a new phase compensation introduced in the segment fringe patterns. The result is finer beam steering, leading to high peak intensity and a large peak signal-to-noise ratio in reconstruction. The feasibility of the algorithm is verified by the generation of 3D content for a color wavefront printer. PMID:26835945
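The "finer grid in the spatial frequency domain" can be illustrated with zero-padding: padding a segment before the FFT samples the same spectrum on a finer frequency grid, which is the basis of the finer beam steering mentioned above. A minimal numpy sketch (the segment data is arbitrary, not hologram data):

```python
import numpy as np

segment = np.random.default_rng(0).standard_normal(64)  # stand-in hologram segment

coarse = np.fft.fft(segment)        # native grid: frequency-bin spacing 1/64
fine = np.fft.fft(segment, n=256)   # zero-padded to 4x: same spectrum, spacing 1/256

coarse_spacing = 1.0 / 64
fine_spacing = 1.0 / 256
```

Every fourth sample of the padded transform coincides with the native transform, so zero-padding adds interpolated spectral samples between the original bins rather than changing them.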

  12. 3-D eye movement measurements on four Comex's divers using video CCD cameras, during high pressure diving.

    PubMed

    Guillemant, P; Ulmer, E; Freyss, G

    1995-01-01

    Previous studies have shown the vulnerability of the vestibular system to barotrauma (1), and deep diving may induce immediate neurological changes (2). These extreme conditions (high pressure, limited examination time, restricted space, hydrogen-oxygen mixture, communication difficulties, etc.) require adapted technology and a fast experimental procedure. We were able to solve these problems by developing a new system for on-line analysis of 3-D ocular movements by means of a video camera. This analyser uses image-processing and form-recognition software, which allows non-invasive calculation of eye movements at video frequency, including the torsional component. As this system is immediately ready for use, we were able to perform the following examinations in a maximum time of 8 min for each diver: oculomotor tests, including traditional automatic measurements of saccadic, slow and optokinetic movements; and vestibular tests for spontaneous and positional nystagmus, and nystagmus induced by the pendular test. For pendular-induced nystagmus we used appropriate head positions to stimulate the lateral and the posterior semicircular canals separately, and we measured the gain successively in visible light and in complete darkness. Recordings were made during a simulated onshore dive at an ambient pressure corresponding to a depth of 350 m. These examinations were complemented on the first and last days by caloric tests with the same video analyser. The results demonstrated perfect tolerance of the oculomotor and vestibular systems of these 4 divers, thus fulfilling the preventive conditions defined by Comex Co. We were able to overcome the limitations of low-cost PC operation and cameras (the need for adaptation to pressure, focusing difficulties and eye reflections from direct light exposure). We still obtained accurate on-line measurements, even of the torsional component of the eye movement. 
Due to this technological efficiency

  13. Creating 3D realistic head: from two orthogonal photos to multiview face contents

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Lin, Qian; Tang, Feng; Tang, Liang; Lim, Sukhwan; Wang, Shengjin

    2011-03-01

    3D head models have many applications, such as virtual conferencing, 3D web games, and so on. Several existing web-based face modeling solutions can create a 3D face model from one or two user-uploaded face images, but they are limited to generating a 3D model of the face region only. The accuracy of such reconstructions is very limited for side views, as well as for hair regions. The goal of our research is to develop a framework for reconstructing a realistic 3D human head from two approximately orthogonal views. Our framework takes two images and goes through segmentation, feature point detection, 3D bald head reconstruction, 3D hair reconstruction and texture mapping to create a 3D head model. The main contribution of the paper is that the processing steps are applied to the hair region as well as the face region.

  14. Soil water content variability in the 3D 'support-spacing-extent' space of scale metrics

    NASA Astrophysics Data System (ADS)

    Pachepsky, Yakov; Martinez, Gonzalo; Vereecken, Harry

    2014-05-01

    Knowledge of soil water content variability provides important insight into soil functioning and is essential in many applications. This variability is known to be scale-dependent, and divergent statements about the change of the variability magnitude with scale can be found in the literature. We undertook a systematic review to see how the definition of scale can affect conclusions about the scale-dependence of soil water content variability. Support, spacing, and extent are three metrics used to characterize scale in hydrology. Available data sets describe changes in soil moisture variability with changes in one or more of these scale metrics. We found six types of experiments with scale change. In data obtained without a change in extent, the scale change in some cases consisted of the simultaneous change of support and spacing. This was done with remote sensing data, and a power-law decrease in variance with increasing support was found. Datasets collected with different supports, or sample volumes, for the same extent and spacing showed a decrease of variance as the sample size increased. A variance increase was common when the scale change consisted of a change in spacing without changes in support and extent. An increase in variance with the extent of the study area was demonstrated with data tracking the evolution of variability with increasing size of the area under investigation (extent) without modification of support. The variance generally also increased with extent when the spacing was changed such that variability over areas of different sizes was studied with the same number of samples of equal support. Finally, there are remote sensing datasets that document a decrease in variability with a change in extent for a given support without modification of spacing. Overall, published information on the effect of scale on soil water content variability in the 3D space of scale metrics contained no qualitative contradictions.
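The reported decrease of variance with increasing support can be illustrated with a toy simulation: block-averaging a synthetic field over larger supports lowers the variance, roughly as 1/n for an uncorrelated field with n cells per support. (Real soil moisture fields are spatially correlated, so the actual power-law exponent differs; this is only a sketch of the mechanism.)

```python
import numpy as np

rng = np.random.default_rng(42)
field = rng.standard_normal((512, 512))  # synthetic, uncorrelated "moisture" field

def variance_at_support(field, block):
    """Variance of the field after averaging over block x block supports."""
    h, w = field.shape
    f = field[: h - h % block, : w - w % block]
    coarse = f.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return float(coarse.var())

v1 = variance_at_support(field, 1)    # point support
v4 = variance_at_support(field, 4)    # 16 cells per support
v16 = variance_at_support(field, 16)  # 256 cells per support
```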

  15. High-content 3D multicolor super-resolution localization microscopy.

    PubMed

    Pereira, Pedro M; Almada, Pedro; Henriques, Ricardo

    2015-01-01

    Super-resolution (SR) methodologies permit the visualization of cellular structures at near-molecular scale (1-30 nm), enabling novel mechanistic analysis of key events in cell biology not resolvable by conventional fluorescence imaging (∼300-nm resolution). When this level of detail is combined with computing power and fast, reliable analysis software, high-content screening using SR becomes a practical option to address multiple biological questions. The importance of combining these powerful analytical techniques cannot be ignored, as they can address phenotypic changes on the molecular scale and in a statistically robust manner. In this work, we suggest an easy-to-implement protocol that can be applied to set up a high-content 3D SR experiment with user-friendly and freely available software. The protocol can be divided into two main parts: chamber and sample preparation, where a protocol to set up a direct STORM (dSTORM) sample is presented; and a second part where a protocol for image acquisition and analysis is described. We intend to take the reader step-by-step through the experimental process, highlighting possible experimental bottlenecks and possible improvements based on recent developments in the field. PMID:25640426

  16. An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

    NASA Astrophysics Data System (ADS)

    Celli, Jonathan P.; Rizvi, Imran; Blanden, Adam R.; Massodi, Iqbal; Glidden, Michael D.; Pogue, Brian W.; Hasan, Tayyaba

    2014-01-01

    While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

  17. Arcade Video Games: Proxemic, Cognitive and Content Analyses.

    ERIC Educational Resources Information Center

    Braun, Claude M. J.; Giroux, Josette

    1989-01-01

    A study was designed to determine psychological complexity and reinforcement characteristics of popular arcade video games, including sex differences in game content, clientele social structure, human-to-human interaction contingencies, and value content. Results suggest a need for public control of children's access to the games and the video…

  18. Reading Function and Content Words in Subtitled Videos

    ERIC Educational Resources Information Center

    Krejtz, Izabela; Szarkowska, Agnieszka; Loginska, Maria

    2016-01-01

    In this study, we examined how function and content words are read in intra- and interlingual subtitles. We monitored eye movements of a group of 39 deaf, 27 hard of hearing, and 56 hearing Polish participants while they viewed English and Polish videos with Polish subtitles. We found that function words and short content words received less…

  19. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality-of-service (QoS) driven delivery models to customer-centred quality-of-experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference and no-reference models. Due to the need to have the original video available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and of the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively. 
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
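One plausible reading of the paper's area-weighted depth feature, sketched with hypothetical parsed coding-unit data (this is not the authors' code; a 64x64 CTU split to depth d yields CUs of size 64 >> d):

```python
# Hedged sketch: approximating spatial complexity from HEVC coding-unit split depths.
# Weighting each depth by the pixel area of its CU gives one per-CTU spatial feature.

def weighted_avg_cu_depth(cu_list):
    """cu_list: iterable of (depth, cu_size_in_pixels) tuples parsed from the bitstream."""
    total_area = sum(size * size for _, size in cu_list)
    weighted = sum(depth * size * size for depth, size in cu_list)
    return weighted / total_area

# One hypothetical 64x64 CTU: one 32x32 CU at depth 1 and twelve 16x16 CUs at depth 2.
cus = [(1, 32)] + [(2, 16)] * 12
spatial_feature = weighted_avg_cu_depth(cus)  # deeper splits -> higher spatial complexity
```

Since only split depths and mode decisions are needed, such a feature can be computed from high-level bitstream parsing alone, which is the efficiency argument the abstract makes.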

  20. 3D leaf water content mapping using terrestrial laser scanner backscatter intensity with radiometric correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Wang, Tiejun; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Niemann, K. Olaf

    2015-12-01

    Leaf water content (LWC) plays an important role in agriculture and forestry management. It can be used to assess drought conditions and wildfire susceptibility. Terrestrial laser scanner (TLS) data have been widely used in forested environments for retrieving geometrically-based biophysical parameters. Recent studies have also shown the potential of using radiometric information (backscatter intensity) for estimating LWC. However, the usefulness of backscatter intensity data has been limited by leaf surface characteristics, and incidence angle effects. To explore the idea of using LiDAR intensity data to assess LWC we normalized (for both angular effects and leaf surface properties) shortwave infrared TLS data (1550 nm). A reflectance model describing both diffuse and specular reflectance was applied to remove strong specular backscatter intensity at a perpendicular angle. Leaves with different surface properties were collected from eight broadleaf plant species for modeling the relationship between LWC and backscatter intensity. Reference reflectors (Spectralon from Labsphere, Inc.) were used to build a look-up table to compensate for incidence angle effects. Results showed that before removing the specular influences, there was no significant correlation (R2 = 0.01, P > 0.05) between the backscatter intensity at a perpendicular angle and LWC. After the removal of the specular influences, a significant correlation emerged (R2 = 0.74, P < 0.05). The agreement between measured and TLS-derived LWC demonstrated a significant reduction of RMSE (root mean square error, from 0.008 to 0.003 g/cm2) after correcting for the incidence angle effect. We show that it is possible to use TLS to estimate LWC for selected broadleaved plants with an R2 of 0.76 (significance level α = 0.05) at leaf level. Further investigations of leaf surface and internal structure will likely result in improvements of 3D LWC mapping for studying physiology and ecology in vegetation.
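The look-up-table incidence-angle correction can be sketched as normalizing each leaf return by the interpolated reference-panel (Spectralon) return at the same angle; all numbers below are illustrative, not the paper's calibration values:

```python
import numpy as np

# Hypothetical look-up table from a Spectralon panel: backscatter intensity
# recorded at several incidence angles (degrees).
ref_angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
ref_intensity = np.array([1.00, 0.96, 0.86, 0.70, 0.50])

def angle_corrected_intensity(i_leaf, angle_deg):
    """Normalize a leaf return by the interpolated reference return at the same angle."""
    i_ref = np.interp(angle_deg, ref_angles, ref_intensity)
    return i_leaf / i_ref

corrected = angle_corrected_intensity(0.43, 30.0)  # removes the angular falloff
```

After this normalization, remaining intensity differences between leaves can be related to water content rather than to viewing geometry.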

  1. Temporal monitoring and quantification of hydrothermal activity from photomosaics and 3D video reconstruction: The Lucky Strike hydrothermal field

    NASA Astrophysics Data System (ADS)

    Barreyre, T.; Escartin, J.; Cannat, M.; Garcia, R. A.

    2011-12-01

    Seafloor imagery provides detailed and accurate constraints on the distribution, geometry, and nature of hydrothermal outflow, and on its links to the ecosystems that it sustains. Repeated surveys allow us to evaluate the temporal variability of these systems. Geo-referenced and co-registered photomosaics of the Lucky Strike hydrothermal field (Mid-Atlantic Ridge, 37°N) were derived from >60,000 seafloor images acquired in 1996, 2006, 2008 and 2009 using deep-towed and ROV vehicles. Newly developed image-processing techniques, specifically tailored to generate giga-mosaics in the underwater environment, include correction of illumination artifacts and removal of the edges between individual images, so as to obtain a single continuous mosaic image over a surface of up to ~800x800 m with a pixel resolution of 5-10 mm. Photomosaicing is complemented by 3D reconstruction of hydrothermal edifices from video imagery, with mapping of image texture over the 3D model surface. These image and video data can also be directly linked with high-resolution microbathymetry acquired with near-bottom acoustic systems. Preliminary analysis of these mosaics reveals the distribution of low-temperature hydrothermal outflow, recognizable owing to its association with bacterial mats and hydrothermal deposits easily identifiable in the imagery. These low-temperature venting areas, often associated with high-temperature hydrothermal vents, are irregularly distributed throughout the site, defining clusters. In detail, the outflow geometry is largely controlled by the nature of the substrate (e.g., cracks and fissures, diffuse flow patches, existing hydrothermal constructs). The spatial relationships between high-temperature and diffuse venting, as revealed by the imagery, provide constraints on the shallow plumbing structure throughout the site. Imagery provides constraints on temporal variability at two time-scales. First, we can identify changes in the distribution and presence of actively venting

  2. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercials, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
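MYCIN's combination rule for two positive certainty factors is CF = CF1 + CF2(1 - CF1), so accumulating evidence pushes confidence toward 1 without exceeding it. A minimal sketch (class names and CF values are illustrative, not from the paper):

```python
# Hedged sketch of MYCIN-style evidence combination for uncertain rule conclusions.
# For two positive certainty factors: CF = CF1 + CF2 * (1 - CF1).

def combine_cf(cf1, cf2):
    """Combine two positive certainty factors, MYCIN style (order-independent)."""
    assert 0.0 <= cf1 <= 1.0 and 0.0 <= cf2 <= 1.0
    return cf1 + cf2 * (1.0 - cf1)

# Hypothetical rules: a motion rule suggests "basketball" with CF 0.6,
# and a color rule adds independent support with CF 0.5.
cf = combine_cf(0.6, 0.5)  # 0.6 + 0.5 * 0.4 = 0.8
```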

  3. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercials, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  4. Reading while Watching Video: The Effect of Video Content on Reading Comprehension and Media Multitasking Ability

    ERIC Educational Resources Information Center

    Lin, Lin; Lee, Jennifer; Robertson, Tip

    2011-01-01

    Media multitasking, or engaging in multiple media and tasks simultaneously, is becoming an increasingly popular phenomenon with the development and engagement in social media. This study examines to what extent video content affects students' reading comprehension in media multitasking environments. One hundred and thirty university students were…

  5. An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

    PubMed Central

    Celli, Jonathan P.; Rizvi, Imran; Blanden, Adam R.; Massodi, Iqbal; Glidden, Michael D.; Pogue, Brian W.; Hasan, Tayyaba

    2014-01-01

    While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening. PMID:24435043

  6. Content-based video indexing and searching with wavelet transformation

    NASA Astrophysics Data System (ADS)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law-enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law-enforcement agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person, even if their associated personal data are inconsistent. We assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database recovers a significant proportion of that individual's biometric data.

  7. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white-laser-beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.
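The color half of such shape-and-color retrieval can be sketched with normalized color histograms compared by histogram intersection (shape descriptors omitted; all object data below is synthetic):

```python
import numpy as np

def color_histogram(pixels, bins=4):
    """pixels: (N, 3) RGB array -> flattened, normalized 3D color histogram."""
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(7)
query = rng.integers(0, 256, (500, 3))          # object shown by the customer
near_dup = np.clip(query + 5, 0, 255)           # slightly re-lit copy of the query
different = rng.integers(0, 128, (500, 3))      # object with a different palette
scores = [similarity(color_histogram(query), color_histogram(c))
          for c in (near_dup, different)]       # rank inventory by score
```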

  8. A Usability Survey of a Contents-Based Video Retrieval System by Combining Digital Video and an Electronic Bulletin Board

    ERIC Educational Resources Information Center

    Haga, Hirohide; Kaneda, Shigeo

    2005-01-01

    This article describes the survey of the usability of a novel content-based video retrieval system. This system combines video streaming and an electronic bulletin board system (BBS). Comments submitted to the BBS are used to index video data. Following the development of the prototype system an experimental survey with ten subjects was performed.…

  9. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  10. Evaluation of vision training using 3D play game

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the vision-training effect of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, scales used to measure human visual performance, are important factors for comfortable vision in everyday life. This study was conducted on 30 participants in their 20s through 30s (19 males and 11 females, aged 24.53 ± 2.94 years) who could watch 3D video images and play the 3D game. Their accommodative and vergence facilities were measured before and after they played the 2D and 3D games. Accommodative facility improved after playing both the 2D and 3D games, and improved more immediately after the 3D game than after the 2D game. Likewise, vergence facility improved after both games, and improved more soon after the 3D game than after the 2D game. In addition, accommodative facility improved to a greater extent than vergence facility. While human-factors studies have so far focused on the adverse effects of 3D content, such as the imbalance between visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D content by utilizing its visual benefit for vision training.

  11. 3D high-content screening for the identification of compounds that target cells in dormant tumor spheroid regions

    SciTech Connect

    Wenzel, Carsten; Riefke, Björn; Gründemann, Stephan; Krebs, Alice; Christian, Sven; Prinz, Florian; Osterland, Marc; Golfier, Sven; Räse, Sebastian; Ansari, Nariman; Esner, Milan; Bickle, Marc; Pampaloni, Francesco; Mattheyer, Christian; Stelzer, Ernst H.; Parczyk, Karsten; Prechtl, Stefan; Steigemann, Patrick

    2014-04-15

    Cancer cells in poorly vascularized tumor regions need to adapt to an unfavorable metabolic microenvironment. As distance from supplying blood vessels increases, oxygen and nutrient concentrations decrease and cancer cells react by stopping cell cycle progression and becoming dormant. As cytostatic drugs mainly target proliferating cells, cancer cell dormancy is considered as a major resistance mechanism to this class of anti-cancer drugs. Therefore, substances that target cancer cells in poorly vascularized tumor regions have the potential to enhance cytostatic-based chemotherapy of solid tumors. With three-dimensional growth conditions, multicellular tumor spheroids (MCTS) reproduce several parameters of the tumor microenvironment, including oxygen and nutrient gradients as well as the development of dormant tumor regions. We here report the setup of a 3D cell culture compatible high-content screening system and the identification of nine substances from two commercially available drug libraries that specifically target cells in inner MCTS core regions, while cells in outer MCTS regions or in 2D cell culture remain unaffected. We elucidated the mode of action of the identified compounds as inhibitors of the respiratory chain and show that induction of cell death in inner MCTS core regions critically depends on extracellular glucose concentrations. Finally, combinational treatment with cytostatics showed increased induction of cell death in MCTS. The data presented here shows for the first time a high-content based screening setup on 3D tumor spheroids for the identification of substances that specifically induce cell death in inner tumor spheroid core regions. This validates the approach to use 3D cell culture screening systems to identify substances that would not be detectable by 2D based screening in otherwise similar culture conditions. - Highlights: • Establishment of a novel method for 3D cell culture based high-content screening. 
• First reported high-content

  12. Evaluation of Structure from Motion Software to Create 3D Models of Late Nineteenth Century Great Lakes Shipwrecks Using Archived Diver-Acquired Video Surveys

    NASA Astrophysics Data System (ADS)

    Mertes, J.; Thomsen, T.; Gulley, J.

    2014-12-01

    Here we demonstrate the ability to use archived video surveys to create photorealistic 3D models of submerged archaeological sites. We created 3D models of two nineteenth-century Great Lakes shipwrecks using diver-acquired video surveys and Structure from Motion (SfM) software. Models were georeferenced using archived hand-survey data. Comparison of hand-survey measurements and digital measurements made using the models demonstrates that spatial analysis produces results with reasonable accuracy when wreck maps are available. Error associated with digital measurements displayed an inverse relationship to object size: measurement error ranged from a maximum of 18% (on a 0.37 m object) to a minimum of 0.56% (on a 4.21 m object). Our results demonstrate that SfM can generate models of large maritime archaeological sites for research, education and outreach purposes. Where site maps are available, these 3D models can be georeferenced to allow additional spatial analysis long after on-site data collection.
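The inverse relationship between percentage error and object size is what one would expect if the absolute model error is roughly size-independent; a quick arithmetic check with the two reported figures (18% on 0.37 m, 0.56% on 4.21 m):

```python
# Both reported percentage errors imply absolute errors of a few centimetres,
# consistent with a roughly size-independent reconstruction error dominating
# the percentage figures. (Illustrative arithmetic, not the paper's analysis.)

def percent_error(measured, true):
    return abs(measured - true) / true * 100.0

abs_err_small = 0.18 * 0.37    # ~0.067 m on the small object
abs_err_large = 0.0056 * 4.21  # ~0.024 m on the large object
```

A fixed absolute error of, say, 5 cm yields a far larger percentage on a 0.37 m object than on a 4.21 m one, which reproduces the trend the authors report.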

  13. Making Web3D Less Scary: Toward Easy-to-Use Web3D e-Learning Content Development Tools for Educators

    ERIC Educational Resources Information Center

    de Byl, Penny

    2009-01-01

    Penny de Byl argues that one of the biggest challenges facing educators today is the integration of rich and immersive three-dimensional environments with existing teaching and learning materials. To empower educators with the ability to embrace emerging Web3D technologies, the Advanced Learning and Immersive Virtual Environment (ALIVE) research…

  14. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet most poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a point (surface) measurement to an areal (satellite) measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.

  15. Interaction and behaviour imaging: a novel method to measure mother-infant interaction using video 3D reconstruction.

    PubMed

    Leclère, C; Avril, M; Viaux-Savelon, S; Bodeau, N; Achard, C; Missonnier, S; Keren, M; Feldman, R; Chetouani, M; Cohen, D

    2016-01-01

    Studying early interaction is essential for understanding development and psychopathology. Automatic computational methods offer the possibility to analyse social signals and behaviours of several partners simultaneously and dynamically. Here, 20 dyads of mothers and their 13-36-month-old infants (10 extremely high-risk and 10 low-risk dyads) were videotaped during mother-infant interaction using two-dimensional (2D) and three-dimensional (3D) sensors. From 2D+3D data and 3D space reconstruction, we extracted individual parameters (quantity of movement and motion activity ratio for each partner) and dyadic parameters related to the dynamics of the partners' head distance (contribution to head distance), to the focus of mutual engagement (percentage of time spent face to face or oriented to the task) and to the dynamics of motion activity (synchrony ratio, overlap ratio, pause ratio). Features were compared with blind global ratings of the interaction using the coding interactive behavior (CIB). We found that individual and dyadic parameters of 2D+3D motion features perfectly correlate with rated CIB maternal and dyadic composite scores. Support Vector Machine classification using all 2D-3D motion features classified 100% of the dyads into their group, meaning that motion behaviours are sufficient to distinguish high-risk from low-risk dyads. The proposed method may present a promising, low-cost methodology that can uniquely use artificial technology to detect meaningful features of human interactions and may have several implications for studying dyadic behaviours in psychiatry. Combining both global rating scales and computerized methods may enable a continuum of time scales, from a summary of entire interactions to second-by-second dynamics. PMID:27219342
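
    As a rough illustration of the dyadic motion parameters named in the abstract, the sketch below computes a motion activity ratio and an overlap ratio from per-frame movement flags. The abstract does not give the authors' exact definitions, so these formulas are plausible stand-ins (activity ratio = fraction of frames a partner moves; overlap ratio = fraction of frames both partners move simultaneously), and the flag sequences are invented.

    ```python
    def activity_ratio(flags):
        """Fraction of frames in which one partner is moving (0/1 flags)."""
        return sum(flags) / len(flags)

    def overlap_ratio(mother, infant):
        """Fraction of frames in which both partners move at the same time."""
        both = sum(1 for m, i in zip(mother, infant) if m and i)
        return both / len(mother)

    # Hypothetical per-frame movement flags for an 8-frame window.
    mother = [1, 1, 0, 1, 0, 0, 1, 1]
    infant = [0, 1, 1, 1, 0, 1, 1, 0]

    print(activity_ratio(mother))          # 0.625
    print(overlap_ratio(mother, infant))   # 0.375
    ```

    Feature vectors built from such ratios (per dyad, per session) are what a classifier like the SVM mentioned above would consume.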

  16. Real time planning, guidance and validation of surgical acts using 3D segmentations, augmented reality projections and surgical tools video tracking

    NASA Astrophysics Data System (ADS)

    Osorio, Angel; Galan, Juan-Antonio; Nauroy, Julien; Donars, Patricia

    2010-02-01

    When performing laparoscopies and punctures, precise anatomical localization is required. Current techniques very often rely on a mapping between the real situation and preoperative images. The PC-based software we present performs 3D segmentation of regions of interest from CT or MR slices. It allows the planning of puncture or trocar insertion trajectories, taking anatomical constraints into account. Geometrical transformations allow the projection over the patient's body of realistically reconstructed organ and lesion shapes, using a standard video projector in the operating room. We developed specific image-processing software which automatically segments and registers images from a webcam used in the operating room to give feedback to the user.

  17. Reading Function and Content Words in Subtitled Videos.

    PubMed

    Krejtz, Izabela; Szarkowska, Agnieszka; Łogińska, Maria

    2016-04-01

    In this study, we examined how function and content words are read in intra- and interlingual subtitles. We monitored eye movements of a group of 39 deaf, 27 hard of hearing, and 56 hearing Polish participants while they viewed English and Polish videos with Polish subtitles. We found that function words and short content words received less visual attention than longer content words, which was reflected in shorter dwell time, a lower number of fixations, shorter first fixation duration, and a lower subject hit count. Deaf participants dwelled significantly longer on function words than other participants, which may be an indication of their difficulty in processing this type of word. The findings are discussed in the context of classical reading research and applied research on subtitling. PMID:26681268

  18. Video contents authoring system for efficient consumption on portable multimedia device

    NASA Astrophysics Data System (ADS)

    Min, Hyun-Seok; Jin, Sung Ho; Lee, Young Bok; Ro, Yong Man

    2008-02-01

    In a mobile consumption environment, users not only desire to preview video contents with highlights, but also to consume attractive segments of the video rather than the whole video. Thus, a condensed representation of video content that can represent the whole video and its structure is demanded. In this paper, we propose a video content authoring system allowing content authors to filter the video structure and to compose contents and metadata efficiently and effectively. The proposed authoring system consists of two modules: a video analyzer and a metadata generator. The video analyzer detects shot boundaries and scenes and establishes temporal segmentation metadata including shot and scene boundary information. The shot detection adopts adaptive thresholding with multiple windows of different sizes to segment the raw video into shots. The segmented shots are grouped and merged depending on the similarity between adjacent shots. To minimize the time consumed by shot clustering, we apply a span, defined as an aggregation of successive shots, as the computation unit. The metadata generator allows authors to edit the video metadata in addition to the temporal segmentation metadata detected by the video analyzer. The video metadata supports hierarchical representation of individual shots and scenes.
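
    A minimal sketch of shot-boundary detection by adaptive thresholding, in the spirit of the analyzer described above. Here a boundary is flagged when a frame-to-frame histogram difference exceeds the mean plus k standard deviations of its neighbors in a sliding window; this is one common adaptive scheme, not necessarily the authors' exact rule, and the difference values are synthetic stand-ins for real decoded video.

    ```python
    import statistics

    def hist_diff(h1, h2):
        """L1 distance between two normalized frame histograms."""
        return sum(abs(a - b) for a, b in zip(h1, h2))

    def detect_shots(diffs, window=5, k=3.0):
        """Flag frame indices whose difference exceeds a locally adaptive
        threshold (mean + k * population std dev over a sliding window)."""
        boundaries = []
        for i, d in enumerate(diffs):
            lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
            local = [diffs[j] for j in range(lo, hi) if j != i]
            thresh = statistics.mean(local) + k * statistics.pstdev(local)
            if d > thresh:
                boundaries.append(i)
        return boundaries

    # Ten quiet frame transitions with one abrupt cut at index 6 (synthetic).
    diffs = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.90, 0.03, 0.02, 0.03]
    print(detect_shots(diffs))  # [6]
    ```

    The adaptive threshold is what lets the same detector work across windows with different baseline motion levels, which fixed-threshold schemes handle poorly.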

  19. 3D-Reconstruction of recent volcanic activity from ROV-video, Charles Darwin Seamounts, Cape Verdes

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, T.; Hansteen, T. H.; Kutterolf, S.; Freundt, A.; Devey, C. W.

    2011-12-01

    As well as providing well-localized samples, Remotely Operated Vehicles (ROVs) produce huge quantities of visual data whose potential for geological data mining has seldom if ever been fully realized. We present a new workflow to derive essential results of field geology such as quantitative stratigraphy and tectonic surveying from ROV-based photo and video material. We demonstrate the procedure on the Charles Darwin Seamounts, a field of small hot spot volcanoes recently identified at a depth of ca. 3500m southwest of the island of Santo Antao in the Cape Verdes. The Charles Darwin Seamounts feature a wide spectrum of volcanic edifices with forms suggestive of scoria cones, lava domes, tuff rings and maar-type depressions, all of comparable dimensions. These forms, coupled with the highly fragmented volcaniclastic samples recovered by dredging, motivated surveying parts of some edifices down to centimeter scale. ROV-based surveys yielded volcaniclastic samples of key structures linked by extensive coverage of stereoscopic photographs and high-resolution video. Based upon the latter, we present our workflow to derive three-dimensional models of outcrops from a single-camera video sequence, allowing quantitative measurements of fault orientation, bedding structure, grain size distribution and photo mosaicking within a geo-referenced framework. With this information we can identify episodes of repetitive eruptive activity at individual volcanic centers and see changes in eruptive style over time, which is highly variable despite the centers' proximity to each other.

  20. Effects of scene content and layout on the perceived light direction in 3D spaces.

    PubMed

    Xia, Ling; Pont, Sylvia C; Heynderickx, Ingrid

    2016-08-01

    The lighting and furnishing of an interior space (i.e., the reflectance of its materials, the geometries of the furnishings, and their arrangement) determine the appearance of this space. Conversely, human observers infer lighting properties from the space's appearance. We conducted two psychophysical experiments using real scenes to investigate how the perception of the light direction is influenced by a scene's objects and their layout. In the first experiment, we confirmed that the shape of the objects in the scene and the scene layout influence the perceived light direction. In the second experiment, we systematically investigated how specific shape properties influenced the estimation of the light direction. The results showed that increasing the number of visible faces of an object, ultimately using globally spherical shapes in the scene, supported the veridicality of the estimated light direction. Furthermore, symmetric arrangements in the scene improved the estimation of the tilt direction. Thus, studies of the human perception of light should integrally consider materials, scene content, and layout. PMID:27548091

  1. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.

  2. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between face standard landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position the real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation the extracted foreground is placed in front of the background image that was captured at the initial position. So the constructed full view of the initial position combined with the view of the secondary (current) position, form the complete binocular pairs during real-time video shooting. The subjective evaluation results present a competent depth perception quality through the proposed system.

  3. Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Hu; Wu, Ja-Ling

    With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert user-selected virtual content into personal videos in a less-intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual contents into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of the message conveyed by the virtual content but also as the environment in which the lifelike virtual contents live. Thus, the inserted virtual content will be affected by the videos to trigger a series of artificial evolutions and evolve its appearances and behaviors while interacting with video contents. By inserting virtual contents into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it would bring a new opportunity to increase the advertising revenue for video assets of the media industry and online video-sharing websites.

  4. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems

    NASA Astrophysics Data System (ADS)

    Xu, Huihui; Jiang, Mingyan

    2015-07-01

    Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor of the camera, is also suitable for a single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.

  5. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents

    PubMed Central

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C. M. E.; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11–15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the “at-risk” cut-off on the Spence Children Anxiety Survey were eligible. Adolescents’ anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents’ anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants’ expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues. PMID:26816292

  7. Fully Automated One-Step Production of Functional 3D Tumor Spheroids for High-Content Screening.

    PubMed

    Monjaret, François; Fernandes, Mathieu; Duchemin-Pelletier, Eve; Argento, Amelie; Degot, Sébastien; Young, Joanne

    2016-04-01

    Adoption of spheroids within high-content screening (HCS) has lagged behind high-throughput screening (HTS) due to issues with running complex assays on large three-dimensional (3D) structures. To enable multiplexed imaging and analysis of spheroids, different cancer cell lines were grown in 3D on micropatterned 96-well plates with automated production of nine uniform spheroids per well. Spheroids achieve diameters of up to 600 µm, and reproducibility was experimentally validated (interwell and interplate CV(diameter) <5%). Biphoton imaging confirmed that micropatterned spheroids exhibit characteristic cell heterogeneity with distinct microregions. Furthermore, central necrosis appears at a consistent spheroid size, suggesting standardized growth. Using three reference compounds (fluorouracil, irinotecan, and staurosporine), we validated HT-29 micropatterned spheroids on an HCS platform, benchmarking against hanging-drop spheroids. Spheroid formation and imaging in a single plate accelerate assay workflow, and fixed positioning prevents structures from overlapping or sticking to the well wall, augmenting image processing reliability. Furthermore, multiple spheroids per well increase the statistical confidence sufficiently to discriminate compound mechanisms of action and generate EC50 values for endpoints of cell death, architectural change, and size within a single-pass read. Higher quality data and a more efficient HCS work chain should encourage integration of micropatterned spheroid models within fundamental research and drug discovery applications. PMID:26385905
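
    The reported interwell/interplate CV(diameter) < 5% is a standard coefficient-of-variation check on replicate measurements. A minimal sketch, with made-up per-well diameters rather than the study's data:

    ```python
    import statistics

    def cv_percent(values):
        """Coefficient of variation: sample standard deviation / mean, in percent."""
        return statistics.stdev(values) / statistics.mean(values) * 100.0

    # Hypothetical spheroid diameters (µm) measured in six wells of one plate.
    diameters_um = [580, 595, 600, 610, 590, 605]
    print(f"interwell CV = {cv_percent(diameters_um):.2f} %")  # 1.81 %
    ```

    A CV under 5% across wells and plates is what justifies treating each micropatterned spheroid as an interchangeable replicate in downstream dose-response statistics.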

  8. Virtual muscularity: a content analysis of male video game characters.

    PubMed

    Martins, Nicole; Williams, Dmitri C; Ratan, Rabindra A; Harrison, Kristen

    2011-01-01

    The 150 top-selling video games were content analyzed to study representations of male bodies. Human males in the games were captured via screenshot and body parts measured. These measurements were then compared to anthropometric data drawn from a representative sample of 1120 North American men. Characters at high levels of photorealism were larger than the average American male, but these characters did not mirror the V-shaped ideal found in mainstream media. Characters at low levels of photorealism were also larger than the average American male, but these characters were so much larger that they appeared cartoonish. Idealized male characters were more likely to be found in games for children than in games for adults. Implications for cultivation theory are discussed. PMID:21093394

  9. Mini-pillar array for hydrogel-supported 3D culture and high-content histologic analysis of human tumor spheroids.

    PubMed

    Kang, Jihoon; Lee, Dong Woo; Hwang, Hyun Ju; Yeon, Sang-Eun; Lee, Moo-Yeal; Kuh, Hyo-Jeong

    2016-06-21

    Three-dimensional (3D) cancer cell culture models mimic the complex 3D organization and microenvironment of human solid tumor tissue and are thus considered as highly predictive models representing avascular tumor regions. Confocal laser scanning microscopy is useful for monitoring drug penetration and therapeutic responses in 3D tumor models; however, photonic attenuation at increasing imaging depths and limited penetration of common fluorescence tracers are significant technical challenges to imaging. Immunohistological staining would be a good alternative, but the preparation of tissue sections from rather fragile spheroids through fixing and embedding procedures is challenging. Here we introduce a novel 3 × 3 mini-pillar array chip that can be utilized for 3D cell culturing and sectioning for high-content histologic analysis. The mini-pillar array chip facilitated the generation of 3D spheroids of human cancer cells within hydrogels such as alginate, collagen, and Matrigel. As expected, visualization of the 3D distribution of calcein AM and doxorubicin by optical sectioning was limited by photonic attenuation and dye penetration. The integrity of the 3D microtissue section was confirmed by immunostaining on paraffin sections and cryo-sections. The applicability of the mini-pillar array for drug activity evaluation was tested by measuring viability changes in spheroids exposed to anti-cancer agents, 5-fluorouracil and tirapazamine. Thus, our novel mini-pillar array platform can potentially promote high-content histologic analysis of 3D cultures and can be further optimized for field-specific needs. PMID:27194205

  10. An Automatic Multimedia Content Summarization System for Video Recommendation

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh

    2009-01-01

    In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…

  11. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  12. The ATLAS3D project - XIX. The hot gas content of early-type galaxies: fast versus slow rotators

    NASA Astrophysics Data System (ADS)

    Sarzi, Marc; Alatalo, Katherine; Blitz, Leo; Bois, Maxime; Bournaud, Frédéric; Bureau, Martin; Cappellari, Michele; Crocker, Alison; Davies, Roger L.; Davis, Timothy A.; de Zeeuw, P. T.; Duc, Pierre-Alain; Emsellem, Eric; Khochfar, Sadegh; Krajnović, Davor; Kuntschner, Harald; Lablanche, Pierre-Yves; McDermid, Richard M.; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Scott, Nicholas; Serra, Paolo; Young, Lisa M.; Weijmans, Anne-Marie

    2013-07-01

    For early-type galaxies, the ability to sustain a corona of hot, X-ray-emitting gas could have played a key role in quenching their star formation history. A halo of hot gas may act as an effective shield against the acquisition of cold gas and can quickly absorb stellar mass loss material. Yet, since the discovery by the Einstein Observatory of such X-ray haloes around early-type galaxies, the precise amount of hot gas around these galaxies still remains a matter of debate. By combining homogeneously derived photometric and spectroscopic measurements for the early-type galaxies observed as part of the ATLAS3D integral field survey with measurements of their X-ray luminosity based on X-ray data of both low and high spatial resolution (for 47 and 19 objects, respectively) we conclude that the hot gas content of early-type galaxies can depend on their dynamical structure. Specifically, whereas slow rotators generally have X-ray haloes with luminosity L_X,gas and temperature T values that are well in line with what is expected if the hot gas emission is sustained by the thermalization of the kinetic energy carried by the stellar mass loss material, fast rotators tend to display L_X,gas values that fall consistently below the prediction of this model, with similar T values that do not scale with the stellar kinetic energy (traced by the stellar velocity dispersion) as observed in the case of slow rotators. Such a discrepancy between the hot gas content of slow and fast rotators would appear to reduce, or even disappear, for large values of the dynamical mass (above ~3 × 10^11 M⊙), with younger fast rotators displaying also somewhat larger L_X,gas values possibly owing to the additional energy input from recent supernovae explosions. Considering that fast rotators are likely to be intrinsically flatter than slow rotators, and that the few L_X,gas-deficient slow rotators also happen to be relatively flat, the observed L_X,gas deficiency in these objects would support

  13. Video Production as an Instructional Strategy: Content Learning and Teacher Practice

    ERIC Educational Resources Information Center

    Norton, Priscilla; Hathaway, Dawn

    2010-01-01

    This study examined teacher-learners' reflections about the use of video production in their K-12 classrooms for evidence of content learning, the factors facilitating teacher use of video production, and the challenges teachers reported. Findings demonstrated positive content learning outcomes as measured by objective tests, rubrics, and…

  14. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos

    PubMed Central

    2016-01-01

    Background The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos’ overall presence on the platform. Objective To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform’s impact on consumer attitudes and behaviors and inform regulations. Methods Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. Results As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. Conclusions YouTube is a major

  15. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys - particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor intensive using single-channel systems and therefore such surveys are often only performed at a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process these data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall
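The CMP velocity-to-water-content workflow sketched above can be illustrated as follows. This is a minimal sketch assuming a single clean reflection hyperbola; Topp's (1980) empirical equation is one common velocity-to-water-content transform, not necessarily the one used in the talk:

```python
import numpy as np

C = 0.3  # speed of light in vacuum, m/ns

def nmo_velocity(offsets_m, times_ns):
    """Estimate radar velocity (m/ns) from a CMP reflection hyperbola
    by least-squares fitting t^2 = t0^2 + x^2 / v^2."""
    slope, _ = np.polyfit(np.asarray(offsets_m) ** 2,
                          np.asarray(times_ns) ** 2, 1)
    return 1.0 / np.sqrt(slope)

def topp_water_content(v):
    """Volumetric water content from velocity via Topp's (1980)
    empirical relation, with apparent permittivity K = (c/v)^2."""
    K = (C / v) ** 2
    return -5.3e-2 + 2.92e-2 * K - 5.5e-4 * K ** 2 + 4.3e-6 * K ** 3

# synthetic CMP gather: reflector at t0 = 40 ns, true velocity 0.1 m/ns
x = np.linspace(0.5, 4.0, 8)
t = np.sqrt(40.0 ** 2 + (x / 0.1) ** 2)
v_est = nmo_velocity(x, t)
theta = topp_water_content(v_est)
```

With a multi-channel array, this fit can be repeated at every transect position to produce the centimeter-scale velocity (and hence water content) maps mentioned above.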

  16. Content Area Vocabulary Videos in Multiple Contexts: A Pedagogical Tool

    ERIC Educational Resources Information Center

    Webb, C. Lorraine; Kapavik, Robin Robinson

    2015-01-01

    The authors challenged pre-service teachers to digitally define a social studies or mathematical vocabulary term in multiple contexts using a digital video camera. The researchers sought to answer the following questions: 1. How will creating a video for instruction affect pre-service teachers' attitudes about teaching with technology, if at…

  17. 3D full-waveform inversion of time-lapse horizontal borehole GPR data to map soil water content variability

    NASA Astrophysics Data System (ADS)

    Klotzsche, A.; Van Der Kruk, J.; Oberroehrmann, M.; Vanderborght, J.; Vereecken, H.

    2015-12-01

    Soil moisture is a key state variable that controls water and mass fluxes in soil-plant systems and is variable in space and time. Over recent years, hydrogeophysical methods such as ground penetrating radar (GPR) have been used to determine electromagnetic properties as proxies for soil water content (SWC). Here, we combined zero-offset-profile (ZOP) GPR measurements within multiple horizontal minirhizotubes at different depths to determine the spatial and temporal variability of SWC under a winter wheat stand at the Selhausen test site (Germany). We studied spatio-temporal variations of SWC under three different treatments: rainfed, irrigated and sheltered. We acquired 15 time-lapse ZOP GPR datasets during the growing season of the wheat in the rhizotron facility, using horizontal boreholes with a separation of 0.75 m and a length of 6 m at six depths between 0.1-1.2 m. The obtained radar velocities were converted to SWC using the 4-phase volumetric complex refractive index model. SWC values obtained using standard ray-based processing methods were not reliable close to the surface (0.1-0.2 m depth) because of the interference of the critically refracted air wave and the direct wave through the subsurface. Therefore, we implemented a full-waveform inversion that uses accurate 3D forward modeling with gprMax and incorporates the air and soil interactions. The shuffled complex evolution (SCE) method allowed us to retrieve quantitative medium properties that explained the measured data with an R² of at least 0.95, and improved SWC estimates at all depths. The final SWC distributions for wet and dry conditions showed that the vertical variability is significantly larger than the lateral variability, owing to the strong influence of precipitation and irrigation events.
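The velocity-to-SWC conversion mentioned above can be illustrated with a simplified 3-phase CRIM mixing model (the study used a 4-phase formulation; porosity and permittivity values below are illustrative, not site-calibrated):

```python
import numpy as np

def crim_swc(v, porosity=0.35, eps_grain=5.0, eps_water=81.0):
    """Soil water content theta from GPR velocity v (m/ns) via a
    3-phase CRIM mixing model:
      sqrt(eps_bulk) = theta*sqrt(eps_water)
                     + (1 - porosity)*sqrt(eps_grain)
                     + (porosity - theta)*sqrt(eps_air)
    with eps_air = 1, solved here for theta."""
    eps_bulk = (0.3 / v) ** 2  # c = 0.3 m/ns
    return ((np.sqrt(eps_bulk)
             - (1.0 - porosity) * np.sqrt(eps_grain)
             - porosity)
            / (np.sqrt(eps_water) - 1.0))
```

With these parameters a velocity of 0.1 m/ns maps to theta of roughly 0.15; slower waves (wetter soil) give larger theta, as expected.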

  18. A System for True and False Memory Prediction Based on 2D and 3D Educational Contents and EEG Brain Signals.

    PubMed

    Bamatraf, Saeed; Hussain, Muhammad; Aboalsamh, Hatim; Qazi, Emad-Ul-Haq; Malik, Amir Saeed; Amin, Hafeez Ullah; Mathkour, Hassan; Muhammad, Ghulam; Imran, Hafiz Muhammad

    2016-01-01

    We studied the impact of 2D and 3D educational contents on learning and memory recall using electroencephalography (EEG) brain signals. For this purpose, we adopted a classification approach that predicts true and false memories for both short-term memory (STM) and long-term memory (LTM) and helps to decide whether there is a difference between the impact of 2D and 3D educational contents. In this approach, EEG brain signals are converted into topomaps, discriminative features are extracted from them, and finally a support vector machine (SVM) is employed to predict brain states. For data collection, half of sixty-eight healthy individuals watched the learning material in 2D format whereas the rest watched the same material in 3D format. After the learning task, memory recall tasks were performed after 30 minutes (STM) and two months (LTM), and EEG signals were recorded. For STM, a prediction accuracy of 97.5% was achieved for 3D and 96.6% for 2D; for LTM, it was 100% for both 2D and 3D. The statistical analysis of the results suggested that 2D and 3D materials differ little in their impact on learning and memory recall, for both STM and LTM. PMID:26819593
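The final SVM stage of the pipeline can be sketched as follows, with synthetic features standing in for the topomap descriptors (a minimal linear SVM trained on the hinge loss; the paper's feature extraction and kernel choice are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for topomap-derived feature vectors: two classes
# ("true memory" vs "false memory") with shifted means.
X = np.vstack([rng.normal(-1.0, 1.0, (40, 16)),
               rng.normal(+1.0, 1.0, (40, 16))])
y = np.repeat([-1.0, 1.0], 40)

# Minimal linear SVM: subgradient descent on the regularized hinge loss.
w, b, lam, lr = np.zeros(16), 0.0, 1e-3, 0.01
for _ in range(500):
    viol = y * (X @ w + b) < 1.0          # margin-violating samples
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    gb = -y[viol].sum() / len(X)
    w -= lr * gw
    b -= lr * gb

accuracy = float(np.mean(np.sign(X @ w + b) == y))
```

On well-separated features like these, the classifier reaches high training accuracy; the reported 96-100% accuracies similarly reflect highly discriminative topomap features.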

  19. A System for True and False Memory Prediction Based on 2D and 3D Educational Contents and EEG Brain Signals

    PubMed Central

    2016-01-01

    We studied the impact of 2D and 3D educational contents on learning and memory recall using electroencephalography (EEG) brain signals. For this purpose, we adopted a classification approach that predicts true and false memories for both short-term memory (STM) and long-term memory (LTM) and helps to decide whether there is a difference between the impact of 2D and 3D educational contents. In this approach, EEG brain signals are converted into topomaps, discriminative features are extracted from them, and finally a support vector machine (SVM) is employed to predict brain states. For data collection, half of sixty-eight healthy individuals watched the learning material in 2D format whereas the rest watched the same material in 3D format. After the learning task, memory recall tasks were performed after 30 minutes (STM) and two months (LTM), and EEG signals were recorded. For STM, a prediction accuracy of 97.5% was achieved for 3D and 96.6% for 2D; for LTM, it was 100% for both 2D and 3D. The statistical analysis of the results suggested that 2D and 3D materials differ little in their impact on learning and memory recall, for both STM and LTM. PMID:26819593

  20. Exploiting content-based networking for fine granularity multireceiver video streaming

    NASA Astrophysics Data System (ADS)

    Eide, Viktor S. W.; Eliassen, Frank; Michaelsen, Jørgen A.

    2005-01-01

    Efficient delivery of video data over computer networks has been studied extensively for decades. Still, multi-receiver video delivery represents a challenge. The challenge is complicated by heterogeneity in network availability, end node capabilities, and receiver preferences. This paper demonstrates that content-based networking is a promising technology for efficient multi-receiver video streaming. The contribution of this work is the bridging of content-based networking with techniques from the fields of video compression and streaming. In the presented approach, each video receiver is provided with fine-grained selectivity along different video quality dimensions, such as region of interest, signal-to-noise ratio, colors, and temporal resolution. Efficient delivery, in terms of network utilization and end node processing requirements, is maintained. A prototype is implemented in the Java programming language and the software is available as open source. Experimental results are presented which demonstrate the feasibility of our approach.
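The per-dimension selectivity described above can be sketched as content-based subscription matching; the attribute names below are hypothetical layer labels, not the prototype's actual schema:

```python
# Receivers express interest as predicates over event attributes; the
# network forwards an event only to subscribers whose predicate it
# satisfies. Attribute names below are hypothetical layer labels.

def matches(subscription, event):
    """True if every subscribed attribute is within its allowed limit
    (all dimensions here are layer numbers; lower = more basic)."""
    return all(event.get(attr, 0) <= limit
               for attr, limit in subscription.items())

# A low-capability receiver: base temporal layer, one SNR refinement.
sub = {"temporal_layer": 0, "snr_layer": 1}

events = [
    {"temporal_layer": 0, "snr_layer": 0},  # base layer
    {"temporal_layer": 0, "snr_layer": 1},  # SNR refinement
    {"temporal_layer": 1, "snr_layer": 0},  # higher frame rate
]
delivered = [e for e in events if matches(sub, e)]
```

Because filtering happens inside the network, each receiver gets only the layers it subscribed to, which is what keeps network utilization and end-node processing low.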

  1. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
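A crude version of caption detection on reduced images can be sketched as follows, flagging rows whose horizontal gradient energy is unusually high (the paper's actual detector on MPEG DC images is more elaborate):

```python
import numpy as np

def caption_rows(dc_image, frac=0.5):
    """Flag rows of a reduced (DC-coefficient) luminance image whose
    horizontal gradient energy is unusually high -- a crude proxy for
    the dense vertical edges produced by overlaid caption text."""
    grad = np.abs(np.diff(dc_image.astype(float), axis=1))
    energy = grad.sum(axis=1)
    threshold = energy.mean() + frac * energy.std()
    return energy > threshold

# synthetic reduced frame: flat background plus a high-contrast
# "text band" of alternating bright/dark blocks near the bottom
img = np.full((36, 44), 60.0)
img[30:33, 4:40] = np.where(np.arange(36) % 2 == 0, 220.0, 20.0)
rows = caption_rows(img)
```

Working on such reduced images is what lets the method skip full-frame MPEG decompression.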

  2. Exploiting content-based networking for fine granularity multireceiver video streaming

    NASA Astrophysics Data System (ADS)

    Eide, Viktor S. W.; Eliassen, Frank; Michaelsen, Jørgen A.

    2004-12-01

    Efficient delivery of video data over computer networks has been studied extensively for decades. Still, multi-receiver video delivery represents a challenge. The challenge is complicated by heterogeneity in network availability, end node capabilities, and receiver preferences. This paper demonstrates that content-based networking is a promising technology for efficient multi-receiver video streaming. The contribution of this work is the bridging of content-based networking with techniques from the fields of video compression and streaming. In the presented approach, each video receiver is provided with fine grained selectivity along different video quality dimensions, such as region of interest, signal to noise ratio, colors, and temporal resolution. Efficient delivery, in terms of network utilization and end node processing requirements, is maintained. A prototype is implemented in the Java programming language and the software is available as open source. Experimental results are presented which demonstrate the feasibility of our approach.

  3. Evaluation of educational content of YouTube videos relating to neurogenic bladder and intermittent catheterization

    PubMed Central

    Ho, Matthew; Stothers, Lynn; Lazare, Darren; Tsang, Brian; Macnab, Andrew

    2015-01-01

    Introduction: Many patients conduct internet searches to manage their own health problems, to decide if they need professional help, and to corroborate information given in a clinical encounter. Good information can improve patients’ understanding of their condition and their self-efficacy. Patients with spinal cord injury (SCI) featuring neurogenic bladder (NB) require knowledge and skills related to their condition and need for intermittent catheterization (IC). Methods: Information quality was evaluated in videos accessed via YouTube relating to NB and IC using search terms “neurogenic bladder intermittent catheter” and “spinal cord injury intermittent catheter.” Video content was independently rated by 3 investigators using criteria based on European Urological Association (EAU) guidelines and established clinical practice. Results: In total, 71 videos met the inclusion criteria. Of these, 12 (17%) addressed IC and 50 (70%) contained information on NB. The remaining videos met the inclusion criteria but did not contain information relevant to either IC or NB. Analysis indicated poor overall quality of information; some videos contained information contradictory to EAU guidelines for IC. High-quality videos were randomly distributed by YouTube. IC videos featuring a healthcare narrator scored significantly higher than patient-narrated videos, but not higher than videos with a merchant narrator. About half of the videos contained commercial content. Conclusions: Some good-quality educational videos about NB and IC are available on YouTube, but most are poor. The videos deemed good quality were not prominently ranked by the YouTube search algorithm; consequently, users are less likely to access them. Study limitations include the limit of 50 videos per category and the use of a de novo rating tool. Information quality in videos with healthcare narrators was not higher than in those featuring merchant narrators. Better material is required to improve patients

  4. Levels of Interaction and Proximity: Content Analysis of Video-Based Classroom Cases

    ERIC Educational Resources Information Center

    Kale, Ugur

    2008-01-01

    This study employed content analysis techniques to examine video-based cases of two websites that exemplify learner-centered pedagogies for pre-service teachers to carry out in their teaching practices. The study focused on interaction types and physical proximity levels between students and teachers observed in the videos. The findings regarding…

  5. Content-weighted video quality assessment using a three-component image model

    NASA Astrophysics Data System (ADS)

    Li, Chaofeng; Bovik, Alan Conrad

    2010-01-01

    Objective image and video quality measures play important roles in numerous image and video processing applications. In this work, we propose a new content-weighted method for full-reference (FR) video quality assessment using a three-component image model. Using the idea that different image regions have different perceptual significance relative to quality, we deploy a model that classifies image local regions according to their image gradient properties, then apply variable weights to structural similarity image index (SSIM) [and peak signal-to-noise ratio (PSNR)] scores according to region. A frame-based video quality assessment algorithm is thereby derived. Experimental results on the Video Quality Experts Group (VQEG) FR-TV Phase 1 test dataset show that the proposed algorithm outperforms existing video quality assessment methods.
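The region classification and weighted pooling idea can be sketched as follows; the gradient thresholds and region weights are illustrative, not the ones derived in the paper:

```python
import numpy as np

def three_component_weights(gray, w_edge=1.0, w_texture=0.7, w_smooth=0.4):
    """Classify pixels into edge / texture / smooth regions by gradient
    magnitude and return a per-pixel weight map. Thresholds and weights
    here are illustrative, not the ones derived in the paper."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    g_max = mag.max() if mag.max() > 0 else 1.0
    return np.where(mag > 0.12 * g_max, w_edge,
                    np.where(mag > 0.06 * g_max, w_texture, w_smooth))

def weighted_pool(quality_map, weights):
    """Content-weighted pooling of a per-pixel quality map (e.g. SSIM)."""
    return float((quality_map * weights).sum() / weights.sum())

# toy frame with a vertical step edge; a perfect quality map pools to 1
img = np.zeros((8, 8))
img[:, 4:] = 100.0
w = three_component_weights(img)
score = weighted_pool(np.ones((8, 8)), w)
```

Averaging the pooled per-frame scores over time then yields the frame-based video quality score described above.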

  6. Video quality assessment using content-weighted spatial and temporal pooling method

    NASA Astrophysics Data System (ADS)

    Li, Chaofeng; Pan, Feng; Wu, Xiaojun; Ju, Yiwen; Yuan, Yun-Hao; Fang, Wei

    2015-09-01

    Video quality assessment plays an important role in video processing and communication applications. We propose a full reference video quality metric by combining a content-weighted spatial pooling strategy with a temporal pooling strategy. All pixels in a frame are classified into edge, texture, and smooth regions, and their structural similarity image index (SSIM) maps are divided into increasing and saturated regions by the curve of their SSIM values, then a content weight method is applied to increasing regions to get the score of an image frame. Finally, a temporal pooling method is used to get the overall video quality. Experimental results on the LIVE and IVP video quality databases show our proposed method works well in matching subjective scores.
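The temporal pooling stage can be illustrated with a simple worst-percentile scheme (a common stand-in; the paper's pooling rule based on increasing/saturated SSIM regions differs):

```python
import numpy as np

def temporal_pool(frame_scores, worst_frac=0.2):
    """Pool per-frame quality scores over time by averaging only the
    worst fraction of frames, reflecting that brief severe degradations
    dominate perceived video quality."""
    s = np.sort(np.asarray(frame_scores, dtype=float))
    k = max(1, int(round(worst_frac * len(s))))
    return float(s[:k].mean())

# eight good frames and two degraded ones: the degraded frames
# dominate the pooled score
video_score = temporal_pool([0.9] * 8 + [0.5, 0.6])
```

Here the pooled score is 0.55 rather than the plain mean of 0.83, capturing the perceptual impact of the two bad frames.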

  7. Enhancing Secondary Science Content Accessibility with Video Games

    ERIC Educational Resources Information Center

    Marino, Matthew T.; Becht, Kathleen M.; Vasquez, Eleazar, III; Gallup, Jennifer L.; Basham, James D.; Gallegos, Benjamin

    2014-01-01

    Mobile devices, including iPads, tablets, and so on, are common in high schools across the country. Unfortunately, many secondary teachers see these devices as distractions rather than tools for scaffolding instruction. This article highlights current research related to the use of video games as a means to increase the cognitive and social…

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. Scene change detection and content-based sampling of video sequences

    NASA Astrophysics Data System (ADS)

    Shahraray, Behzad

    1995-04-01

    Digital images and image sequences (video) are a significant component of multimedia information systems, and by far the most demanding in terms of storage and transmission requirements. Content-based temporal sampling of video frames is proposed as an efficient method for representing the visual information contained in the video sequence by using only a small subset of the video frames. This involves the identification and retention of frames at which the content of the scene is 'significantly' different from the previously retained frames. It is argued that the criteria used to measure the significance of a change in the content of the video frames are subjective, and performing the task of content-based sampling of image sequences, in general, requires a high level of image understanding. However, a significant subset of the points at which the contextual information in the video frames changes significantly can be detected by a 'scene change detection' method. The definition of a scene change is generalized to include not only the abrupt transitions between shots, but also gradual transitions between shots resulting from video editing modes, and inter-shot changes induced by camera operations. A method for detecting abrupt and gradual scene changes is discussed. Criteria for detecting scene changes induced by camera operations are proposed. Scene matching is proposed as a means of achieving further reductions in the storage and transmission requirements.
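Abrupt scene changes of the kind discussed above are often detected by comparing consecutive frame histograms; a minimal sketch:

```python
import numpy as np

def shot_boundaries(frames, bins=16, thresh=0.5):
    """Detect abrupt scene changes by the L1 distance between
    normalized gray-level histograms of consecutive frames."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        if np.abs(hists[i] - hists[i - 1]).sum() > thresh:
            cuts.append(i)
    return cuts

# two synthetic "shots": dark frames followed by bright frames
dark = [np.full((32, 32), 40) for _ in range(3)]
bright = [np.full((32, 32), 200) for _ in range(3)]
cuts = shot_boundaries(dark + bright)
```

Retaining one frame per detected segment gives a content-based temporal sampling of the sequence; gradual transitions and camera-induced changes need the more elaborate criteria the paper proposes.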

  11. Software architecture as a freedom for 3D content providers and users along with independency on purposes and used devices

    NASA Astrophysics Data System (ADS)

    Sultana, Razia; Christ, Andreas; Meyrueis, Patrick

    2014-05-01

    The improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays, it is possible to overlay synthetic information on real images, or even to play 3D on-line games on smart phones or other mobile devices. Hence the use of 3D data for business, and especially for education purposes, is ubiquitous. Because mobile phones are always at hand and ready to use, they are considered the most promising communication devices. The total number of mobile phone users is increasing all over the world every day, which makes mobile phones the most suitable device for reaching a huge number of end clients, whether for education or for business purposes. There are different standards, protocols and specifications to establish communication among different devices, but no initiative has been taken so far to ensure that the data sent through this communication process can be understood and used by the destination device. Since not all devices are able to deal with all kinds of 3D data formats, and it is not realistic to maintain a different version of the same data for each destination device, a general solution is necessary. The architecture proposed in this paper provides device- and purpose-independent visibility of 3D data, any time and anywhere, to the right person in a suitable format. No solution is without limitations. The architecture is implemented in a prototype for an experimental validation, which also shows the difference between theory and practice.
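The device- and purpose-independent delivery idea can be sketched as capability-based format selection on the server; the format names and capability fields below are hypothetical, not those of the proposed architecture:

```python
# Hypothetical capability negotiation in the spirit of the proposed
# architecture: the server picks the richest 3D representation the
# requesting device can actually render, instead of shipping one
# fixed format to every client.

FORMATS = [  # ordered from richest to most basic
    ("x3d_full", {"gpu": True, "min_ram_mb": 1024}),
    ("x3d_lite", {"gpu": True, "min_ram_mb": 256}),
    ("rendered_2d_views", {"gpu": False, "min_ram_mb": 64}),
]

def select_format(device):
    """Return the richest format whose requirements the device meets."""
    for name, req in FORMATS:
        if (device["ram_mb"] >= req["min_ram_mb"]
                and (device["gpu"] or not req["gpu"])):
            return name
    return None

phone = {"gpu": False, "ram_mb": 512}
```

A GPU-less phone thus receives pre-rendered views, while a capable tablet gets the full 3D model of the same content.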

  12. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    NASA Astrophysics Data System (ADS)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th year medical students who participated in the general ENT course at Aachen University Hospital were asked to assess depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th year students, who were shown the material monoscopically on a conventional laptop, served as a control group. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  13. Application of MPEG-7 descriptors for content-based indexing of sports videos

    NASA Astrophysics Data System (ADS)

    Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer

    2003-06-01

    The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time-consuming, an automatic solution is desirable. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports, and present our application for the automatic annotation of equestrian sports videos. In particular, we concentrate on MPEG-7 based feature extraction and content description. We apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Having determined single shot positions as well as the visual highlights, the information is jointly stored together with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end to provide content-based access to the video stream, as well as further content-based queries and navigation on a video-on-demand streaming server.
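The MPEG-7 edge information mentioned above is based on quantized edge orientations; a simplified, whole-frame sketch of an edge-histogram-style descriptor follows (the real MPEG-7 EHD computes five edge types over a 4x4 grid of sub-images):

```python
import numpy as np

def edge_direction_histogram(gray, bins=4):
    """Whole-frame histogram of quantized gradient orientations
    (0/45/90/135-degree bins for bins=4), normalized to sum to 1."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)
    idx = np.floor(ang / (180.0 / bins)).astype(int) % bins
    hist = np.bincount(idx[mag > 1e-6], minlength=bins).astype(float)
    total = hist.sum()
    return hist / total if total else hist

# an image whose intensity ramps left-to-right has purely horizontal
# gradients, i.e. all edge energy in the 0-degree bin
ramp = np.tile(np.arange(16.0), (16, 1))
hist = edge_direction_histogram(ramp)
```

Descriptors of this kind, compared across frames, support both the cut detection and the obstacle localization described in the abstract.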

  14. Adolescents’ exposure to tobacco and alcohol content in YouTube music videos

    PubMed Central

    Murray, Rachael; Lewis, Sarah; Leonardi‐Bee, Jo; Dockrell, Martin; Britton, John

    2015-01-01

    Abstract Aims To quantify tobacco and alcohol content, including branding, in popular contemporary YouTube music videos; and measure adolescent exposure to such content. Design Ten‐second interval content analysis of alcohol, tobacco or electronic cigarette imagery in all UK Top 40 YouTube music videos during a 12‐week period in 2013/14; on‐line national survey of adolescent viewing of the 32 most popular high‐content videos. Setting Great Britain. Participants A total of 2068 adolescents aged 11–18 years who completed an on‐line survey. Measurements Occurrence of alcohol, tobacco and electronic cigarette use, implied use, paraphernalia or branding in music videos and proportions and estimated numbers of adolescents who had watched sampled videos. Findings Alcohol imagery appeared in 45% [95% confidence interval (CI) = 33–51%] of all videos, tobacco in 22% (95% CI = 13–27%) and electronic cigarettes in 2% (95% CI = 0–4%). Alcohol branding appeared in 7% (95% CI = 2–11%) of videos, tobacco branding in 4% (95% CI = 0–7%) and electronic cigarettes in 1% (95% CI = 0–3%). The most frequently observed alcohol, tobacco and electronic cigarette brands were, respectively, Absolut Tune, Marlboro and E‐Lites. At least one of the 32 most popular music videos containing alcohol or tobacco content had been seen by 81% (95% CI = 79%, 83%) of adolescents surveyed, and of these 87% (95% CI = 85%, 89%) had re‐watched at least one video. The average number of videos seen was 7.1 (95% CI = 6.8, 7.4). Girls were more likely to watch and also re‐watch the videos than boys, P < 0.001. Conclusions Popular YouTube music videos watched by a large number of British adolescents, particularly girls, include significant tobacco and alcohol content, including branding. PMID:25516167

  15. Semi-automated query construction for content-based endomicroscopy video retrieval.

    PubMed

    Tafreshi, Marzieh Kohandani; Linard, Nicolas; André, Barbara; Ayache, Nicholas; Vercauteren, Tom

    2014-01-01

    Content-based video retrieval has shown promising results to help physicians in their interpretation of medical videos in general and endomicroscopic ones in particular. Defining a relevant query for CBVR can however be a complex and time-consuming task for non-expert and even expert users. Indeed, uncut endomicroscopy videos may very well contain images corresponding to a variety of different tissue types. Using such uncut videos as queries may lead to drastic performance degradations for the system. In this study, we propose a semi-automated methodology that allows the physician to create meaningful and relevant queries in a simple and efficient manner. We believe that this will lead to more reproducible and more consistent results. The validation of our method is divided into two approaches. The first one is an indirect validation based on per video classification results with histopathological ground-truth. The second one is more direct and relies on perceived inter-video visual similarity ground-truth. We demonstrate that our proposed method significantly outperforms the approach with uncut videos and approaches the performance of a tedious manual query construction by an expert. Finally, we show that the similarity perceived between videos by experts is significantly correlated with the inter-video similarity distance computed by our retrieval system. PMID:25333105

  16. Dislocation Content Measured Via 3D HR-EBSD Near a Grain Boundary in an AlCu Oligocrystal

    NASA Technical Reports Server (NTRS)

    Ruggles, Timothy; Hochhalter, Jacob; Homer, Eric

    2016-01-01

    Interactions between dislocations and grain boundaries are poorly understood and crucial to mesoscale plasticity modeling. Much of our understanding of dislocation-grain boundary interaction comes from atomistic simulations and TEM studies, both of which are extremely limited in scale. High angular resolution EBSD-based continuum dislocation microscopy provides a way of measuring dislocation activity at length scales and accuracies relevant to crystal plasticity, but it is limited as a two-dimensional technique, meaning the character of the grain boundary and the complete dislocation activity are difficult to recover. However, the commercialization of plasma FIB dual-beam microscopes has made 3D EBSD studies all the more feasible. The objective of this work is to apply high angular resolution cross-correlation EBSD to a 3D EBSD data set collected by serial sectioning in a FIB to characterize dislocation interaction with a grain boundary. Three-dimensional high angular resolution cross-correlation EBSD analysis was applied to an AlCu oligocrystal to measure dislocation densities around a grain boundary. Distortion derivatives associated with the plasma FIB serial sectioning were higher than expected, possibly due to geometric uncertainty between layers. Future work will focus on mitigating the geometric uncertainty and examining more regions of interest along the grain boundary to glean information on dislocation-grain boundary interaction.
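The continuum link between the elastic distortion field measured by HR-EBSD and dislocation content is Nye's tensor; one common convention (signs and index placement vary across the literature) is:

```latex
\alpha_{ij} = -\epsilon_{jkl}\,\frac{\partial \beta^{e}_{il}}{\partial x_{k}},
\qquad
\rho_{\mathrm{GND}} \gtrsim \frac{1}{b}\,\lVert \boldsymbol{\alpha} \rVert
```

Here $\beta^{e}$ is the elastic distortion recovered by cross-correlation, $b$ the Burgers vector magnitude, and $\rho_{\mathrm{GND}}$ a lower bound on the geometrically necessary dislocation density. On a single 2D section only some of the derivatives $\partial/\partial x_{k}$ are accessible, which is precisely why the serial-sectioned 3D data set above allows a more complete recovery of the dislocation content.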

  17. The Impact of Rock Videos and Music with Suicidal Content on Thoughts and Attitudes about Suicide.

    ERIC Educational Resources Information Center

    Rustad, Robin A.; Small, Jacob E.; Jobes, David A.; Safer, Martin A.; Peterson, Rebecca J.

    2003-01-01

    Two experiments exposed college student volunteers to rock music with or without suicidal content. Music and videos with suicide content appeared to prime implicit cognitions related to suicide but did not affect variables associated with increased suicide risk. (Contains 60 references and 3 tables.) (Author/JBJ)

  18. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
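A minimal triage of the three unusable-scene types can be sketched with plain frame differencing (thresholds are illustrative; the paper relies on motion detection and optical flow):

```python
import numpy as np

def triage_clip(frames, static_thresh=1.0, fast_thresh=40.0):
    """Crude FMV triage by mean absolute frame-to-frame difference:
    'static' when almost nothing moves, 'fast_motion' when the global
    change is very large (a proxy for fast camera motion), otherwise
    'usable'. Thresholds are illustrative only."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    score = float(np.mean(diffs))
    if score < static_thresh:
        return "static"
    if score > fast_thresh:
        return "fast_motion"
    return "usable"
```

Tags produced this way can be written into the meta-data stream so that analysts (or downstream algorithms) skip clips flagged as static or fast-motion; occlusion detection needs the richer optical-flow analysis described above.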

  19. Blind summarization: content-adaptive video summarization using time-series analysis

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Radhakrishnan, Regunathan; Peker, Kadir A.

    2006-01-01

    Severe complexity constraints on consumer electronic devices motivate us to investigate general-purpose video summarization techniques that are able to apply a common hardware setup to multiple content genres. On the other hand, we know that high quality summaries can only be produced with domain-specific processing. In this paper, we present a time-series analysis based video summarization technique that provides a general core to which we are able to add small content-specific extensions for each genre. The proposed time-series analysis technique consists of unsupervised clustering of samples taken through sliding windows from the time series of features obtained from the content. We classify content into two broad categories, scripted content such as news and drama, and unscripted content such as sports and surveillance. The summarization problem then reduces to either finding semantic boundaries of the scripted content or detecting highlights in the unscripted content. The proposed technique is essentially an event detection technique and is thus best suited to unscripted content; however, we also find applications to scripted content. We thoroughly examine the trade-off between content-neutral and content-specific processing for effective summarization for a number of genres, and find that our core technique enables us to minimize the complexity of the content-specific processing and to postpone it to the final stage. We achieve the best results with unscripted content such as sports and surveillance video in terms of quality of summaries and minimizing content-specific processing. For other genres such as drama, we find that more content-specific processing is required. We also find that judicious choice of key audio-visual object detectors enables us to minimize the complexity of the content-specific processing while maintaining its applicability to a broad range of genres.
We will present a demonstration of our proposed technique at the conference.
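
    The core idea — unsupervised clustering of sliding windows over a feature time series, with rare clusters marking candidate events — can be sketched as follows. The window size, step, and the deterministic 2-means seeding are illustrative assumptions; the abstract does not specify the clustering details.

```python
import numpy as np

def sliding_windows(x, size, step):
    """Overlapping windows over a 1-D feature time series."""
    return np.array([x[i:i + size] for i in range(0, len(x) - size + 1, step)])

def two_means(W, iters=10):
    """Minimal deterministic 2-means: seed one center at the overall mean
    and the other at the farthest window, then alternate assign/update."""
    c0 = W.mean(axis=0)
    c1 = W[np.linalg.norm(W - c0, axis=1).argmax()].copy()
    centers = np.stack([c0, c1])
    labels = np.zeros(len(W), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(W[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = W[labels == k].mean(axis=0)
    return labels

# Flat audio-energy series with one burst standing in for a "highlight".
x = np.zeros(100)
x[60:70] = 10.0
W = sliding_windows(x, size=10, step=5)
labels = two_means(W)
minority = int(np.bincount(labels).argmin())
print(np.where(labels == minority)[0])  # → [12], the window inside the burst
```

    The minority cluster plays the role of the detected event; genre-specific extensions would then refine which rare windows count as highlights.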

  20. Know your data: understanding implicit usage versus explicit action in video content classification

    NASA Astrophysics Data System (ADS)

    Yew, Jude; Shamma, David A.

    2011-02-01

    In this paper, we present a method for video category classification using only social metadata from websites like YouTube. In place of content analysis, we utilize communicative and social contexts surrounding videos as a means to determine a categorical genre, e.g. Comedy, Music. We hypothesize that video clips belonging to different genre categories would have distinct signatures and patterns that are reflected in their collected metadata. In particular, we define and describe social metadata as usage or action to aid in classification. We trained a Naive Bayes classifier to predict categories from a sample of 1,740 YouTube videos representing the top five genre categories. Using just a small number of the available metadata features, we compare the classifications produced by our Naive Bayes classifier with those provided by the uploader of that particular video. Compared to random predictions with the YouTube data (21% accurate), our classifier attained a mediocre 33% accuracy in predicting video genres. However, we found that the accuracy of our classifier significantly improves by nominal factoring of the explicit data features. By factoring the ratings of the videos in the dataset, the classifier was able to accurately predict the genres of 75% of the videos. We argue that the patterns of social activity found in the metadata are not just meaningful in their own right, but are indicative of the meaning of the shared video content. The results presented by this project represent a first step in investigating the potential meaning and significance of social metadata and its relation to the media experience.
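
    A categorical Naive Bayes over factored metadata features, as used here, can be sketched from scratch. The feature names (`rating`, `comments`) and toy records below are hypothetical stand-ins for the paper's nominally factored YouTube metadata:

```python
from collections import Counter, defaultdict

def train_nb(rows):
    """rows: (features_dict, label) pairs. Returns class priors and
    per-(label, feature) value counts."""
    priors = Counter(lbl for _, lbl in rows)
    counts = defaultdict(Counter)   # (label, feature) -> Counter of values
    for feats, lbl in rows:
        for f, v in feats.items():
            counts[(lbl, f)][v] += 1
    return priors, counts

def predict_nb(priors, counts, feats):
    """Pick the label maximizing prior * product of smoothed likelihoods."""
    best, best_p = None, -1.0
    total = sum(priors.values())
    for lbl, n in priors.items():
        p = n / total
        for f, v in feats.items():
            c = counts[(lbl, f)]
            p *= (c[v] + 1) / (sum(c.values()) + 2)   # Laplace smoothing
        if p > best_p:
            best, best_p = lbl, p
    return best

# Hypothetical factored metadata: rating bucket and comment-volume bucket.
data = [({"rating": "high", "comments": "many"}, "Music"),
        ({"rating": "high", "comments": "many"}, "Music"),
        ({"rating": "low",  "comments": "few"},  "Comedy"),
        ({"rating": "low",  "comments": "many"}, "Comedy")]
priors, counts = train_nb(data)
print(predict_nb(priors, counts, {"rating": "high", "comments": "many"}))  # → Music
```

    Bucketing ("nominal factoring") of continuous features such as ratings is what the authors credit for the jump from 33% to 75% accuracy.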

  1. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
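
    The fusion step described above — combining disparity fields from several retrieved stereopairs by taking the per-pixel median to suppress poor matches — can be sketched as follows (the toy 2x2 fields are illustrative):

```python
import numpy as np

def fuse_disparities(disparity_fields):
    """Per-pixel median of disparity fields retrieved from matched
    stereopairs; the median suppresses outliers from poor matches."""
    return np.median(np.stack(disparity_fields), axis=0)

# Three 2x2 disparity fields; the third retrieval is a poor match.
d1 = np.array([[4.0, 4.0], [2.0, 2.0]])
d2 = np.array([[5.0, 3.0], [2.0, 1.0]])
d3 = np.array([[40., 30.], [20., 10.]])
print(fuse_disparities([d1, d2, d3]))   # per-pixel medians: [[5. 4.] [2. 2.]]
```

    The fused field would then be applied to the 2D query to synthesize the right view, with occlusions and newly exposed areas handled separately.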

  2. A content-based retrieval system for UAV-like video and associated metadata

    NASA Astrophysics Data System (ADS)

    O'Connor, N. E.; Duffy, T.; Ferguson, P.; Gurrin, C.; Lee, H.; Sadlier, D. A.; Smeaton, A. F.; Zhang, K.

    2008-04-01

    In this paper we provide an overview of a content-based retrieval (CBR) system that has been specifically designed for handling UAV video and associated meta-data. Our emphasis in designing this system is on managing large quantities of such information and providing intuitive and efficient access mechanisms to this content, rather than on analysis of the video content. The retrieval unit in our system is termed a "trip". At capture time, each trip consists of an MPEG-1 video stream and a set of time-stamped GPS locations. An analysis process automatically selects and associates GPS locations with the video timeline. The indexed trip is then stored in a shared trip repository. The repository forms the backend of an MPEG-21-compliant Web 2.0 application for subsequent querying, browsing, annotation and video playback. The system interface allows users to search/browse across the entire archive of trips and, depending on their access rights, to annotate other users' trips with additional information. Interaction with the CBR system is via a novel interactive map-based interface. This interface supports content access by time, date, region of interest on the map, previously annotated specific locations of interest and combinations of these. To develop such a system and investigate its practical usefulness in real world scenarios, clearly a significant amount of appropriate data is required. In the absence of a large volume of UAV data with which to work, we have simulated UAV-like data using GPS tagged video content captured from moving vehicles.

  3. An investigation of smoking cessation video content on YouTube.

    PubMed

    Richardson, Chris G; Vettese, Lisa; Sussman, Steve; Small, Sandra P; Selby, Peter

    2011-01-01

    This study examines smoking cessation content posted on youtube.com. The search terms "quit smoking" and "stop smoking" yielded 2,250 videos in October 2007. We examined the top 100 as well as 20 randomly selected videos. Of these, 82 were directly relevant to smoking cessation. Fifty-one were commercial productions that included antismoking messages and advertisements for hypnosis and NicoBloc fluid. Thirty-one were personally produced videos that described personal experiences with quitting, negative health effects, and advice on how to quit. Although smoking cessation content is being shared on YouTube, very little is based on strategies that have been shown to be effective. PMID:21599505

  4. Practical detection of spammers and content promoters in online video sharing systems.

    PubMed

    Benevenuto, Fabrício; Rodrigues, Tiago; Veloso, Adriano; Almeida, Jussara; Gonçalves, Marcos; Almeida, Virgílio

    2012-06-01

    A number of online video sharing systems, out of which YouTube is the most popular, provide features that allow users to post a video as a response to a discussion topic. These features open opportunities for users to introduce polluted content, or simply pollution, into the system. For instance, spammers may post an unrelated video as response to a popular one, aiming at increasing the likelihood of the response being viewed by a larger number of users. Moreover, content promoters may try to gain visibility to a specific video by posting a large number of (potentially unrelated) responses to boost the rank of the responded video, making it appear in the top lists maintained by the system. Content pollution may jeopardize the trust of users on the system, thus compromising its success in promoting social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we address the issue of detecting video spammers and promoters. Towards that end, we first manually build a test collection of real YouTube users, classifying them as spammers, promoters, and legitimate users. Using our test collection, we provide a characterization of content, individual, and social attributes that help distinguish each user class. We then investigate the feasibility of using supervised classification algorithms to automatically detect spammers and promoters, and assess their effectiveness in our test collection. While our classification approach succeeds at separating spammers and promoters from legitimate users, the high cost of manually labeling vast amounts of examples compromises its full potential in realistic scenarios. For this reason, we further propose an active learning approach that automatically chooses a set of examples to label, which is likely to provide the highest amount of information, drastically reducing the amount of required training data while maintaining comparable classification

  5. 3D SMoSIFT: three-dimensional sparse motion scale invariant feature transform for activity recognition from RGB-D videos

    NASA Astrophysics Data System (ADS)

    Wan, Jun; Ruan, Qiuqi; Li, Wei; An, Gaoyun; Zhao, Ruizhen

    2014-03-01

    Human activity recognition based on RGB-D data has received more attention in recent years. We propose a spatiotemporal feature named three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT) from RGB-D data for activity recognition. First, we build pyramids as scale space for each RGB and depth frame, and then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around keypoints, which are extracted from RGB-D data, are used to build 3D gradient and motion spaces. Then SIFT-like descriptors are calculated on both 3D spaces, respectively. The proposed feature is invariant to scale, transition, and partial occlusions. More importantly, the running time of the proposed feature is fast so that it is well-suited for real-time applications. We have evaluated the proposed feature under a bag of words model on three public RGB-D datasets: one-shot learning Chalearn Gesture Dataset, Cornell Activity Dataset-60, and MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even though there is only one training sample for each class.
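
    The Shi-Tomasi keypoint detection step the feature builds on can be sketched in numpy: the score at each pixel is the smaller eigenvalue of the local structure tensor, and high-scoring pixels are stable to track. The 3x3 box window is an illustrative choice; real pipelines (e.g. OpenCV's `goodFeaturesToTrack`) use Gaussian weighting and non-maximum suppression.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def shi_tomasi_response(img, win=3):
    """Minimum eigenvalue of the local structure tensor (the Shi-Tomasi
    corner score); high values mark keypoints that are stable to track."""
    gy, gx = np.gradient(img.astype(float))

    def box(a):  # box filter: local mean over a win x win neighbourhood
        pad = np.pad(a, win // 2)
        return sliding_window_view(pad, (win, win)).mean(axis=(2, 3))

    Sxx, Syy, Sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    tr, det = Sxx + Syy, Sxx * Syy - Sxy ** 2
    return tr / 2 - np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))  # smaller eigenvalue

# Step edges in both directions create a corner at (8, 8).
img = np.zeros((16, 16))
img[8:, 8:] = 1.0
R = shi_tomasi_response(img)
print(R[8, 8] > R[12, 8])  # corner scores above a pure edge point → True
```

    On an edge, one eigenvalue collapses to zero, so only true corners score highly; the sparse optical flow step then tracks those keypoints between frames.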

  6. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    NASA Astrophysics Data System (ADS)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopy has developed very rapidly and has been employed in many different fields, such as entertainment. Due to the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a novel survey of stereoscopic entertainment is presented, discussing the significant development of 3D cinema, the major development of 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewer's visual system. Some stereoscopic viewers are not satisfied: they are frustrated by wearing glasses, suffer visual fatigue, complain about the unavailability of 3D content, and/or report sickness. Therefore, we discuss stereoscopic visual discomfort and to what extent the viewer will experience eye fatigue while watching 3D content or playing 3D games. The solutions suggested in the literature for this problem are discussed.

  7. Video content analysis on body-worn cameras for retrospective investigation

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  8. Multiple Adaptations and Content-Adaptive FEC Using Parameterized RD Model for Embedded Wavelet Video

    NASA Astrophysics Data System (ADS)

    Yu, Ya-Huei; Ho, Chien-Peng; Tsai, Chun-Jen

    2007-12-01

    Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a pre-encoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of the content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.

  9. 3D printing of soft and wet systems benefit from hard-to-soft transition of transparent shape memory gels (presentation video)

    NASA Astrophysics Data System (ADS)

    Furukawa, Hidemitsu; Gong, Jin; Makino, Masato; Kabir, Md. Hasnat

    2014-04-01

    Recently we successfully developed novel transparent shape memory gels (SMG). The SMG memorize their original shapes during the gelation process. At room temperature, the SMG are elastic and show plasticity (yielding) under deformation. However, when heated above about 50˚C, the SMG undergo a hard-to-soft transition and return to their original shapes automatically. We focus on new soft and wet systems made of the SMG by 3-D printing technology.

  10. A content analysis of smoking fetish videos on YouTube: regulatory implications for tobacco control.

    PubMed

    Kim, Kyongseok; Paek, Hye-Jin; Lynn, Jordan

    2010-03-01

    This study examined the prevalence, accessibility, and characteristics of eroticized smoking portrayal, also referred to as smoking fetish, on YouTube. The analysis of 200 smoking fetish videos revealed that the smoking fetish videos are prevalent and accessible to adolescents on the website. They featured explicit smoking behavior by sexy, young, and healthy females, with the content corresponding to PG-13 and R movie ratings. We discuss a potential impact of the prosmoking image on youth according to social cognitive theory, and implications for tobacco control. PMID:20390676

  11. Is content really king? An objective analysis of the public's response to medical videos on YouTube.

    PubMed

    Desai, Tejas; Shariff, Afreen; Dhingra, Vibhu; Minhas, Deeba; Eure, Megan; Kats, Mark

    2013-01-01

    Medical educators and patients are turning to YouTube to teach and learn about medical conditions. These videos are from authors whose credibility cannot be verified and are not peer reviewed. As a result, studies that have analyzed the educational content of YouTube have reported dismal results. These studies have been unable to exclude videos created by questionable sources and for non-educational purposes. We hypothesize that medical education YouTube videos, authored by credible sources, are of high educational value and appropriately suited to educate the public. Credible videos about cardiovascular diseases were identified using the Mayo Clinic's Center for Social Media Health network. Content in each video was assessed by the presence/absence of 7 factors. Each video was also evaluated for understandability using the Suitability Assessment of Materials (SAM). User engagement measurements were obtained for each video. A total of 607 videos (35 hours) were analyzed. Half of all videos contained 3 educational factors: treatment, screening, or prevention. There was no difference between the number of educational factors present and any user engagement measurement (p = NS). SAM scores were higher in videos whose content discussed more educational factors (p<0.0001). However, none of the user engagement measurements correlated with higher SAM scores. Videos with greater educational content are more suitable for patient education but unable to engage users more than lower quality videos. It is unclear if the notion "content is king" applies to medical videos authored by credible organizations for the purposes of patient education on YouTube. PMID:24367517

  12. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  13. An Examination of Automatic Video Retrieval Technology on Access to the Contents of an Historical Video Archive

    ERIC Educational Resources Information Center

    Petrelli, Daniela; Auld, Daniel

    2008-01-01

    Purpose: This paper aims to provide an initial understanding of the constraints that historical video collections pose to video retrieval technology and the potential that online access offers to both archive and users. Design/methodology/approach: A small and unique collection of videos on customs and folklore was used as a case study. Multiple…

  14. A fast mode decision algorithm for multiview auto-stereoscopic 3D video coding based on mode and disparity statistic analysis

    NASA Astrophysics Data System (ADS)

    Ding, Cong; Sang, Xinzhu; Zhao, Tianqi; Yan, Binbin; Leng, Junmin; Yuan, Jinhui; Zhang, Ying

    2012-11-01

    Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is tremendously high. Fast algorithms are very desirable for the practical applications of MVC. Based on joint early termination, the selection of inter-view prediction, and the optimization of the Inter8×8 mode decision process by comparison, a fast macroblock (MB) mode selection algorithm is presented. Compared with the full mode decision in MVC, the experimental results show that the proposed algorithm reduces encoding time by 78.13% on average, and by up to 90.21%, with a small increase in bit rate and loss in PSNR.

  15. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig, rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  16. Aggression and sexual behavior in best-selling pornography videos: a content analysis update.

    PubMed

    Bridges, Ana J; Wosnitzer, Robert; Scharrer, Erica; Sun, Chyng; Liberman, Rachael

    2010-10-01

    This current study analyzes the content of popular pornographic videos, with the objectives of updating depictions of aggression, degradation, and sexual practices and comparing the study's results to previous content analysis studies. Findings indicate high levels of aggression in pornography in both verbal and physical forms. Of the 304 scenes analyzed, 88.2% contained physical aggression, principally spanking, gagging, and slapping, while 48.7% of scenes contained verbal aggression, primarily name-calling. Perpetrators of aggression were usually male, whereas targets of aggression were overwhelmingly female. Targets most often showed pleasure or responded neutrally to the aggression. PMID:20980228

  17. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch.

    PubMed

    De Silva, T; Uneri, A; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P; Siewerdsen, J H

    2016-04-21

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE  <  6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of  >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved
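
    One common formulation of the gradient correlation (GC) metric evaluated here is the mean normalized cross-correlation of the horizontal and vertical gradient images; the sketch below follows that formulation and is not necessarily identical to the study's implementation. Gradient-domain metrics like this downweight the low-frequency content where instrumentation mismatch lives.

```python
import numpy as np

def gradients(img):
    gy, gx = np.gradient(img.astype(float))
    return gx, gy

def gradient_correlation(fixed, moving, eps=1e-9):
    """Gradient correlation (GC): mean normalized cross-correlation of the
    horizontal and vertical gradient images of the two inputs."""
    score = 0.0
    for ga, gb in zip(gradients(fixed), gradients(moving)):
        a, b = ga - ga.mean(), gb - gb.mean()
        score += (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return score / 2.0  # in [-1, 1]; higher means better-aligned gradients

# A vertical step edge correlates with itself better than with a shifted copy.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
shifted = np.roll(img, 3, axis=1)
print(gradient_correlation(img, img) > gradient_correlation(img, shifted))  # → True
```

    In 3D-2D registration this score would be computed between the intraoperative radiograph and a digitally reconstructed radiograph of the CT, and maximized over pose by the multi-start optimizer.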

  18. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    NASA Astrophysics Data System (ADS)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE  <  6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of  >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved

  19. Tissue-plastinated vs. celloidin-embedded large serial sections in video, analog and digital photographic on-screen reproduction: a preliminary step to exact virtual 3D modelling, exemplified in the normal midface and cleft-lip and palate

    PubMed Central

    Landes, Constantin A; Weichert, Frank; Geis, Philipp; Wernstedt, Katrin; Wilde, Anja; Fritsch, Helga; Wagner, Mathias

    2005-01-01

    This study analyses tissue-plastinated vs. celloidin-embedded large serial sections, their inherent artefacts and aptitude with common video, analog or digital photographic on-screen reproduction. Subsequent virtual 3D microanatomical reconstruction will increase our knowledge of normal and pathological microanatomy for cleft-lip-palate (clp) reconstructive surgery. Of 18 fetal (six clp, 12 control) specimens, six randomized specimens (two clp) were BiodurE12-plastinated, sawn, burnished 90 µm thick transversely (five) or frontally (one), stained with azureII/methylene blue, and counterstained with basic-fuchsin (TP-AMF). Twelve remaining specimens (four clp) were celloidin-embedded, microtome-sectioned 75 µm thick transversely (ten) or frontally (two), and stained with haematoxylin–eosin (CE-HE). Computed planimetry gauged artefacts; structure differentiation was compared with light microscopy on video, analog and digital photography. Total artefact was 0.9% (TP-AMF) and 2.1% (CE-HE); TP-AMF showed higher colour contrast, gamut and luminance, and CE-HE more red contrast, saturation and hue (P < 0.4). All (100%) structures of interest were light microscopically discerned, 83% on video, 76% on analog photography and 98% in digital photography. Computed image analysis assessed the greatest colour contrast, gamut, luminance and saturation on video; the most detailed, colour-balanced and sharpest images were obtained with digital photography (P < 0.02). TP-AMF retained spatial oversight, covered the entire area of interest and should be combined in different specimens with CE-HE, which enables more refined muscle fibre reproduction. Digital photography is preferred for on-screen analysis. PMID:16050904

  20. In vitro optimization of EtNBS-PDT against hypoxic tumor environments with a tiered, high-content, 3D model optical screening platform.

    PubMed

    Klein, Oliver J; Bhayana, Brijesh; Park, Yong Jin; Evans, Conor L

    2012-11-01

    Hypoxia and acidosis are widely recognized as major contributors to the development of treatment resistant cancer. For patients with disseminated metastatic lesions, such as most women with ovarian cancer (OvCa), the progression to treatment resistant disease is almost always fatal. Numerous therapeutic approaches have been developed to eliminate treatment resistant carcinoma, including novel biologic, chemo, radiation, and photodynamic therapy (PDT) regimens. Recently, PDT using the cationic photosensitizer EtNBS was found to be highly effective against therapeutically unresponsive hypoxic and acidic OvCa cellular populations in vitro. To optimize this treatment regimen, we developed a tiered, high-content, image-based screening approach utilizing a biologically relevant OvCa 3D culture model to investigate a small library of side-chain modified EtNBS derivatives. The uptake, localization, and photocytotoxicity of these compounds on both the cellular and nodular levels were observed to be largely mediated by their respective ethyl side chain chemical alterations. In particular, EtNBS and its hydroxyl-terminated derivative (EtNBS-OH) were found to have similar pharmacological parameters, such as their nodular localization patterns and uptake kinetics. Interestingly, these two molecules were found to induce dramatically different therapeutic outcomes: EtNBS was found to be more effective in killing the hypoxic, nodule core cells with superior selectivity, while EtNBS-OH was observed to trigger widespread structural degradation of nodules. This breakdown of the tumor architecture can improve the therapeutic outcome and is known to synergistically enhance the antitumor effects of front-line chemotherapeutic regimens. These results, which would not have been predicted or observed using traditional monolayer or in vivo animal screening techniques, demonstrate the powerful capabilities of 3D in vitro screening approaches for the selection and optimization of therapeutic

  1. Dynamic heterogeneity of DNA methylation and hydroxymethylation in embryonic stem cell populations captured by single-cell 3D high-content analysis

    SciTech Connect

    Tajbakhsh, Jian; Stefanovski, Darko; Tang, George; Wawrowsky, Kolja; Liu, Naiyou; Fair, Jeffrey H.

    2015-03-15

    Cell-surface markers and transcription factors are being used in the assessment of stem cell fate and therapeutic safety, but display significant variability in stem cell cultures. We assessed nuclear patterns of 5-hydroxymethylcytosine (5hmC, associated with pluripotency), a second important epigenetic mark, and its combination with 5-methylcytosine (5mC, associated with differentiation), also in comparison to more established markers of pluripotency (Oct-4) and endodermal differentiation (FoxA2, Sox17) in mouse embryonic stem cells (mESC) over a 10-day differentiation course in vitro: by means of confocal and super-resolution imaging together with 3D high-content analysis, an essential tool in single-cell screening. In summary: 1) We did not measure any significant correlation of putative markers with global 5mC or 5hmC. 2) While average Oct-4 levels stagnated on a cell-population base (0.015 lnIU/day), Sox17 and FoxA2 increased 22-fold and 3-fold faster, respectively (Sox17: 0.343 lnIU/day; FoxA2: 0.046 lnIU/day). In comparison, global DNA methylation levels increased 4-fold faster (0.068 lnIU/day), and global hydroxymethylation declined at 0.046 lnIU/day, both with a better explanation of the temporal profile. 3) This progression was concomitant with the occurrence of distinct nuclear codistribution patterns that represented a heterogeneous spectrum of states in differentiation; converging to three major coexisting 5mC/5hmC phenotypes by day 10: 5hmC⁺/5mC⁻, 5hmC⁺/5mC⁺, and 5hmC⁻/5mC⁺ cells. 4) Using optical nanoscopy we could delineate the respective topologies of 5mC/5hmC colocalization in subregions of nuclear DNA: in the majority of 5hmC⁺/5mC⁺ cells 5hmC and 5mC predominantly occupied mutually exclusive territories resembling euchromatic and heterochromatic regions, respectively. Simultaneously, in a smaller subset of cells we observed a tighter colocalization of the two cytosine variants, presumably

  2. 2D to 3D conversion implemented in different hardware

    NASA Astrophysics Data System (ADS)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D material into 3D content is a hot topic for providers and, in general, for the success of 3D applications. It relies entirely on virtual view synthesis of a second view from the original 2D video. Disparity map (DM) estimation is the central task in 3D generation, but rendering novel images precisely remains a difficult problem. Different approaches to DM reconstruction exist; among them, manual and semiautomatic methods can produce high-quality DMs, but they are time consuming and computationally expensive. In this paper, several hardware implementations of frameworks for automatic 3D color video generation from a real 2D video sequence are proposed. The novel framework processes stereo pairs using the following blocks: CIE L*a*b* color space conversion; color segmentation by k-means on the a*b* color plane; DM estimation via pyramidal-scheme stereo matching between left and right images (or neighboring frames of a video); adaptive post-filtering; and, finally, anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink on a Windows 7 PC, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can run in real-time processing mode. The processing times, mean Structural Similarity Index Measure (SSIM), and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
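
The final block of the pipeline above, anaglyph generation, is simple enough to sketch. The following is a minimal illustration (not the authors' implementation; the array shapes and the red-cyan channel assignment are assumptions) of combining a stereo pair into one anaglyph frame:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph.

    left, right: H x W x 3 uint8 RGB images.
    The red channel comes from the left view and the
    green/blue channels from the right view.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red from left eye
    anaglyph[..., 1:] = right[..., 1:]   # green/blue from right eye
    return anaglyph

# Tiny synthetic stereo pair: the right view is the left view
# shifted horizontally, mimicking a constant disparity.
left = np.zeros((4, 8, 3), dtype=np.uint8)
left[:, 2:4] = 255
right = np.roll(left, 1, axis=1)
out = make_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then receives its own view; in the full system the second view would be the synthesized, disparity-compensated image rather than a captured one.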

  3. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. These systems use watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain, designed from a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and apply a dithering technique for invisibility. Extensive experiments show that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on attacks that commonly occur when HD-quality video is prepared for display on portable devices, including not only scaling and low bit-rate encoding but also malicious attacks such as format conversion and frame rate change.
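
The abstract does not spell out the embedding itself, so the sketch below shows a generic spatial-domain spread-spectrum watermark of the kind such schemes build on (all names and parameters are illustrative, not the paper's method): a key-seeded pseudo-random ±1 pattern is added to the pixel values of a frame, and detection correlates the frame against the same pattern.

```python
import numpy as np

def embed(frame, key, strength=3.0):
    """Additively embed a key-seeded pseudo-random +/-1 pattern
    into one grayscale frame (spatial domain)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=frame.shape)
    marked = frame.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255), pattern

def detect(frame, key):
    """Correlate the frame against the key's pattern; a clearly
    positive score indicates the watermark is present."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=frame.shape)
    centered = frame - frame.mean()
    return float((centered * pattern).mean())

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
marked, _ = embed(frame, key=42)
score_marked = detect(marked, key=42)   # clearly higher
score_clean = detect(frame, key=42)     # near zero
```

A practical scheme such as the paper's would additionally shape `strength` per pixel with a visual-system mask and survive scaling and re-encoding; this sketch only shows the embed/correlate core.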

  4. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their use grows, so does their number, and with it the challenge of identifying them correctly. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization, or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods, built to resist legitimate alterations of the model as well as malicious attacks intended to prevent correct allocation. The bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security, and runtime performance, as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR); the probability of hash collision is also calculated. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.
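
The paper's bit extraction methods are not described in the abstract. As a toy stand-in for the general idea (normalize away benign transforms, then derive bits from coarse shape statistics), the sketch below hashes a vertex set via a translation- and scale-normalized distance histogram; every detail here is an assumption for illustration, not the authors' algorithm:

```python
import numpy as np

def robust_mesh_hash(vertices, n_bits=32):
    """Toy robust hash for a 3D mesh vertex set.

    Normalizes for translation and scale, histograms the
    distances of vertices from the centroid, and emits one bit
    per bin by thresholding against the median bin count.
    Small perturbations leave most bits unchanged.
    """
    v = np.asarray(vertices, dtype=np.float64)
    v = v - v.mean(axis=0)                 # translation invariance
    d = np.linalg.norm(v, axis=1)
    d = d / d.max()                        # scale invariance
    hist, _ = np.histogram(d, bins=n_bits, range=(0.0, 1.0))
    return (hist > np.median(hist)).astype(np.uint8)

rng = np.random.default_rng(1)
mesh = rng.normal(size=(500, 3))
noisy = mesh + rng.normal(scale=1e-3, size=mesh.shape)  # small "attack"
h1, h2 = robust_mesh_hash(mesh), robust_mesh_hash(noisy)
hamming = int((h1 != h2).sum())  # small for small perturbations
```

Matching then compares Hamming distance against a threshold, which is exactly where the FAR/FRR trade-off studied in the paper arises.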

  5. Through the looking glass of a chemistry video game: Evaluating the effects of different MLEs presenting identical content material

    NASA Astrophysics Data System (ADS)

    Hillman, Dustin S.

    The primary goal of this study is to evaluate the effects of different media-based learning environments (MLEs) that present identical chemistry content material. This is done with four MLEs that utilize some or all components of a prototype chemistry-based video game. Examination of general chemistry student volunteers randomized to one of the four MLEs did not provide evidence that a higher level of interactivity resulted in a more effective MLE for the chemistry content. The data suggested that the cognitive load of playing the chemistry-based video game may have impaired recall of the chemistry content being presented, while students watching a movie of the chemistry-based video game were able to recall the chemistry content more efficiently. Further studies in this area need to address the overall cognitive load of the different MLEs to better determine the most effective MLE for this chemistry content.

  6. Gender (In)equality in Internet Pornography: A Content Analysis of Popular Pornographic Internet Videos.

    PubMed

    Klaassen, Marleen J E; Peter, Jochen

    2015-01-01

    Although Internet pornography is widely consumed and researchers have started to investigate its effects, we still know little about its content. This has resulted in contrasting claims about whether Internet pornography depicts gender (in)equality and whether this depiction differs between amateur and professional pornography. We conducted a content analysis of three main dimensions of gender (in)equality (i.e., objectification, power, and violence) in 400 popular pornographic Internet videos from the most visited pornographic Web sites. Objectification was depicted more often for women through instrumentality, but men were more frequently objectified through dehumanization. Regarding power, men and women did not differ in social or professional status, but men were more often shown as dominant and women as submissive during sexual activities. Except for spanking and gagging, violence occurred rather infrequently. Nonconsensual sex was also relatively rare. Overall, amateur pornography contained more gender inequality at the expense of women than professional pornography did. PMID:25420868

  7. iTVP: large-scale content distribution for live and on-demand video services

    NASA Astrophysics Data System (ADS)

    Kusmierek, Ewa; Czyrnek, Miroslaw; Mazurek, Cezary; Stroinski, Maciej

    2007-01-01

    iTVP is a system built for IP-based delivery of live TV programming, video-on-demand and audio-on-demand with interactive access over IP networks. It has a country-wide range and is designed to provide service to a high number of concurrent users. The iTVP prototype contains the backbone of a two-level hierarchical system designed for distribution of multimedia content from a content provider to end users. In this paper we present experience gained during a few months of the prototype's operation. We analyze the efficiency of the iTVP content distribution system and resource usage at various levels of the hierarchy. We also characterize content access patterns and their influence on system performance, as well as the quality experienced by users and user behavior. In our investigation, scalability is one of the most important aspects of the system performance evaluation. Although the range of the prototype operation is limited as far as the number of users and the content repository are concerned, we believe that data collected from such a large-scale operational system provides valuable insight into the efficiency of a CDN-type solution to large-scale streaming services. We find that the system exhibits good performance and low resource usage.

  8. Preschoolers' Recall of Science Content From Educational Videos Presented With and Without Songs

    NASA Astrophysics Data System (ADS)

    Schechter, Rachel L.

    This experimental investigation evaluated the impact of educational songs on a child's ability to recall scientific content from an educational television program. Preschoolers' comprehension of the educational content was examined by measuring children's ability to recall the featured science content (the function of a pulley and its parts) and their use of the precise scientific terms presented in the episode. A total of 91 preschoolers were included (3-5 years old). Clusters of children were randomly assigned to a control group or one of three video groups: (a) Dialogue Only, which did not include a song; (b) Dialogue Plus Lyrics, which included a song; or (c) Lyrics Only, which consisted of a song, played twice. Results from interviews suggested that children from all video groups (lyrics and/or dialogue) were able to explain the form and function of a pulley better than the control group. The data suggested that children from the Lyrics Only group understood the science content because of the visual imagery, not through the information provided in the lyrics. In terms of precise vocabulary terms, significantly more children in the Dialogue Only group recalled at least one precise term from the program compared to the Lyrics Only group. Looking at the interview as a whole, the children's responses suggested different levels of scientific understanding. Children would require additional teacher-led instruction to deepen their scientific understanding and to clarify any misconceptions. This paper discusses implications of these findings for teachers using multi-media tools in the science classroom and producers creating new educational programming for television and other platforms.

  9. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for educational purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care. PMID:25620087

  10. Automated 3D reconstruction of interiors with multiple scan views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.

    1998-12-01

    This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction; an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for the purpose of automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available online via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.

  11. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  12. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The second product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  13. GPM 3D Flyby Video of Lester

    NASA Video Gallery

    On Aug. 25, GPM found rain was falling at a rate of over 54 mm (2.1 inches) per hour in rain bands east of Lester's center. Cloud top heights were reaching about 12km (7.4 miles) in the tallest sto...

  14. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  15. Alcohol and Tobacco Content in UK Video Games and Their Association with Alcohol and Tobacco Use Among Young People.

    PubMed

    Cranwell, Jo; Whittamore, Kathy; Britton, John; Leonardi-Bee, Jo

    2016-07-01

    To determine the extent to which video games include alcohol and tobacco content and assess the association between playing them and alcohol and smoking behaviors in adolescent players in Great Britain. Assessment of substance content in the 32 bestselling UK video games of 2012/2013; online survey of adolescent playing of 17 games with substance content; and content analysis of the five most popular games. A total of 1,094 adolescents aged 11-17 years were included as participants. Reported presence of substance content in the 32 games; estimated numbers of adolescents who had played games; self-reported substance use; semiquantitative measures of substance content by interval coding of video game cut scenes. Nonofficial sources reported substance content in 17 (44 percent) games but none was reported by the official Pan European Game Information (PEGI) system. Adolescents who had played at least one game were significantly more likely ever to have tried smoking (adjusted odds ratio [OR] 2.70, 95 percent confidence interval [CI] 1.75-4.17) or consumed alcohol (adjusted OR 2.35, 95 percent CI 1.70-3.23). In the five most popular games, episodes of actual alcohol use, implied use, and paraphernalia occurred in 31 (14 percent), 81 (37 percent), and 41 (19 percent) of coded intervals, respectively. Actual tobacco use, implied use, and paraphernalia occurred in 32 (15 percent), 27 (12 percent), and 53 (24 percent) of intervals, respectively. Alcohol and tobacco content is common in the most popular video games but not reported by the official PEGI system. Content analysis identified substantial substance content in a sample of those games. Adolescents who play these video games are more likely to have experimented with tobacco and alcohol. PMID:27428030

  16. Alcohol and Tobacco Content in UK Video Games and Their Association with Alcohol and Tobacco Use Among Young People

    PubMed Central

    Whittamore, Kathy; Britton, John; Leonardi-Bee, Jo

    2016-01-01

    To determine the extent to which video games include alcohol and tobacco content and assess the association between playing them and alcohol and smoking behaviors in adolescent players in Great Britain. Assessment of substance content in the 32 bestselling UK video games of 2012/2013; online survey of adolescent playing of 17 games with substance content; and content analysis of the five most popular games. A total of 1,094 adolescents aged 11–17 years were included as participants. Reported presence of substance content in the 32 games; estimated numbers of adolescents who had played games; self-reported substance use; semiquantitative measures of substance content by interval coding of video game cut scenes. Nonofficial sources reported substance content in 17 (44 percent) games but none was reported by the official Pan European Game Information (PEGI) system. Adolescents who had played at least one game were significantly more likely ever to have tried smoking (adjusted odds ratio [OR] 2.70, 95 percent confidence interval [CI] 1.75–4.17) or consumed alcohol (adjusted OR 2.35, 95 percent CI 1.70–3.23). In the five most popular games, episodes of actual alcohol use, implied use, and paraphernalia occurred in 31 (14 percent), 81 (37 percent), and 41 (19 percent) of coded intervals, respectively. Actual tobacco use, implied use, and paraphernalia occurred in 32 (15 percent), 27 (12 percent), and 53 (24 percent) of intervals, respectively. Alcohol and tobacco content is common in the most popular video games but not reported by the official PEGI system. Content analysis identified substantial substance content in a sample of those games. Adolescents who play these video games are more likely to have experimented with tobacco and alcohol. PMID:27428030

  17. Priority depth fusion for the 2D to 3D conversion system

    NASA Astrophysics Data System (ADS)

    Chang, Yu-Lin; Chen, Wei-Yin; Chang, Jing-Ying; Tsai, Yi-Min; Lee, Chia-Lin; Chen, Liang-Gee

    2008-02-01

    To provide 3D content for upcoming 3D display devices, a real-time automatic depth-fusion 2D-to-3D conversion system is needed on the home multimedia platform. We propose a priority depth fusion algorithm within a 2D-to-3D conversion system that generates a depth map from most commercial video sequences. The results from different kinds of depth reconstruction methods are integrated into one depth map by the proposed priority depth fusion algorithm. The depth map and the original 2D image are then converted to stereo images for display on 3D devices. In this paper, a set of 2D-to-3D conversion algorithms is combined with the proposed depth fusion algorithm to show the improved results. With converted 3D content available, demand for 3D display devices will also increase. As the two technologies evolve together, the 3D-TV era will arrive all the sooner.
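
The abstract leaves the priority rule unspecified. One plausible reading, sketched here purely as an assumption rather than the authors' algorithm, is per-pixel selection among candidate depth maps by whichever reconstruction method reports the highest confidence at that pixel:

```python
import numpy as np

def priority_fuse(depths, confidences):
    """Per-pixel fusion of several candidate depth maps:
    each pixel takes the depth from whichever estimator
    reports the highest confidence there.

    depths, confidences: lists of H x W float arrays.
    """
    depths = np.stack(depths)            # (K, H, W)
    confidences = np.stack(confidences)  # (K, H, W)
    best = np.argmax(confidences, axis=0)
    return np.take_along_axis(depths, best[None], axis=0)[0]

# Two toy estimators: one reliable on the left half of the
# frame, the other on the right half.
d1 = np.full((2, 4), 1.0); c1 = np.zeros((2, 4)); c1[:, :2] = 1.0
d2 = np.full((2, 4), 2.0); c2 = np.zeros((2, 4)); c2[:, 2:] = 1.0
fused = priority_fuse([d1, d2], [c1, c2])
```

A real system would derive the confidences from cues such as motion, texture, or geometric priors, and smooth the fused map before stereo rendering.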

  18. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  19. Prevalence of Behavior Changing Strategies in Fitness Video Games: Theory-Based Content Analysis

    PubMed Central

    Hatkevich, Claire

    2013-01-01

    Background Fitness video games are popular, but little is known about their content. Because many contain interactive tools that mimic behavioral strategies from weight loss intervention programs, it is possible that differences in content could affect player physical activity and/or weight outcomes. There is a need for a better understanding of what behavioral strategies are currently available in fitness games and how they are implemented. Objective The purpose of this study was to investigate the prevalence of evidence-based behavioral strategies across fitness video games available for home use. Games available for consoles that used camera-based controllers were also contrasted with games available for a console that used handheld motion controllers. Methods Fitness games (N=18) available for three home consoles were systematically identified and play-tested by 2 trained coders for at least 3 hours each. In cases of multiple games from one series, only the most recently released game was included. The Sony PlayStation 3 and Microsoft Xbox360 were the two camera-based consoles, and the Nintendo Wii was the handheld motion controller console. A coding list based on a taxonomy of behavioral strategies was used to begin coding. Codes were refined in an iterative process based on data found during play-testing. Results The most prevalent behavioral strategies were modeling (17/18), specific performance feedback (17/18), reinforcement (16/18), caloric expenditure feedback (15/18), and guided practice (15/18). All games included some kind of feedback on performance accuracy, exercise frequency, and/or fitness progress. Action planning (scheduling future workouts) was the least prevalent of the included strategies (4/18). Twelve games included some kind of social integration, with nine of them providing options for real-time multiplayer sessions. Only two games did not feature any kind of reward. Games for the camera-based consoles (mean 12.89, SD 2.71) included a

  20. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products, and learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although far fewer display holograms are created now than in the 1990s, the holographic concept is spreading through all the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else must be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  1. Good clean fun? A content analysis of profanity in video games and its prevalence across game systems and ratings.

    PubMed

    Ivory, James D; Williams, Dmitri; Martins, Nicole; Consalvo, Mia

    2009-08-01

    Although violent video game content and its effects have been examined extensively by empirical research, verbal aggression in the form of profanity has received less attention. Building on preliminary findings from previous studies, an extensive content analysis of profanity in video games was conducted using a sample of the 150 top-selling video games across all popular game platforms (including home consoles, portable consoles, and personal computers). The frequency of profanity, both in general and across three profanity categories, was measured and compared to games' ratings, sales, and platforms. Generally, profanity was found in about one in five games and appeared primarily in games rated for teenagers or above. Games containing profanity, however, tended to contain it frequently. Profanity was not found to be related to games' sales or platforms. PMID:19514818

  2. The Use of Eye Tracking in Research on Video-Based Second Language (L2) Listening Assessment: A Comparison of Context Videos and Content Videos

    ERIC Educational Resources Information Center

    Suvorov, Ruslan

    2015-01-01

    Investigating how visuals affect test takers' performance on video-based L2 listening tests has been the focus of many recent studies. While most existing research has been based on test scores and self-reported verbal data, few studies have examined test takers' viewing behavior (Ockey, 2007; Wagner, 2007, 2010a). To address this gap, in the…

  3. A content based video retrieval method for surveillance and forensic applications

    NASA Astrophysics Data System (ADS)

    Vadakkeveedu, Kalyan; Xu, Peng; Fernandes, Ronald; Mayer, Richard J.

    2007-04-01

    The advances in video surveillance technology have led to the proliferation of surveillance video cameras for viewing areas of interest. Counter-terrorism and surveillance applications require video forensics capabilities such as querying and searching video data for events, people, or objects of interest. A human analyst may accurately spot a suspicious activity in a small segment of video. However, due to the large volume of data collected in real-time video surveillance, it is impractical for human analysts to watch or tag the entire video collection, as this can lead to human errors, lower throughput, and inconsistencies in the level of scrutiny. In this paper, we introduce an ontology-based video retrieval approach, which represents videos with object ontologies and event ontologies and annotates videos accordingly. We also describe a user-friendly interface for querying surveillance videos using event dictionaries. Our approach leverages the capabilities of ontologies in specifying knowledge at different levels and, in this way, provides flexibility to a user while forming a query. It is also capable of detecting undefined events, such as abnormal events not previously conceived.
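
    The annotate-then-query pipeline described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' system: the event labels, object classes, and the rule that an unrecognized label counts as a candidate abnormal event are all assumptions for the example.

    ```python
    from dataclasses import dataclass

    # Hypothetical event ontology: each event type lists the object classes it involves.
    # (Illustrative labels only -- not the ontology used by the authors.)
    EVENT_ONTOLOGY = {
        "loitering": {"person"},
        "unattended_object": {"person", "bag"},
        "vehicle_entry": {"vehicle", "gate"},
    }

    @dataclass
    class Annotation:
        """One annotated video segment: time span, event label, objects present."""
        start_s: float
        end_s: float
        event: str
        objects: set

    def query(annotations, event=None, objects=None):
        """Return segments matching an event label and/or a required set of objects."""
        hits = []
        for a in annotations:
            if event is not None and a.event != event:
                continue
            if objects is not None and not objects <= a.objects:
                continue
            hits.append(a)
        return hits

    def flag_undefined(annotations):
        """Segments whose label is outside the ontology: candidate abnormal events."""
        return [a for a in annotations if a.event not in EVENT_ONTOLOGY]

    annotations = [
        Annotation(10.0, 42.0, "loitering", {"person"}),
        Annotation(55.0, 63.0, "unattended_object", {"person", "bag"}),
        Annotation(70.0, 75.0, "unknown", {"person", "vehicle"}),
    ]

    print(len(query(annotations, objects={"bag"})))  # one segment contains a bag
    print(len(flag_undefined(annotations)))          # one event outside the ontology
    ```

    Because queries are formed against ontology terms rather than raw pixels, the same interface can serve coarse questions ("any vehicle events?") and fine ones ("unattended bags between 50 s and 70 s").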

  4. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge their effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities, and challenges for integrating visual information elements into 3D-TV content. This work should help improve current editing systems and identifies the need for future 3D-TV editing systems, e.g., live editing and real-time alignment of visual information into 3D footage.

  5. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  6. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  7. Biomedical-grade, high mannuronic acid content (BioMVM) alginate enhances the proteoglycan production of primary human meniscal fibrochondrocytes in a 3-D microenvironment

    PubMed Central

    Rey-Rico, Ana; Klich, Angelique; Cucchiarini, Magali; Madry, Henning

    2016-01-01

    Alginates are important hydrogels for meniscus tissue engineering as they support the meniscal fibrochondrocyte phenotype and proteoglycan production, the extracellular matrix (ECM) component chiefly responsible for its viscoelastic properties. Here, we systematically evaluated four biomedical- and two nonbiomedical-grade alginates for their capacity to provide the best three-dimensional (3-D) microenvironment and to support proteoglycan synthesis of encapsulated human meniscal fibrochondrocytes in vitro. Biomedical-grade, high mannuronic acid alginate spheres (BioLVM, BioMVM) were the most uniform in size, indicating an effect of the purity of alginate on the shape of the spheres. Interestingly, the purity of alginates did not affect cell viability. Of note, only fibrochondrocytes encapsulated in BioMVM alginate produced and retained significant amounts of proteoglycans. Following transplantation in an explant culture model, the alginate spheres containing fibrochondrocytes remained in close proximity with the meniscal tissue adjacent to the defect. The results reveal a promising role of BioMVM alginate to enhance the proteoglycan production of primary human meniscal fibrochondrocytes in a 3-D hydrogel microenvironment. These findings have significant implications for cell-based translational studies aiming at restoring lost meniscal tissue in regions containing high amounts of proteoglycans. PMID:27302206

  8. Longitudinal, Quantitative Monitoring of Therapeutic Response in 3D In Vitro Tumor Models with OCT for High-Content Therapeutic Screening

    PubMed Central

    Klein, O. J.; Jung, Y. K.; Evans, C. L.

    2013-01-01

    In vitro three-dimensional models of cancer have the ability to recapitulate many features of tumors found in vivo, including cell-cell and cell-matrix interactions, microenvironments that become hypoxic and acidic, and other barriers to effective therapy. These model tumors can be large, highly complex, heterogeneous, and undergo time-dependent growth and treatment response processes that are difficult to track and quantify using standard imaging tools. Optical coherence tomography is an optical ranging technique that is ideally suited for visualizing, monitoring, and quantifying the growth and treatment response dynamics occurring in these informative model systems. By optimizing both optical coherence tomography and 3D culture systems, it is possible to continuously and non-perturbatively monitor advanced in vitro models without the use of labels over the course of hours and days. In this article, we describe approaches and methods for creating and carrying out quantitative therapeutic screens with in vitro 3D cultures using optical coherence tomography to gain insights into therapeutic mechanisms and build more effective treatment regimens. PMID:24013042

  9. Content analysis of antismoking videos on YouTube: message sensation value, message appeals, and their relationships with viewer responses.

    PubMed

    Paek, Hye-Jin; Kim, Kyongseok; Hove, Thomas

    2010-12-01

    Focusing on several message features that are prominent in antismoking campaign literature, this content-analytic study examines 934 antismoking video clips on YouTube for the following characteristics: message sensation value (MSV) and three types of message appeal (threat, social and humor). These four characteristics are then linked to YouTube's interactive audience response mechanisms (number of viewers, viewer ratings and number of comments) to capture message reach, viewer preference and viewer engagement. The findings suggest the following: (i) antismoking messages are prevalent on YouTube, (ii) MSV levels of online antismoking videos are relatively low compared with MSV levels of televised antismoking messages, (iii) threat appeals are the videos' predominant message strategy and (iv) message characteristics are related to viewer reach and viewer preference. PMID:20923913

  10. Marking spatial parts within stereoscopic video images

    NASA Astrophysics Data System (ADS)

    Belz, Constance; Boehm, Klaus; Duong, Thanh; Kuehn, Volker; Weber, Martin

    1996-04-01

    The technology of stereoscopic imaging enables reliable online telediagnoses. Applications of telediagnosis include the field of medicine and telerobotics in general. To allow the participants in a telediagnosis to mark spatial parts within the stereoscopic video image, graphic tools and automatic mechanisms have to be provided. Marking spatial parts and objects inside a stereoscopic video image is a non-trivial interaction technique. The markings themselves have to be 3D elements rather than 2D markings, which would produce an alienating effect `in' the stereoscopic video image. A further problem to be tackled here is that the content of the stereoscopic video image is unknown. This is in contrast to 3D virtual reality scenes, which enable easy 3D interaction because all the objects and their positions within the 3D scene are known. The goals of our research comprised the development of new interaction paradigms and marking techniques in stereoscopic video images, as well as an investigation of input devices appropriate for this interaction task. We implemented these interaction techniques in a test environment, thereby integrating computer graphics into stereoscopic video images. To evaluate the new interaction techniques, a user test was carried out. The results of our research will be presented here.

  11. Movie Ratings and the Content of Adult Videos: The Sex-Violence Ratio.

    ERIC Educational Resources Information Center

    Yang, Ni; Linz, Daniel

    1990-01-01

    Quantifies sexual, violent, sexually violent, and prosocial behaviors in a sample of R-rated and X-rated videocassettes. Finds the predominant behavior in both X- and XXX-rated videos is sexual. Finds the predominant behavior in R-rated videos was violence followed by prosocial behavior. (RS)

  12. Content-Based Indexing and Teaching Focus Mining for Lecture Videos

    ERIC Educational Resources Information Center

    Lin, Yu-Tzu; Yen, Bai-Jang; Chang, Chia-Hu; Lee, Greg C.; Lin, Yu-Chih

    2010-01-01

    Purpose: The purpose of this paper is to propose an indexing and teaching focus mining system for lecture videos recorded in an unconstrained environment. Design/methodology/approach: By applying the proposed algorithms in this paper, the slide structure can be reconstructed by extracting slide images from the video. Instead of applying…

  13. Obesity in the new media: a content analysis of obesity videos on YouTube.

    PubMed

    Yoo, Jina H; Kim, Junghyun

    2012-01-01

    This study examines (1) how the topics of obesity are framed and (2) how obese persons are portrayed on YouTube video clips. The analysis of 417 obesity videos revealed that a newer medium like YouTube, similar to traditional media, appeared to assign responsibility and solutions for obesity mainly to individuals and their behaviors, although there was a tendency that some video categories have started to show other causal claims or solutions. However, due to the prevailing emphasis on personal causes and solutions, numerous YouTube videos had a theme of weight-based teasing, or showed obese persons engaging in stereotypical eating behaviors. We discuss a potential impact of YouTube videos on shaping viewers' perceptions about obesity and further reinforcing stigmatization of obese persons. PMID:21809934

  14. The Potential of Accelerating Early Detection of Autism through Content Analysis of YouTube Videos

    PubMed Central

    Fusaro, Vincent A.; Daniels, Jena; Duda, Marlena; DeLuca, Todd F.; D’Angelo, Olivia; Tamburello, Jenna; Maniscalco, James; Wall, Dennis P.

    2014-01-01

    Autism is on the rise, with 1 in 88 children receiving a diagnosis in the United States, yet the process for diagnosis remains cumbersome and time consuming. Research has shown that home videos of children can help increase the accuracy of diagnosis. However, the use of videos in the diagnostic process is uncommon. In the present study, we assessed the feasibility of applying a gold-standard diagnostic instrument to brief and unstructured home videos and tested whether video analysis can enable more rapid detection of the core features of autism outside of clinical environments. We collected 100 public videos from YouTube of children ages 1–15 with either a self-reported diagnosis of an ASD (N = 45) or not (N = 55). Four non-clinical raters independently scored all videos using one of the most widely adopted tools for behavioral diagnosis of autism, the Autism Diagnostic Observation Schedule-Generic (ADOS). The classification accuracy was 96.8%, with 94.1% sensitivity and 100% specificity; the inter-rater correlation for the behavioral domains on the ADOS was 0.88, and the diagnoses matched a trained clinician in all but 3 of 22 randomly selected video cases. Despite the diversity of videos and non-clinical raters, our results indicate that it is possible to achieve high classification accuracy, sensitivity, and specificity, as well as clinically acceptable inter-rater reliability, with non-clinical personnel. Our results also demonstrate the potential for video-based detection of autism in short, unstructured home videos and further suggest that at least a percentage of the effort associated with detection and monitoring of autism may be mobilized and moved outside of traditional clinical environments. PMID:24740236
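
    The accuracy, sensitivity, and specificity figures reported above follow from standard confusion-matrix arithmetic, sketched here with illustrative counts (the abstract reports rates, not the raw per-rater tallies, so the numbers below are assumptions chosen only to demonstrate the formulas):

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Standard confusion-matrix metrics for a binary screening task."""
        total = tp + fp + tn + fn
        return {
            "accuracy": (tp + tn) / total,   # all videos scored correctly
            "sensitivity": tp / (tp + fn),   # ASD videos correctly flagged
            "specificity": tn / (tn + fp),   # non-ASD videos correctly passed
        }

    # Illustrative counts only -- not the study's actual tallies.
    m = diagnostic_metrics(tp=42, fp=0, tn=55, fn=3)
    print({k: round(v, 3) for k, v in m.items()})
    # → {'accuracy': 0.97, 'sensitivity': 0.933, 'specificity': 1.0}
    ```

    Note that with no false positives, specificity is exactly 1.0 regardless of the other counts, which is how a study can report 100% specificity alongside sub-100% sensitivity.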

  15. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  16. FMOE-MR: content-driven multiresolution MPEG-4 fine grained scalable layered video encoding

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, S.; Luo, X.; Bhandarkar, S. M.; Li, K.

    2007-01-01

    The MPEG-4 Fine Grained Scalability (FGS) profile aims at scalable layered video encoding, in order to ensure efficient video streaming in networks with fluctuating bandwidths. In this paper, we propose a novel technique, termed FMOE-MR, which delivers significantly improved rate-distortion performance compared to existing MPEG-4 Base Layer encoding techniques. The video frames are re-encoded at high resolution in the semantically and visually important regions of the video (termed Features, Motion and Objects), which are defined using a mask (the FMO-Mask), and at low resolution in the remaining regions. The multiple-resolution re-rendering step is implemented such that subsequent MPEG-4 compression leads to low-bit-rate Base Layer video encoding. The Features, Motion and Objects Encoded-Multi-Resolution (FMOE-MR) scheme is an integrated approach that requires only encoder-side modifications and is transparent to the decoder. Further, since the FMOE-MR scheme incorporates "smart" video preprocessing, it requires no change in existing MPEG-4 codecs. As a result, it is straightforward to use the proposed FMOE-MR scheme with any existing MPEG codec, allowing great flexibility in implementation. In this paper, we describe and implement unsupervised and semi-supervised algorithms to create the FMO-Mask from a given video sequence using state-of-the-art computer vision algorithms.
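
    The core preprocessing step, re-rendering a frame at full resolution inside the FMO-Mask and at reduced resolution elsewhere, can be sketched as below. This is a toy grayscale version with an assumed block-averaging downsample, not the authors' implementation:

    ```python
    def downsample_block(frame, y, x, block):
        """Average over the block x block neighborhood containing (y, x),
        clipped to the frame borders."""
        ys = range(y - y % block, min(y - y % block + block, len(frame)))
        xs = range(x - x % block, min(x - x % block + block, len(frame[0])))
        vals = [frame[j][i] for j in ys for i in xs]
        return sum(vals) // len(vals)

    def fmo_rerender(frame, mask, block=2):
        """Keep masked (important) pixels at full resolution; block-average the rest,
        so the background compresses cheaply in the subsequent MPEG encode."""
        return [
            [frame[y][x] if mask[y][x] else downsample_block(frame, y, x, block)
             for x in range(len(frame[0]))]
            for y in range(len(frame))
        ]

    frame = [[10, 20, 30, 40],
             [50, 60, 70, 80],
             [15, 25, 35, 45],
             [55, 65, 75, 85]]
    mask = [[0, 0, 1, 1],   # right half of the top rows is "important"
            [0, 0, 1, 1],
            [0, 0, 0, 0],
            [0, 0, 0, 0]]
    out = fmo_rerender(frame, mask)
    print(out[0])  # → [35, 35, 30, 40]: background averaged, masked pixels kept
    ```

    The averaged background regions contain far less high-frequency detail, which is what lets a standard encoder spend its bit budget on the masked regions without any decoder-side changes.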

  17. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  18. Fast Mode Decision for 3D-HEVC Depth Intracoding

    PubMed Central

    Li, Nana; Wu, Qinggang

    2014-01-01

    The emerging international standard of high efficiency video coding based 3D video coding (3D-HEVC) is a successor to multiview video coding (MVC). In 3D-HEVC depth intracoding, the depth modeling mode (DMM) and the high efficiency video coding (HEVC) intraprediction modes are both employed to select the best coding mode for each coding unit (CU). This technique achieves the highest possible coding efficiency, but it results in an extremely large encoding time, which hinders the practical application of 3D-HEVC. In this paper, a fast mode decision algorithm based on the correlation between the texture video and the depth map is proposed to reduce the computational complexity of 3D-HEVC depth intracoding. Since the texture video and its associated depth map represent the same scene, there is a high correlation between the prediction modes of the texture video and the depth map. Therefore, we can skip some specific depth intraprediction modes rarely used in the related texture CU. Experimental results show that the proposed algorithm can significantly reduce the computational complexity of 3D-HEVC depth intracoding while maintaining coding efficiency. PMID:24963512
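
    The mode-skipping idea, pruning the depth intraprediction candidates using the co-located texture CU's mode, can be sketched as follows. The candidate sets, the angular window, and the handling of smooth regions are illustrative assumptions, not the paper's exact algorithm (which also covers DMMs):

    ```python
    def candidate_depth_modes(texture_mode, all_modes, window=2):
        """Prune depth intra candidates using the co-located texture CU's mode:
        always keep planar (0) and DC (1); for an angular texture mode (>= 2),
        additionally keep only nearby angular directions."""
        keep = {0, 1}
        if texture_mode >= 2:
            keep |= {m for m in all_modes if abs(m - texture_mode) <= window}
        return sorted(keep & set(all_modes))

    all_modes = list(range(35))                  # HEVC intra modes 0..34
    fast = candidate_depth_modes(26, all_modes)  # texture CU used angular mode 26
    print(len(fast), "of", len(all_modes), "modes tested")  # 7 of 35 modes tested
    ```

    The encoder then runs rate-distortion optimization only over the pruned list, which is where the complexity saving comes from: the cost of a wrong skip is bounded because planar and DC remain available as fallbacks.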

  19. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical, or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  20. Future-saving audiovisual content for Data Science: Preservation of geoinformatics video heritage with the TIB|AV-Portal

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Plank, Margret; Ziedorn, Frauke

    2015-04-01

    of Science and Technology. The web-based portal allows for extended search capabilities based on enhanced metadata derived by automated video analysis. By combining state-of-the-art multimedia retrieval techniques such as speech, text, and image recognition with semantic analysis, content-based access to videos at the segment level is provided. Further, by using the open standard Media Fragment Identifier (MFID), a citable Digital Object Identifier is displayed for each video segment. In addition to the continuously growing footprint of contemporary content, the importance of vintage audiovisual information needs to be considered: this paper showcases the successful application of the TIB|AV-Portal in the preservation and provision of a newly discovered version of a GRASS GIS promotional video produced by the US Army Construction Engineering Research Laboratory (US-CERL) in 1987. The video provides insight into the constraints of the very early days of the GRASS GIS project, the oldest active Free and Open Source Software (FOSS) GIS project, which has been active for over thirty years. GRASS itself has turned into a collaborative scientific platform, a repository of scientific peer-reviewed code, and an algorithm/knowledge hub for future generations of scientists [1]. This is a reference case for future preservation activities regarding semantic-enhanced Web 2.0 content from geospatial software projects within academia and beyond. References: [1] Chemin, Y., Petras, V., Petrasova, A., Landa, M., Gebbert, S., Zambelli, P., Neteler, M., Löwe, P.: GRASS GIS: a peer-reviewed scientific platform and future research repository, Geophysical Research Abstracts, Vol. 17, EGU2015-8314-1, 2015 (submitted)

  1. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  2. Extra Dimensions: 3D in PDF Documentation

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2012-12-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, it does provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  3. Changes in gene expression, protein content and morphology of chondrocytes cultured on a 3D Random Positioning Machine and 2D rotating clinostat

    NASA Astrophysics Data System (ADS)

    Aleshcheva, Ganna; Hauslage, Jens; Hemmersbach, Ruth; Infanger, Manfred; Bauer, Johann; Grimm, Daniela; Sahana, Jayashree

    Chondrocytes are the only cell type found in human cartilage consisting of proteoglycans and type II collagen. Several studies on chondrocytes cultured either in Space or on a ground-based facility for simulation of microgravity revealed that these cells are very resistant to adverse effects and stress induced by altered gravity. Tissue engineering of chondrocytes is a new strategy for cartilage regeneration. Using a three-dimensional Random Positioning Machine and a 2D rotating clinostat, devices designed to simulate microgravity on Earth, we investigated the early effects of microgravity exposure on human chondrocytes of six different donors after 30 min, 2 h, 4 h, 16 h, and 24 h and compared the results with the corresponding static controls cultured under normal gravity conditions. As little as 30 min of exposure resulted in increased expression of several genes responsible for cell motility, structure and integrity (beta-actin); control of cell growth, cell proliferation, cell differentiation and apoptosis; and cytoskeletal components such as microtubules (beta-tubulin) and intermediate filaments (vimentin). After 4 hours disruptions in the vimentin network were detected. These changes were less dramatic after 16 hours, when human chondrocytes appeared to reorganize their cytoskeleton. However, the gene expression and protein content of TGF-β1 was enhanced for 24 h. Based on the results achieved, we suggest that chondrocytes exposed to simulated microgravity seem to change their extracellular matrix production behavior while they rearrange their cytoskeletal proteins prior to forming three-dimensional aggregates.

  4. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and a single projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertising markets.

  5. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one class of advanced displays, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED backlight unit (BLU) since Samsung launched the world's first 3D TV based on PDP. However, the panel's line-by-line data scanning and the response characteristics of the liquid crystal cause interference between frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through the LCD driving and backlight control of an FHD 3D LCD TV.
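
    The crosstalk mechanism can be illustrated with a toy first-order model of the liquid-crystal transition: if the shutter (or backlight) opens before the LC has fully settled to the new frame, a residual fraction of the previous frame leaks through. The exponential model, time constants, and timing values below are assumptions for illustration, not measurements or methods from the paper:

    ```python
    import math

    def crosstalk_pct(tau_ms, open_at_ms=6.0):
        """Percentage of the previous frame still visible when the shutter opens,
        modeling the liquid-crystal transition as a first-order exponential settle
        with time constant tau_ms, opening open_at_ms after the transition starts."""
        return 100.0 * math.exp(-open_at_ms / tau_ms)

    # Faster LC response (smaller tau) leaves less residual image, i.e. less crosstalk.
    for tau in (1.0, 2.0, 4.0):
        print(f"tau = {tau:.0f} ms -> crosstalk {crosstalk_pct(tau):.1f}%")
    ```

    The model also shows why backlight control helps: delaying the backlight turn-on (larger `open_at_ms`) trades a little brightness for an exponentially smaller residual of the wrong eye's frame.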

  6. Enabling Access and Enhancing Comprehension of Video Content for Postsecondary Students with Intellectual Disability

    ERIC Educational Resources Information Center

    Evmenova, Anya S.; Behrmann, Michael M.

    2014-01-01

    There is a great need for new innovative tools to integrate individuals with intellectual disability into educational experiences. This multiple baseline study examined the effects of various adaptations for improving factual and inferential comprehension of non-fiction videos by six postsecondary students with intellectual disability. Video…

  7. Designing Video Narratives to Contextualize Content for ESL Learners: A Design Process Case Study

    ERIC Educational Resources Information Center

    South, Joseph B.; Gabbitas, Bruce; Merrill, Paul F.

    2008-01-01

    In this paper we discuss how the Brigham Young University Technology Assisted Language Learning Group (BYU TALL Group) develops video-based dramatic narratives to increase the amount of context we provide to English as a second language (ESL) learners. First, we discuss the problem of decontextualization in education, the contextualism…

  8. The Role of Violent Video Game Content in Adolescent Development: Boys' Perspectives

    ERIC Educational Resources Information Center

    Olson, Cheryl K.; Kutner, Lawrence A.; Warner, Dorothy E.

    2008-01-01

    Numerous policies have been proposed at the local, state, and national level to restrict youth access to violent video and computer games. Although studies are cited to support policies, there is no published research on how children perceive the uses and influence of violent interactive games. The authors conduct focus groups with 42 boys ages 12…

  9. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  10. The impact of including spatially longitudinal heterogeneities of vessel oxygen content and vascular fraction in 3D tumor oxygenation models on predicted radiation sensitivity

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-04-15

    fraction and D99, necrotic fractions ranging from 0% to 97%, and a maximal D99 increment of 57%. Only minor differences were observed between different vessel architectures, i.e., CVF vs VVF. In the smallest tumor with a low necrotic fraction, the D99 strictly decreased with increasing blood velocity. Increasing blood velocity also decreased the necrotic fraction in all tumor sizes. VF had the most profound influence on both the necrotic fraction and on D99. Conclusions: Our present analysis of necrotic formation and the impact of tumor oxygenation on D99 demonstrated the importance of including longitudinal variations in vessel oxygen content in tumor models. For small tumors, radiosensitivity was particularly dependent on VF and slightly dependent on the blood velocity and vessel arrangement. These dependences decreased with increasing tumor size, because the necrotic fraction also increased, thereby decreasing the number of viable tumor cells that required sterilization. The authors anticipate that the present model will be useful for estimating tumor oxygenation and radiation response in future detailed studies.

  11. Dynamic Heterogeneity of DNA Methylation and Hydroxymethylation in Embryonic Stem Cell Populations Captured by Single-Cell 3D High-Content Analysis

    PubMed Central

    Tajbakhsh, Jian; Stefanovski, Darko; Tang, George; Wawrowsky, Kolja; Liu, Naiyou; Fair, Jeffrey H.

    2015-01-01

    Cell-surface markers and transcription factors are being used in the assessment of stem cell fate and therapeutic safety, but display significant variability in stem cell cultures. We assessed nuclear patterns of 5-hydroxymethylcytosine (5hmC, associated with pluripotency), a second important epigenetic mark, and its combination with 5-methylcytosine (5mC, associated with differentiation), also in comparison to more established markers of pluripotency (Oct-4) and endodermal differentiation (FoxA2, Sox17) in mouse embryonic stem cells (mESC) over a ten-day differentiation course in vitro: by means of confocal and super-resolution imaging together with high-content analysis, an essential tool in single-cell screening. In summary: 1) We did not measure any significant correlation of putative markers with global 5mC or 5hmC. 2) While average Oct-4 levels stagnated on a cell-population base (0.015 lnIU per day), Sox17 and FoxA2 increased 22-fold and 3-fold faster, respectively (Sox17:0.343 lnIU/day; FoxA2: 0.046 lnIU/day). In comparison, DNA global methylation levels increased 4-fold faster (0.068 lnIU/day), and global hydroxymethylation declined at 0.046 lnIU/day, both with a better explanation of the temporal profile. 3) This progression was concomitant with the occurrence of distinct nuclear codistribution patterns that represented a heterogeneous spectrum of states in differentiation; converging to three major coexisting 5mC/5hmC phenotypes by day 10: 5hmC+/5mC−, 5hmC+/5mC+, and 5hmC−/5mC+ cells. 4) Using optical nanoscopy we could delineate the respective topologies of 5mC/5hmC colocalization in subregions of nuclear DNA: in the majority of 5hmC+/5mC+ cells 5hmC and 5mC predominantly occupied mutually exclusive territories resembling euchromatic and heterochromatic regions, respectively. Simultaneously, in a smaller subset of cells we observed a tighter colocalization of the two cytosine variants, presumably delineating chromatin domains in remodeling. We

  12. Optic flow aided navigation and 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Rollason, Malcolm

    2013-10-01

    An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
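
    The quadratic error growth reported above can be illustrated with a minimal dead-reckoning sketch (not the paper's implementation; the bias value is a hypothetical figure chosen for illustration):

```python
# Illustrative sketch (not the paper's code): double-integrating a constant
# accelerometer bias b gives a position error of roughly 0.5 * b * t**2,
# i.e. quadratic growth in time, which optic-flow aiding can keep in check.

def unaided_position_error(bias, duration, dt=0.01):
    """Dead-reckoned position error (m) from a constant accelerometer bias."""
    v = 0.0  # accumulated velocity error (m/s)
    p = 0.0  # accumulated position error (m)
    for _ in range(int(duration / dt)):
        v += bias * dt
        p += v * dt
    return p

# A modest 0.05 m/s^2 bias already produces ~40 m of error after 40 s,
# the same order as the ~50 m drift reported above.
err_40s = unaided_position_error(0.05, 40.0)
```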

  13. Can Clinical Scenario Videos Improve Dental Students' Perceptions of the Basic Sciences and Ability to Apply Content Knowledge?

    PubMed

    Miller, Cynthia Jayne; Metz, Michael James

    2015-12-01

    Dental students often have difficulty understanding the importance of basic science classes, such as physiology, for their future careers. To help alleviate this problem, the aim of this study was to create and evaluate a series of video modules using simulated patients and custom-designed animations that showcase medical emergencies in the dental practice. First-year students in a dental physiology course formatively assessed their knowledge using embedded questions in each of the three videos; 108 to 114 of the total 120 first-year students answered the questions, for a 90-95% response rate. These responses indicated that while the students could initially recognize the cause of the medical emergency, they had difficulty applying their knowledge of physiology to the scenario. In two of the three videos, students drastically improved their ability to answer high-level clinical questions by the conclusion of the video. Additionally, compared to the previous year of the course, there was a significant improvement in unit exam scores on clinically related questions (a 6.2% increase). Surveys were administered to the first-year students who participated in the video modules and to fourth-year students who had completed the course prior to the implementation of any clinical material. The response rate was 96% (115/120) for the first-year students and 57% (68/120) for the fourth-year students. The first-year students indicated a more positive perception of the physiology course and its importance for success on board examinations and in their dental career than the fourth-year students. The students perceived that the most positive aspects of the modules were the clear applications of physiology to real-life dental situations, the interactive nature of the videos, and the improved student comprehension of course concepts. These results suggest that online modules may be used successfully to improve students' perceptions of the basic sciences and enhance their ability to

  14. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  15. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  16. Preschoolers' Recall of Science Content from Educational Videos Presented with and without Songs

    ERIC Educational Resources Information Center

    Schechter, Rachel L.

    2013-01-01

    This experimental investigation evaluated the impact of educational songs on a child's ability to recall scientific content from an educational television program. Preschoolers' comprehension of the educational content was examined by measuring children's ability to recall the featured science content (the function of a pulley and…

  17. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  18. A novel 3D energetic MOF of high energy content: synthesis and superior explosive performance of a Pb(ii) compound with 5,5'-bistetrazole-1,1'-diolate.

    PubMed

    Shang, Yu; Jin, Bo; Peng, Rufang; Liu, Qiangqiang; Tan, Bisheng; Guo, Zhicheng; Zhao, Jun; Zhang, Qingchun

    2016-09-21

    The development of high-performance insensitive energetic materials is important because of the increasing demand for these materials in military and civilian applications. A novel 3D energetic metal-organic framework (MOF) of exceptionally high energy content, [Pb(BTO)(H2O)]n, where BTO represents 5,5'-bistetrazole-1,1'-diolate, was synthesized and structurally characterized by single crystal X-ray diffraction, revealing a three-dimensional parallelogram porous framework. The thermal stability and energetic properties were determined: the compound exhibits good thermostability (Td = 309.0 °C), an excellent detonation pressure (P) of 53.06 GPa, a detonation velocity (D) of 9.204 km s(-1), and acceptable impact sensitivity (IS = 7.5 J). Notably, the complex possesses a higher density than any previously reported energetic MOF. These results highlight the new MOF as a potential energetic material. PMID:27518537

  19. Adult and adolescent exposure to tobacco and alcohol content in contemporary YouTube music videos in Great Britain: a population estimate

    PubMed Central

    Cranwell, Jo; Opazo-Breton, Magdalena; Britton, John

    2016-01-01

    Background We estimate exposure of British adults and adolescents to tobacco and alcohol content in a sample of popular YouTube music videos. Methods British viewing figures were generated from 2 representative online national surveys of adult and adolescent viewing of the 32 most popular videos containing such content. 2068 adolescents aged 11–18 years (1010 boys, 1058 girls) and 2232 adults aged 19+ years (1052 male, 1180 female) completed the surveys. We used the number of 10 s intervals containing tobacco or alcohol content in the 32 videos to estimate the number of impressions. We extrapolated gross and per capita impressions for the British population from census data and estimated the numbers of adults and adolescents who had ever watched the sampled videos. Results From video release to the point of survey, the videos delivered an estimated 1006 million gross impressions of alcohol (95% CI 748 to 1264 million) and 203 million of tobacco (95% CI 151 to 255 million) to the British population. Per capita exposure was around 5 times higher for alcohol than for tobacco, and nearly 4 times higher in adolescents than in adults: adolescents were exposed to an average of 52.1 (95% CI 43.4 to 60.9) alcohol and 10.5 (95% CI 8.8 to 12.3) tobacco impressions, versus 14.1 (95% CI 10.2 to 18.1) and 2.9 (95% CI 2.1 to 3.6) for adults. Exposure rates were higher in girls than in boys. Conclusions YouTube music videos deliver millions of gross impressions of alcohol and tobacco content. Adolescents are exposed much more than adults. Music videos are a major global medium of exposure to such content. PMID:26767404
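
    The impression arithmetic described in the Methods can be sketched as follows (the per-video interval counts, viewer estimates, and population figure below are hypothetical illustrations, not the study's data):

```python
# Hedged sketch of the impression arithmetic: one impression = one viewer
# watching one 10-second interval that contains tobacco or alcohol content.
# All numbers below are hypothetical illustrations, not the study's data.

def gross_impressions(intervals_per_video, viewers_per_video):
    """Sum impressions over videos: content intervals x estimated viewers."""
    return sum(n * v for n, v in zip(intervals_per_video, viewers_per_video))

# Three hypothetical videos:
intervals = [12, 5, 20]                        # 10-s intervals with content
viewers = [4_000_000, 1_500_000, 2_500_000]    # estimated British viewers
total = gross_impressions(intervals, viewers)  # 105,500,000 gross impressions

# Dividing by a population estimate yields per capita exposure.
per_capita = total / 5_500_000                 # hypothetical subgroup size
```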

  20. Content-aware video quality assessment: predicting human perception of quality using peak signal to noise ratio and spatial/temporal activity

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, B.; Niño-Castañeda, J.; Platiša, L.; Philips, W.

    2015-03-01

    Since the end-user of video-based systems is often a human observer, predicting human perception of quality (HPoQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures, one problem is their lack of generalizability, mainly due to the strong dependency between HPoQ and video content. Although this problem is well known, few existing methods directly account for the influence of video content on HPoQ. This paper proposes a new method to predict HPoQ by using simple distortion measures and introducing video content features into their computation. Our methodology is based on analyzing the level of spatio-temporal activity and combining HPoQ content-related parameters with simple distortion measures. Our results show that even very simple distortion measures such as PSNR, combined with simple spatio-temporal activity measures, lead to good results. Results over four public video quality databases show that the proposed methodology, while faster and simpler, is competitive with current state-of-the-art methods: correlations between objective and subjective assessment are higher than 80%, and the method is only about two times slower than PSNR.
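
    A minimal sketch of the kinds of measures involved, assuming simple gradient- and frame-difference-based activity features (the paper's actual feature definitions and their combination with PSNR may differ):

```python
import numpy as np

# Minimal sketch of the kinds of measures involved (assumed simplifications;
# the paper's actual feature definitions and their combination may differ).

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between two same-size frames, in dB."""
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def spatial_activity(frame):
    """Crude edge energy: RMS of horizontal plus vertical differences."""
    f = frame.astype(float)
    return (np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
            + np.sqrt(np.mean(np.diff(f, axis=0) ** 2)))

def temporal_activity(prev_frame, frame):
    """RMS difference between consecutive frames."""
    return np.sqrt(np.mean((frame.astype(float) - prev_frame.astype(float)) ** 2))
```

The content-aware step would then weight or rescale PSNR by such activity features; in the paper the exact mapping is fitted against subjective quality scores, which this sketch does not reproduce.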

  1. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  2. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
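
    For a 3×3 polarization (coherency) matrix R, the indices described above can be computed as in this sketch, assuming the common eigenvalue-based definitions (P1 = λ1 − λ2, P2 = λ1 + λ2 − 2λ3, with λ1 ≥ λ2 ≥ λ3 the trace-normalized eigenvalues):

```python
import numpy as np

# Sketch of eigenvalue-based indices of polarimetric purity for a 3x3
# coherency matrix R (assumed definitions, following Gil's formulation):
#   P1 = l1 - l2          (polarization-like index)
#   P2 = l1 + l2 - 2*l3   (related to directionality)
#   P  = sqrt((3*P1**2 + P2**2) / 4)   overall degree of polarimetric purity,
#                                      a weighted quadratic average
# where l1 >= l2 >= l3 are the eigenvalues of R normalized by its trace.

def purity_indices(R):
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending
    lam = lam / lam.sum()                       # normalize by the trace
    p1 = lam[0] - lam[1]
    p2 = lam[0] + lam[1] - 2.0 * lam[2]
    return p1, p2, np.sqrt((3.0 * p1 ** 2 + p2 ** 2) / 4.0)
```

A fully polarized beam (R proportional to diag(1, 0, 0)) yields (1, 1, 1), while fully unpolarized 3D light (R proportional to I/3) yields (0, 0, 0).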

  3. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, making modifications more readily available. It is our hope that the format used here for magnetic fields can serve not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  4. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  5. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  6. Using YouTube Videos as a Primer to Affect Academic Content Retention

    ERIC Educational Resources Information Center

    Duverger, Philippe; Steffes, Erin M.

    2012-01-01

    College students today watch more content, academic or not, on the Internet than on any other media. Consequently, the authors argue that using any of these media, especially YouTube.com in particular, is an effective way to not only reach students, but also capture their attention and interest while increasing retention of academic content. Using…

  7. Sexually explicit media on the internet: a content analysis of sexual behaviors, risk, and media characteristics in gay male adult videos.

    PubMed

    Downing, Martin J; Schrimshaw, Eric W; Antebi, Nadav; Siegel, Karolynn

    2014-05-01

    Recent research suggests that viewing sexually explicit media (SEM), i.e., adult videos, may influence sexual risk taking among men who have sex with men. Despite this evidence, very little is known about the content of gay male SEM on the Internet, including the prevalence of sexual risk behaviors and their relation to video- and performer-characteristics, viewing frequency, and favorability. The current study content analyzed 302 sexually explicit videos featuring male same-sex performers that were posted to five highly trafficked adult-oriented websites. Findings revealed that gay male SEM on the Internet features a variety of conventional and nonconventional sexual behaviors. The prevalence of unprotected anal intercourse (UAI) was substantial (34%) and virtually the same as the prevalence of anal sex with a condom (36%). The presence of UAI was not associated with video length, amateur production, number of video views, favorability, or website source. However, the presence of other potentially high-risk behaviors (e.g., ejaculation in the mouth, and ejaculation on/in/rubbed into the anus) was associated with longer videos, more views, and group sex videos (three or more performers). The findings of high levels of sexual risk behavior, and the fact that there was virtually no difference in the prevalence of anal sex with and without a condom in gay male SEM, have important implications for HIV prevention efforts, future research on the role of SEM in sexual risk taking, and public health policy. PMID:23733156

  8. Sexually Explicit Media on the Internet: A Content Analysis of Sexual Behaviors, Risk, and Media Characteristics in Gay Male Adult Videos

    PubMed Central

    Downing, Martin J.; Schrimshaw, Eric W.; Antebi, Nadav; Siegel, Karolynn

    2013-01-01

    Recent research suggests that viewing sexually explicit media (SEM), i.e., adult videos, may influence sexual risk taking among men who have sex with men (MSM). Despite this evidence, very little is known about the content of gay male SEM on the Internet, including the prevalence of sexual risk behaviors and their relation to video- and performer-characteristics, viewing frequency, and favorability. The current study content analyzed 302 sexually explicit videos featuring male same-sex performers that were posted to five highly trafficked adult-oriented websites. Findings revealed that gay male SEM on the Internet features a variety of conventional and nonconventional sexual behaviors. The prevalence of unprotected anal intercourse (UAI) was substantial (34%) and virtually the same as the prevalence of anal sex with a condom (36%). The presence of UAI was not associated with video length, amateur production, number of video views, favorability, or website source. However, the presence of other potentially high-risk behaviors (e.g., ejaculation in the mouth, and ejaculation on/in/rubbed into the anus) was associated with longer videos, more views, and group sex videos (three or more performers). The findings of high levels of sexual risk behavior, and the fact that there was virtually no difference in the prevalence of anal sex with and without a condom in gay male SEM, have important implications for HIV prevention efforts, future research on the role of SEM in sexual risk taking, and public health policy. PMID:23733156

  9. Fast 3D shape measurements using laser speckle projection

    NASA Astrophysics Data System (ADS)

    Schaffer, Martin; Grosse, Marcus; Harendt, Bastian; Kowarschik, Richard

    2011-05-01

    3D measurement setups based on structured light projection are widely used in many industrial applications. Thanks to intense research in the past, their accuracy is comparatively high while equipment costs remain relatively low. But to meet the higher acquisition rates demanded by industry, especially for chain assembly lines, there are still hurdles to overcome in accelerating 3D measurements while retaining accuracy. We developed a projection technique that uses laser speckles to enable fast 3D measurements with statistically structured light patterns. In combination with a temporal correlation technique, dense and accurate 3D reconstructions at nearly video rate can be achieved.
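
    The temporal correlation idea can be sketched as follows: under N projected speckle patterns every camera pixel records an N-sample intensity code, and correspondences between the two cameras are found by maximizing the normalized correlation of these codes (a simplified illustration, not the authors' implementation):

```python
import numpy as np

# Simplified illustration of temporal correlation matching: a pixel in
# camera 1 is matched to the candidate pixel in camera 2 (e.g. along the
# epipolar line) whose temporal intensity code is most correlated with its own.

def normalized_corr(a, b):
    """Zero-mean normalized correlation of two 1-D intensity codes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(np.dot(a, b) / denom)

def best_match(code, candidate_codes):
    """Index of the candidate temporal code most correlated with `code`."""
    return int(np.argmax([normalized_corr(code, c) for c in candidate_codes]))
```

Because the correlation is normalized, the match is insensitive to per-pixel gain and offset differences between the two cameras.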

  10. Video games.

    PubMed

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values. PMID:16111624

  11. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  12. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  13. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC), developed by the joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the 3D-HEVC test model, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency, at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to use the correlations between the depth map and motion activity to identify regions where variable-size CU and DE are needed, and to enable them only in those regions. Experimental results show that the proposed algorithm saves about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
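
    The basic region-gating idea can be sketched as below; the activity measures and thresholds are hypothetical placeholders, not the algorithm's actual criteria:

```python
# Hedged sketch of the gating idea only; the activity measures and thresholds
# below are hypothetical placeholders, not the algorithm's actual criteria.

def needs_full_search(depth_variance, motion_activity,
                      depth_thresh=25.0, motion_thresh=4.0):
    """Evaluate small CUs and disparity estimation only for complex regions."""
    return depth_variance > depth_thresh or motion_activity > motion_thresh

# Smooth depth + low motion: keep the large CU and skip DE entirely.
skip_region = not needs_full_search(depth_variance=3.0, motion_activity=0.5)
```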

  14. Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video

    NASA Astrophysics Data System (ADS)

    Banitalebi-Dehkordi, Amin; Nasiopoulos, Eleni; Pourazad, Mahsa T.; Nasiopoulos, Panos

    2016-01-01

    Visual attention models (VAMs) predict the location of image or video regions that are most likely to attract human attention. Although saliency detection is well explored for two-dimensional (2-D) image and video content, only a few attempts have been made to design three-dimensional (3-D) saliency prediction models. Newly proposed 3-D VAMs have to be validated over large-scale video saliency prediction datasets that also contain eye-tracking results. Several eye-tracking datasets are publicly available for 2-D image and video content; in the case of 3-D, however, the research community still needs large-scale video saliency datasets for validating different 3-D VAMs. We introduce a large-scale dataset containing eye-tracking data collected from 24 subjects in a free-viewing test of 61 stereoscopic 3-D videos (and the 2-D versions of those). We evaluate the performance of the existing saliency detection methods over the proposed dataset. In addition, we created an online benchmark for validating the performance of existing 2-D and 3-D VAMs and facilitating the addition of new VAMs to the benchmark. Our benchmark currently contains 50 different VAMs.

  15. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.

  16. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    PubMed

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842
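
    The reported inter-coder reliability statistic, Cohen's kappa, can be computed for two coders as in this small sketch:

```python
from collections import Counter

# Small sketch of the inter-coder reliability statistic reported above:
# Cohen's kappa for one categorical item coded by two raters.

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: product of the two coders' marginal label frequencies.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n)
                   for l in set(freq_a) | set(freq_b))
    return (observed - expected) / (1.0 - expected)

# Two coders rating 4 games on a yes/no item, disagreeing on one game:
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # 0.5
```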

  17. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    PubMed Central

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drug and alcohol use among adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  18. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  19. Improving student learning via mobile phone video content: Evidence from the BridgeIT India project

    NASA Astrophysics Data System (ADS)

    Wennersten, Matthew; Quraishy, Zubeeda Banu; Velamuri, Malathi

    2015-08-01

    Past efforts invested in computer-based education technology interventions have generated little evidence of affordable success at scale. This paper presents the results of a mobile phone-based intervention conducted in the Indian states of Andhra Pradesh and Tamil Nadu in 2012-13. The BridgeIT project provided a pool of audio-visual learning materials organised in accordance with a system of syllabi pacing charts. Teachers of Standard 5 and 6 English and Science classes were notified of the availability of new videos via text messages (SMS), which they downloaded onto their phones using an open-source application and showed, with suggested activities, to students on a TV screen using a TV-out cable. In their evaluation of this project, the authors of this paper found that the test scores of children who experienced the intervention improved by 0.36 standard deviations in English and 0.98 standard deviations in Science in Andhra Pradesh, relative to students in similar classrooms who did not experience the intervention. Differences between treatment and control schools in Tamil Nadu were less marked. The intervention was also cost-effective, relative to other computer-based interventions. Based on these results, the authors argue that it is possible to use mobile phones to produce a strong positive and statistically significant effect in terms of teaching and learning quality across a large number of classrooms in India at a lower cost per student than past computer-based interventions.
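    The reported gains are standardized effect sizes: test-score differences expressed in standard-deviation units. A Cohen's-d-style sketch using a simple pooled-SD estimator (the study's exact, likely regression-adjusted, estimator may differ; the scores below are invented):

    ```python
    from statistics import mean, stdev

    def standardized_effect(treatment, control):
        """Difference in mean scores between treatment and control,
        divided by the pooled sample standard deviation -- a
        Cohen's-d-style standardized effect size."""
        n1, n2 = len(treatment), len(control)
        s1, s2 = stdev(treatment), stdev(control)
        pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                  / (n1 + n2 - 2)) ** 0.5
        return (mean(treatment) - mean(control)) / pooled

    # Hypothetical test scores for two small classrooms:
    t = [62, 70, 75, 68, 71]
    c = [60, 64, 66, 59, 61]
    print(round(standardized_effect(t, c), 2))  # → 1.82
    ```

    On this scale, the paper's 0.98 SD gain in Science means the average treated student scored nearly one control-group standard deviation above the average untreated student.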

  20. Identification of spatially corresponding imagery using content-based image retrieval in the context of UAS video exploitation

    NASA Astrophysics Data System (ADS)

    Brüstle, Stefan; Manger, Daniel; Mück, Klaus; Heinze, Norbert

    2014-06-01

    For many tasks in the fields of reconnaissance and surveillance it is important to know the spatial location represented by the imagery to be exploited. A task involving the assessment of changes, e.g. the appearance or disappearance of an object of interest at a certain location, can typically not be accomplished without spatial location information associated with the imagery. Often, such georeferenced imagery is stored in an archive enabling the user to query for the data with respect to its spatial location. Thus, the user is able to effectively find spatially corresponding imagery to be used for change detection tasks. In the field of exploitation of video taken from unmanned aerial systems (UAS), spatial location data is usually acquired using a GPS receiver, together with an INS device providing the sensor orientation, both integrated in the UAS. If during a flight valid GPS data becomes unavailable for a period of time, e.g. due to sensor malfunction, transmission problems or jamming, the imagery gathered during that time is not applicable for change detection tasks based merely on its georeference. Furthermore, GPS and INS inaccuracy together with a potentially poor knowledge of ground elevation can also render location information inapplicable. On the other hand, change detection tasks can be hard to accomplish even if imagery is well georeferenced as a result of occlusions within the imagery, due to e.g. clouds or fog, or image artefacts, due to e.g. transmission problems. In these cases a merely georeference based approach to find spatially corresponding imagery can also be inapplicable. In this paper, we present a search method based on the content of the images to find imagery spatially corresponding to given imagery independent from georeference quality. Using methods from content-based image retrieval, we build an image database which allows for querying even large imagery archives efficiently. 
We further evaluate the benefits of this method in the
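The content-based lookup described above can be sketched as a nearest-neighbor query over per-frame feature vectors (the descriptors and index structure here are illustrative stand-ins, not the paper's actual retrieval machinery):

    ```python
    import math

    def euclidean(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    class ImageIndex:
        """Toy content-based index: each archived frame is stored as a
        global feature vector (e.g. a color or gradient histogram --
        illustrative only)."""

        def __init__(self):
            self.entries = []  # (frame_id, feature_vector)

        def add(self, frame_id, features):
            self.entries.append((frame_id, features))

        def query(self, features, k=3):
            """Return the k archive frames most similar in content --
            usable even when the query frame has no valid georeference."""
            ranked = sorted(self.entries,
                            key=lambda e: euclidean(e[1], features))
            return [frame_id for frame_id, _ in ranked[:k]]

    idx = ImageIndex()
    idx.add("flight1_f0421", [0.9, 0.1, 0.3])
    idx.add("flight1_f0999", [0.1, 0.8, 0.7])
    idx.add("flight2_f0107", [0.85, 0.15, 0.35])
    print(idx.query([0.88, 0.12, 0.32], k=2))
    ```

    A real system would replace the linear scan with an approximate index to keep queries against large imagery archives efficient, which is the scalability point the abstract makes.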

  1. Pedagogical Content Knowledge as Reflected in Teacher-Student Interactions: Analysis of Two Video Cases

    ERIC Educational Resources Information Center

    Alonzo, Alicia C.; Kobarg, Mareike; Seidel, Tina

    2012-01-01

    Despite the theorized centrality of pedagogical content knowledge (PCK) for teaching, we have little evidence of the relationship between PCK and students' learning and know relatively little about how to help teachers to develop PCK. This study is a preliminary attempt to address these gaps in our knowledge of PCK through exploration of two…

  2. Research-Based Strategies for Teaching Content to Students with Intellectual Disabilities: Adapted Videos

    ERIC Educational Resources Information Center

    Evmenova, Anna S.; Behrmann, Michael M.

    2011-01-01

    Teachers are always seeking any visual and/or auditory supports to facilitate students' comprehension and acquisition of difficult concepts associated with academic content. Such supports are even more important for students with intellectual disabilities who regardless of their abilities and needs are required to have access and active…

  3. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high quality homographies between pairs of imagers to develop a global optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically-diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
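    The calibration above is built on pairwise homographies between imagers. As background, a minimal sketch of applying a 3x3 homography to image points in homogeneous coordinates (the matrix below is a made-up translation for illustration, not a calibration result):

    ```python
    def apply_homography(H, points):
        """Map 2D points through a 3x3 homography H (row-major nested
        lists) using homogeneous coordinates: x' = Hx, then divide
        through by the third coordinate w."""
        out = []
        for x, y in points:
            xh = H[0][0] * x + H[0][1] * y + H[0][2]
            yh = H[1][0] * x + H[1][1] * y + H[1][2]
            w = H[2][0] * x + H[2][1] * y + H[2][2]
            out.append((xh / w, yh / w))
        return out

    # A pure translation by (5, -2) expressed as a homography:
    H = [[1, 0, 5],
         [0, 1, -2],
         [0, 0, 1]]
    print(apply_homography(H, [(0, 0), (10, 10)]))  # → [(5.0, -2.0), (15.0, 8.0)]
    ```

    Rectification in the paper's sense amounts to choosing such mappings per stream so that corresponding points land on the same image row, minimizing vertical epipolar disparity.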

  4. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  5. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    The 3D computer graphics (3D-CG) animation using a virtual actor's speaking is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  6. 3D Integration for Wireless Multimedia

    NASA Astrophysics Data System (ADS)

    Kimmich, Georg

    The convergence of mobile phone, internet, mapping, gaming and office automation tools with high quality video and still imaging capture capability is becoming a strong market trend for portable devices. High-density video encode and decode, 3D graphics for gaming, increased application-software complexity and ultra-high-bandwidth 4G modem technologies are driving the CPU performance and memory bandwidth requirements close to the PC segment. These portable multimedia devices are battery operated, which requires the deployment of new low-power-optimized silicon process technologies and ultra-low-power design techniques at system, architecture and device level. Mobile devices also need to comply with stringent silicon-area and package-volume constraints. As for all consumer devices, low production cost and fast time-to-volume production is key for success. This chapter shows how 3D architectures can bring a possible breakthrough to meet the conflicting power, performance and area constraints. Multiple 3D die-stacking partitioning strategies are described and analyzed on their potential to improve the overall system power, performance and cost for specific application scenarios. Requirements and maturity of the basic process-technology bricks including through-silicon via (TSV) and die-to-die attachment techniques are reviewed. Finally, we highlight new challenges which will arise with 3D stacking and an outlook on how they may be addressed: Higher power density will require thermal design considerations, new EDA tools will need to be developed to cope with the integration of heterogeneous technologies and to guarantee signal and power integrity across the die stack. The silicon/wafer test strategies have to be adapted to handle high-density IO arrays, ultra-thin wafers and provide built-in self-test of attached memories. 
New standards and business models have to be developed to allow cost-efficient assembly and testing of devices from different silicon and technology

  7. 3D model reconstruction of underground goaf

    NASA Astrophysics Data System (ADS)

    Fang, Yuanmin; Zuo, Xiaoqing; Jin, Baoxuan

    2005-10-01

    By constructing a 3D model of an underground goaf, the mining process can be controlled better and mining work arranged more reasonably. However, the shapes of goafs and the laneways between them are very irregular, which makes data acquisition and 3D model reconstruction difficult. In this paper, we investigate methods for data acquisition and 3D model construction of underground goafs, and we build topological relations among goafs. The main contents are as follows: a) an efficient encoding rule for structuring the field measurement data; b) a 3D model construction method for goafs that combines several TIN (triangulated irregular network) pieces, together with an efficient automatic algorithm for processing TIN boundaries; c) the establishment of topological relations among goaf models. The TIN object is the basic modeling element of the goaf 3D model, and the topological relations among goafs are created and maintained by building topological relations among TIN objects. On this basis, various 3D spatial analysis functions can be performed, including transects and volume calculation of goafs. A prototype implementing the models and algorithms proposed in this paper has been developed.
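    One common way to implement the volume calculation mentioned above, assuming a closed and consistently oriented triangle mesh assembled from the TIN pieces, is to sum signed tetrahedron volumes via the divergence theorem (a sketch of the general technique, not the authors' specific algorithm):

    ```python
    def mesh_volume(vertices, triangles):
        """Volume enclosed by a closed, consistently oriented triangle
        mesh (e.g. a goaf surface built from TIN pieces): sum the
        signed volumes of tetrahedra formed by each face and the
        origin, i.e. the scalar triple product v1 . (v2 x v3) / 6."""
        total = 0.0
        for i, j, k in triangles:
            (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = (
                vertices[i], vertices[j], vertices[k])
            total += (x1 * (y2 * z3 - y3 * z2)
                      - y1 * (x2 * z3 - x3 * z2)
                      + z1 * (x2 * y3 - x3 * y2)) / 6.0
        return abs(total)

    # Tetrahedron with vertices at the origin and the three unit axes:
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
    print(mesh_volume(verts, faces))  # 1/6 ≈ 0.1667
    ```

    Faces touching the origin contribute zero, so only the far face carries the volume here; for an arbitrary goaf mesh every face contributes, and the signs cancel correctly as long as the face winding is consistent.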

  8. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the ‘nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% report adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  9. Progressive content-based retrieval of image and video with adaptive and iterative refinement

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Turek, John Joseph Edward (Inventor); Castelli, Vittorio (Inventor); Chen, Ming-Syan (Inventor)

    1998-01-01

    A method and apparatus for minimizing the time required to obtain results for a content based query in a data base. More specifically, with this invention, the data base is partitioned into a plurality of groups. Then, a schedule or sequence of groups is assigned to each of the operations of the query, where the schedule represents the order in which an operation of the query will be applied to the groups in the schedule. Each schedule is arranged so that each application of the operation operates on the group which will yield intermediate results that are closest to final results.
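    The invention's core idea, partitioning the database and applying each query operation to the groups in an order chosen so that early intermediate results approximate the final answer, can be sketched as follows (the scheduling heuristic below is illustrative only, not the patented scheduler):

    ```python
    def progressive_query(groups, predicate, schedule):
        """Apply one query operation to database partitions in the
        scheduled order, yielding the accumulated result after each
        group so the caller sees progressively refined answers."""
        hits = []
        for g in schedule:
            hits.extend(item for item in groups[g] if predicate(item))
            yield list(hits)  # intermediate result after this group

    # A partitioned "database" of records with a relevance score:
    groups = {
        "g0": [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.2}],
        "g1": [{"id": 3, "score": 0.8}],
        "g2": [{"id": 4, "score": 0.1}],
    }
    # Schedule the groups likeliest to contain matches first
    # (here: ranked by each group's best score -- a stand-in heuristic):
    schedule = sorted(groups,
                      key=lambda g: -max(r["score"] for r in groups[g]))
    steps = list(progressive_query(groups, lambda r: r["score"] > 0.5,
                                   schedule))
    print([len(s) for s in steps])  # → [1, 2, 2]
    ```

    Because the promising partitions are visited first, the result set stops growing early ([1, 2, 2]), which is exactly the sense in which intermediate results stay "closest to final results".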

  10. Using the Technological Pedagogical Content Knowledge (TPCK) Framework to Explore Teachers' Perceptions of the Role of Technology in the Implementation of mCLASSRTM: Reading 3D

    ERIC Educational Resources Information Center

    Wilson, Melody Tyler

    2012-01-01

    This qualitative study considers the perceptions of teachers from one rural county in North Carolina who implemented the program mCLASSRTM: Reading 3D. Reading 3D is an electronic early literacy assessment that is designed to assist teachers in planning appropriate literacy instruction based on student needs by offering immediate…

  11. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server.

    PubMed

    Cannone, Jamie J; Sweeney, Blake A; Petrov, Anton I; Gutell, Robin R; Zirbel, Craig L; Leontis, Neocles

    2015-07-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
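    Programmatic access returns JSON, per the abstract. A hedged sketch of assembling such a query follows; only the base URL is taken from the article, while the endpoint path, parameter names, and the example PDB identifier are assumptions that should be checked against the server's API documentation:

    ```python
    from urllib.parse import urlencode

    BASE = "http://rna.bgsu.edu/r3d-2-msa"  # base URL from the article

    def build_query(structure_id, ranges):
        """Assemble a hypothetical programmatic query: up to five
        nucleotide ranges against a representative 3D structure.
        The endpoint path and parameter names ('units', 'format')
        are illustrative placeholders, not the documented scheme."""
        if len(ranges) > 5:
            raise ValueError("server accepts at most five ranges")
        units = ",".join("%s:%d-%d" % (structure_id, lo, hi)
                         for lo, hi in ranges)
        return BASE + "/api?" + urlencode({"units": units,
                                           "format": "json"})

    # Two ranges against a hypothetical example structure ID:
    print(build_query("4Y4O", [(10, 25), (2650, 2660)]))
    ```

    Since the abstract notes that the output URL encodes the search and re-generates the results, query URLs built this way could be stored and resubmitted later.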

  12. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960

  13. Learning Projectile Motion with the Computer Game ``Scorched 3D``

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.
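    The projectile kinematics such a game integrates can be reproduced in a few lines (a drag-free classroom sketch; Scorched 3D's actual engine adds wind, terrain, and other effects):

    ```python
    import math

    def trajectory(v0, angle_deg, g=9.81, dt=0.01):
        """Drag-free projectile path by symplectic Euler stepping --
        the same kinematics a game engine integrates each frame."""
        theta = math.radians(angle_deg)
        x, y = 0.0, 0.0
        vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
        points = [(x, y)]
        while y >= 0.0:
            x += vx * dt
            vy -= g * dt
            y += vy * dt
            points.append((x, y))
        return points

    # Launch at 30 m/s, 45 degrees; compare with the analytic range
    # v0^2 * sin(2*theta) / g ≈ 91.7 m:
    pts = trajectory(30.0, 45.0)
    print(round(pts[-1][0], 1))
    ```

    Students can vary the launch angle and verify numerically that 45 degrees maximizes range, then compare the stepped result against the closed-form formula.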

  14. Examining the Use of Video Study Groups for Developing Literacy Pedagogical Content Knowledge of Critical Elements of Strategy Instruction with Elementary Teachers

    ERIC Educational Resources Information Center

    Shanahan, Lynn E.; Tochelli, Andrea L.

    2014-01-01

    This collective case study explored what nine elementary teachers' video study group discussions revealed about their understanding of pedagogical content knowledge for an explicit reading strategy instruction framework, Critical Elements of Strategy Instruction (CESI). Qualitative methods were used to inductively and deductively analyze…

  15. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  16. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  17. User-Appropriate Viewer for High Resolution Interactive Engagement with 3d Digital Cultural Artefacts

    NASA Astrophysics Data System (ADS)

    Gillespie, D.; La Pensée, A.; Cooper, M.

    2013-07-01

    Three dimensional (3D) laser scanning is an important documentation technique for cultural heritage. This technology has been adopted from the engineering and aeronautical industry and is an invaluable tool for the documentation of objects within museum collections (La Pensée, 2008). The datasets created via close range laser scanning are extremely accurate and the created 3D dataset allows for a more detailed analysis in comparison to other documentation technologies such as photography. The dataset can be used for a range of different applications including: documentation; archiving; surface monitoring; replication; gallery interactives; educational sessions; conservation and visualization. However, the novel nature of a 3D dataset is presenting a rather unique challenge with respect to its sharing and dissemination. This is in part due to the need for specialised 3D software and a supported graphics card to display high resolution 3D models. This can be detrimental to one of the main goals of cultural institutions, which is to share knowledge and enable activities such as research, education and entertainment. This has limited the presentation of 3D models of cultural heritage objects to mainly either images or videos. Yet with recent developments in computer graphics, increased internet speed and emerging technologies such as Adobe's Stage 3D (Adobe, 2013) and WebGL (Khronos, 2013), it is now possible to share a dataset directly within a webpage. This allows website visitors to interact with the 3D dataset allowing them to explore every angle of the object, gaining an insight into its shape and nature. This can be very important considering that it is difficult to offer the same level of understanding of the object through the use of traditional mediums such as photographs and videos. Yet this presents a range of problems: this is a very novel experience and very few people have engaged with 3D objects outside of 3D software packages or games. This paper

  18. 3D-model building of the jaw impression

    NASA Astrophysics Data System (ADS)

    Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.

    1997-03-01

    A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.

  19. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition algorithm for semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to propagate the semantics of the matched 3D model to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases in order to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  20. Repeated-Viewing and Co-Viewing of an Animated Video: An Examination of Factors that Impact on Young Children's Comprehension of Video Content

    ERIC Educational Resources Information Center

    Skouteris, Helen; Kelly, Leanne

    2006-01-01

    The experiment reported here was concerned with the effect of repeat-viewing and adult co-viewing on the comprehension of an animated feature length movie. Four- to six-year-old children watched a movie on video either once or five times, and either with their mother present or on their own. The findings revealed that, after controlling for…

  1. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  2. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  3. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  4. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a further development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computer tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Great differences were found between the three techniques in the estimated volumes of the liver findings. 3D ultrasound represents a valuable method to judge the morphological appearance of abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  5. Extra dimensions: 3D in PDF documentation

    SciTech Connect

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  6. Extra dimensions: 3D in PDF documentation

    DOE PAGESBeta

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  7. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to present the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift.
Several talks were devoted to reporting recent observations with newly

  8. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.
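The data file format mentioned in the manual follows the classic PLOT3D grid layout: the grid dimensions, followed by all x coordinates, then all y, then all z. The following is a minimal, hypothetical Python sketch of a reader for the single-grid ASCII variant only; multi-grid, binary, and IBLANK variants are not handled, and the function name is an illustration, not part of PLOT3D itself.

```python
# Hypothetical reader for a single-grid ASCII PLOT3D grid file:
# first "ni nj nk", then all x values, then all y, then all z.

def read_plot3d_ascii(path):
    """Return (ni, nj, nk) and flat x, y, z coordinate lists."""
    with open(path) as f:
        tokens = f.read().split()          # whitespace-separated numbers
    ni, nj, nk = (int(t) for t in tokens[:3])
    n = ni * nj * nk
    vals = [float(t) for t in tokens[3:3 + 3 * n]]
    x, y, z = vals[:n], vals[n:2 * n], vals[2 * n:3 * n]
    return (ni, nj, nk), x, y, z
```

A solution (q) file has the same token-stream structure, so the same parsing approach extends to it.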

  9. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  10. Characterization of social video

    NASA Astrophysics Data System (ADS)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.
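As an illustration of the kind of workload characterization described here, a rank-ordered view-count distribution is often summarized by fitting a Zipf-like power law, views ~ C / rank^s. This hedged Python sketch (not the authors' code) estimates the exponent by least squares in log-log space:

```python
import math

def zipf_exponent(view_counts):
    """Estimate the Zipf exponent s from per-video view counts
    by fitting log(views) against log(rank) with least squares."""
    ranked = sorted(view_counts, reverse=True)
    xs = [math.log(r + 1) for r in range(len(ranked))]   # log rank
    ys = [math.log(v) for v in ranked]                   # log views
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # s > 0 for a decaying power law
```

For a perfectly Zipfian workload with s = 1 the estimator returns 1 exactly; on real traces the fit quality itself is informative.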

  11. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  12. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  13. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed, in which video streams are synchronized and displayed in a 3D model, by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model, according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, which made it easier to track people across multiple video streams. Based on the experiences from the experimental setup and the case, recommendations are formulated for use in practice. PMID:20439141

  14. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  15. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
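The frame-rejection idea can be illustrated with a toy sketch: given keypoint matches between left and right frames, estimate the residual vertical disparity and discard frames whose constellations are too sparse or too erratic. All function names and thresholds below are hypothetical, not taken from the paper:

```python
def vertical_disparity_stats(matches):
    """matches: list of ((xl, yl), (xr, yr)) matched keypoint pairs.
    Returns mean and standard deviation of the vertical disparity."""
    dys = [yl - yr for (xl, yl), (xr, yr) in matches]
    mean = sum(dys) / len(dys)
    std = (sum((d - mean) ** 2 for d in dys) / len(dys)) ** 0.5
    return mean, std

def frame_ok(matches, min_matches=20, max_mean_dy=2.0, max_std_dy=5.0):
    """Reject frames with sparse keypoint constellations or erratic
    matches (high spread suggests outlier correspondences)."""
    if len(matches) < min_matches:
        return False
    mean, std = vertical_disparity_stats(matches)
    return abs(mean) <= max_mean_dy and std <= max_std_dy
```

In a fuller pipeline, the surviving frames would then feed a least-squares estimate of roll, pitch, yaw, and scale differences between the two cameras.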

  16. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715
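The "simple input format" the abstract describes (a name and URL per node, plus connections with optional similarity scores) suggests a small parser. The line syntax below (`node name url`, `edge a b [score]`) is an assumption for illustration only, not Arena3D's actual file format:

```python
def parse_network(lines):
    """Parse a hypothetical node/edge listing into a node->URL map
    and a list of (source, target, score) tuples."""
    nodes, edges = {}, []
    for line in lines:
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue                           # skip blanks and comments
        if parts[0] == "node":
            nodes[parts[1]] = parts[2]         # name -> URL
        elif parts[0] == "edge":
            score = float(parts[3]) if len(parts) > 3 else 1.0
            edges.append((parts[1], parts[2], score))
    return nodes, edges
```

Such a structure is then easy to hand to a layout algorithm or a per-layer clustering step.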

  17. FDF in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.

  18. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  19. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D view manipulation and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates.

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D view manipulation and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates.

  1. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R.; Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.
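Wavefront construction itself is considerably more involved; as a minimal point of contrast, first-arrival travel times on a gridded slowness model can be computed with a Dijkstra-style sweep. This sketch is illustrative only and does not reproduce the paper's algorithm (which, unlike a first-arrival solver, also yields multi-valued travel time surfaces):

```python
import heapq

def travel_times(slowness, src, h=1.0):
    """First-arrival travel times on a 2D slowness grid from source
    cell `src`, using Dijkstra over a 4-neighbour graph. Edge travel
    time = grid step h * average slowness of the two cells."""
    ny, nx = len(slowness), len(slowness[0])
    t = [[float("inf")] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < ny and 0 <= jj < nx:
                nd = d + h * 0.5 * (slowness[i][j] + slowness[ii][jj])
                if nd < t[ii][jj]:
                    t[ii][jj] = nd
                    heapq.heappush(pq, (nd, (ii, jj)))
    return t
```

The graph restriction makes rays follow grid axes, so this is only a crude stand-in for true ray-based travel times; it does, however, show why a dense, adaptively maintained ray field is attractive in 3D.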

  2. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on pulse flight-time measurement. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high range resolution images with a low sampling rate.
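Setting the heterodyne ranging aside, the correlation step common to ghost imaging can be sketched directly: the object image is recovered as the covariance between the random illumination patterns and the single-pixel bucket signal, G(x) = ⟨I(x)B⟩ − ⟨I(x)⟩⟨B⟩. A minimal Python sketch (a generic ghost-imaging toy, not the authors' HGI system):

```python
def ghost_reconstruct(patterns, buckets):
    """Covariance-based ghost image: patterns is a list of flattened
    illumination patterns, buckets the corresponding bucket readings."""
    n = len(patterns)
    npix = len(patterns[0])
    mean_b = sum(buckets) / n
    mean_i = [sum(p[k] for p in patterns) / n for k in range(npix)]
    corr = [sum(p[k] * b for p, b in zip(patterns, buckets)) / n
            for k in range(npix)]
    return [corr[k] - mean_i[k] * mean_b for k in range(npix)]
```

With enough random patterns, pixels belonging to the object accumulate a positive covariance while background pixels average toward zero.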

  3. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  4. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. 
Apollo and GMR3D are

  5. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. 
Apollo and GMR3D are

  6. Projection type transparent 3D display using active screen

    NASA Astrophysics Data System (ADS)

    Kamoshita, Hiroki; Yendo, Tomohiro

    2015-05-01

    Many kinds of equipment for enjoying 3D images, such as movie theaters and televisions, have been developed, and 3D video is now a familiar technology. Displays that present 3D images include eyewear-based, naked-eye, and HMD-type systems, which have been used for different applications and locations. Transparent 3D displays, however, have not been widely studied. If a large transparent 3D display were realized, it would be useful for overlaying 3D images on real scenes in applications such as road signs, shop windows, and conference-room screens. A previous study proposed producing a transparent 3D display by using a special transparent screen and a number of projectors; however, smooth motion parallax requires many projectors. In this paper, we propose a display that achieves both transparency and a large display area by time-division multiplexing the projected image from one or a small number of projectors onto an active screen. The active screen is composed of many vertically long, small rotating mirrors. Stereoscopic viewing is realized by changing the projector image in synchronism with the scanning of the beam. The display also retains transparency, because it is possible to see through it when a mirror is perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.

  7. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  8. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  9. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    that significantly contributed to the hundred-year flooding in Dresden in 2002, we empirically evaluated the usefulness of this immersive 3D technology towards learning success. Results show that immersive 3D geovisualisations have educational and content-related advantages over 2D geovisualisations through the mentioned benefits. This innovative way of geovisualisation is thus not only entertaining and motivating for students, but can also be constructive for research studies by, for instance, facilitating the study of complex environments or decision-making processes.

  10. Participatory Gis: Experimentations for a 3d Social Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2013-08-01

    The dawn of GeoWeb 2.0, the geographic extension of Web 2.0, has opened new possibilities in terms of online dissemination and sharing of geospatial contents, thus laying the foundations for a fruitful development of Participatory GIS (PGIS). The purpose of the study is to investigate the extension of PGIS applications, which are quite mature in the traditional bi-dimensional framework, up to the third dimension. More specifically, the system should couple a powerful 3D visualization with an increase of public participation by means of a tool allowing data collection from mobile devices (e.g. smartphones and tablets). The PGIS application, built using the open source NASA World Wind virtual globe, is focused on the cultural and tourism heritage of the city of Como, located in northern Italy. An authentication mechanism was implemented, which allows users to create and manage customized projects through cartographic mash-ups of Web Map Service (WMS) layers. Saved projects populate a catalogue which is available to the entire community. Together with historical maps and the current cartography of the city, the system is also able to manage geo-tagged multimedia data, which come from user field-surveys performed through mobile devices and report POIs (Points Of Interest). Each logged user can then contribute to POI characterization by adding textual and multimedia information (e.g. images, audio and video) directly on the globe. All in all, the resulting application allows users to create and share contributions as usually happens on social platforms, additionally providing a realistic 3D representation enhancing the expressive power of data.

  11. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing which makes it possible to obtain a solid object from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the object is built up by superposing one layer on the others, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small ESA space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  12. You can't take it with you? Effects of handheld portable media consoles on physiological and psychological responses to video game and movie content.

    PubMed

    Ivory, James D; Magee, Robert G

    2009-06-01

    Portable media consoles are becoming extremely popular devices for viewing a number of different types of media content, both for entertainment and for educational purposes. Given the increasingly heavy use of portable consoles as an alternative to traditional television-style monitors, it is important to investigate how physiological and psychological effects of portable consoles may differ from those of television-based consoles, because such differences in physiological and psychological responses may precipitate differences in the delivered content's effectiveness. Because portable consoles are popular as a delivery system for multiple types of media content, such as movies and video games, it is also important to investigate whether differences between the effects of portable and television-based consoles are consistent across multiple types of media. This article reports a 2 × 2 (console: portable or television-based × medium: video game or movie) mixed factorial design experiment with physiological arousal and self-reported flow experience as dependent variables, designed to explore whether console type affects media experiences and whether these effects are consistent across different media. Results indicate that portable media consoles evoke lower levels of physiological arousal and flow experience and that this effect is consistent for both video games and movies. These findings suggest that even though portable media consoles are often convenient compared to television-based consoles, the convenience may come at a cost in terms of the user experience. PMID:19445637

  13. GPM 3D Video Flyby of Typhoon Lionrock

    NASA Video Gallery

    NASA/JAXA's GPM core satellite saw very heavy precipitation occurring just southeast of Typhoon Lionrock's eye and intense rainfall within feeder bands. Tall thunderstorm towers in the eye wall wer...

  14. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated for the distortion in the elemental images for an EHR projection-type integral 3-D system. Therefore, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.

  15. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  16. Distributed network of integrated 3D sensors for transportation security applications

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Garcia, Fred

    2009-05-01

    The US Port Security Agency has strongly emphasized the need for tighter control at transportation hubs. Distributed arrays of miniature CMOS cameras are providing some solutions today. However, due to the high bandwidth required and the low-valued content of such cameras (a simple video feed), large computing power, analysis algorithms and control software are needed, which makes such an architecture cumbersome, heavy, slow and expensive. We present a novel technique by integrating cheap and mass-replicable stealth 3D sensing micro-devices in a distributed network. These micro-sensors are based on conventional structured illumination via successive fringe patterns projected on the object to be sensed. The communication bandwidth between sensors remains very small, but the content is of very high value. Key technologies to integrate such a sensor are digital optics and structured laser illumination.

  17. Debris Dispersion Model Using Java 3D

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Bardina, Jorge

    2004-01-01

    This paper describes web-based simulation of Shuttle launch operations and debris dispersion. Java 3D graphics provides geometric and visual content, combined with suitable mathematical models and behaviors, for the Shuttle launch. Because the model is so heterogeneous and interrelated with various factors, 3D graphics combined with physical models provides mechanisms to understand the complexity of launch and range operations. The main focus of the modeling and simulation covers orbital dynamics and range safety. Range safety areas include destruct limit lines, telemetry and tracking, and population risk near the range. Debris dispersion in the event of a Shuttle explosion during launch is also explained. The Shuttle launch and range operations discussed in this paper are based on operations at Kennedy Space Center, Florida, USA.

  18. Analytical augmentation of 3D simulation environments

    NASA Astrophysics Data System (ADS)

    Loughran, Julia J.; Stahl, Marchelle M.

    1998-05-01

    This paper describes an approach for augmenting three- dimensional (3D) virtual environments (VEs) with analytic information and multimedia annotations to enhance training and education applications. Analytic or symbolic information in VEs is presented as bar charts, text, graphical overlays, or with the use of color. Analytic results can be computed and displayed in the VE at run-time or, more likely, while replaying a simulation. These annotations would typically include computations of pre-defined Measures of Performance (MOPs) or Measures of Effectiveness (MOEs) associated with the training or educational goals of the simulation. Multimedia annotations are inserted into the VE by the user and may include: a drawing or whiteboarding capability, enabling participants to insert written text and/or graphics into the two-dimensional (2D) or 3D world; audio comments, and/or video recordings. These annotations can clarify a point, capture teacher feedback, or elaborate on the student's perspective or understanding of the experience. The annotations are captured in the VE either synchronously or asynchronously from the users (students and instructors), during simulation execution or afterward during a replay. When replaying or reviewing the simulation, the embedded annotations can be reviewed by a single user or by multiple users through the use of collaboration technologies. By augmenting 3D virtual environments with analytic and multimedia annotations, the education and training experience may be enhanced. The annotations can offer more effective feedback, enhance understanding, and increase participation. They may also support distance learning by promoting student/teacher interaction without co-location.

  19. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  20. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will indicate the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  1. Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment

    PubMed Central

    Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.

    2015-01-01

    Abstract Background/Aims: The perception of compelling depth is often reported in individuals where no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (e.g., 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, where the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth, where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made. PMID:26669421

  2. The GIRAFFE Archive: 1D and 3D Spectra

    NASA Astrophysics Data System (ADS)

    Royer, F.; Jégouzo, I.; Tajahmady, F.; Normand, J.; Chilingarian, I.

    2013-10-01

    The GIRAFFE Archive (http://giraffe-archive.obspm.fr) contains the reduced spectra observed with the intermediate- and high-resolution multi-fiber spectrograph installed at VLT/UT2 (ESO). In its multi-object configuration and the different integral field unit configurations, GIRAFFE produces 1D spectra and 3D spectra. We present here the status of the archive and the different functionalities to select and download both 1D and 3D data products, as well as the present content. The two collections are available in the VO: the 1D spectra (summed in the case of integral field observations) and the 3D field observations. These latter products can be explored using the VO Paris Euro3D Client (http://voplus.obspm.fr/chil/Euro3D).

  3. Development of an automultiscopic true 3D display (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Pradhan, Ranjit D.; Aye, Tin M.; Yu, Kevin H.; Okorogu, Albert O.; Chua, Kang-Bin; Tun, Nay; Win, Tin; Schindler, Axel

    2005-05-01

    True 3D displays, whether generated by volume holography, merged stereopsis (requiring glasses), or autostereoscopic methods (stereopsis without the need for special glasses), are useful in a great number of applications, ranging from training through product visualization to computer gaming. Holography provides an excellent 3D image but cannot yet be produced in real time, merged stereopsis results in accommodation-convergence conflict (where distance cues generated by the 3D appearance of the image conflict with those obtained from the angular position of the eyes) and lacks parallax cues, and autostereoscopy produces a 3D image visible only from a small region of space. Physical Optics Corporation is developing the next step in real-time 3D displays, the automultiscopic system, which eliminates accommodation-convergence conflict, produces 3D imagery from any position around the display, and includes true image parallax. Theory of automultiscopic display systems is presented, together with results from our prototype display, which produces 3D video imagery with full parallax cues from any viewing direction.

  4. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  5. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  6. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
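
    The record above describes a PCA/FLDA feature-projection pipeline. The SNL3dFace distribution itself is MATLAB/C++ and is not reproduced here; purely as an illustrative sketch of the PCA projection step on vectorized 3D face data (the array shapes and component count are hypothetical), it might look like:

    ```python
    import numpy as np

    def pca_features(faces, n_components):
        """Project vectorized face data onto the top principal components.

        faces: (n_samples, n_dims) array, e.g. flattened XYZ coordinates
        of normalized 3D faces. Returns features, the mean face, and the
        projection basis.
        """
        mean = faces.mean(axis=0)
        centered = faces - mean
        # SVD of the centered data matrix yields the principal axes as rows of vt
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]
        return centered @ basis.T, mean, basis

    # hypothetical data: 50 faces, each 100 XYZ points flattened to 300 values
    rng = np.random.default_rng(1)
    faces = rng.normal(size=(50, 300))
    feats, mean, basis = pca_features(faces, n_components=10)
    print(feats.shape)  # (50, 10)
    ```

    A discriminant stage such as FLDA would then be trained on these low-dimensional features, and matching would compare feature vectors to produce the similarity matrices mentioned in the record.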

  7. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  8. Art History in 3-D

    ERIC Educational Resources Information Center

    Snyder, Jennifer

    2012-01-01

    Students often have a hard time equating time spent on art history as time well spent in the art room. Likewise, art teachers struggle with how to keep interest in their classrooms high when the subject turns to history. Some teachers show endless videos, with the students nodding sleepily along to the narrator. Others try to incorporate small…

  9. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
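
    The authors' catalog pipeline and Blender rendering are not reproduced here; as a minimal sketch of the core idea of MST-based unsupervised clustering of 3D points (using SciPy rather than the authors' code, with a hypothetical edge-cut length), one might write:

    ```python
    import numpy as np
    from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def mst_clusters(points, cut_length):
        """Cluster 3D points by building an MST and cutting long edges.

        Edges longer than cut_length are removed; the remaining connected
        components of the tree are the clusters.
        """
        dist = squareform(pdist(points))           # dense pairwise distances
        mst = minimum_spanning_tree(dist).toarray()
        mst[mst > cut_length] = 0.0                # drop edges above the cut
        # symmetrize and label connected components
        n_clusters, labels = connected_components(mst + mst.T, directed=False)
        return n_clusters, labels

    # two well-separated synthetic groups of 3D points
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 0.1, size=(20, 3))
    b = rng.normal(5.0, 0.1, size=(20, 3))
    n, labels = mst_clusters(np.vstack([a, b]), cut_length=1.0)
    print(n)  # 2
    ```

    On a real galaxy catalog the points would be 3D positions derived from sky coordinates and redshifts, and the cut length would be chosen from the edge-length distribution rather than fixed in advance.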

  10. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
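
    TACO3D's finite-element formulation is not reproduced here; purely as an illustration of the implicit time integration for transient solutions mentioned above, a one-dimensional backward-Euler heat-conduction step (a finite-difference analogue, with hypothetical material and grid parameters) can be sketched as:

    ```python
    import numpy as np

    def implicit_heat_step(T, alpha, dx, dt):
        """One backward-Euler step of 1D heat conduction, ends held fixed.

        Solves (I - r L) T_new = T_old with r = alpha*dt/dx**2, where L is
        the standard second-difference operator on interior nodes.
        """
        n = len(T)
        r = alpha * dt / dx**2
        A = np.eye(n)
        for i in range(1, n - 1):
            A[i, i - 1] = -r
            A[i, i] = 1.0 + 2.0 * r
            A[i, i + 1] = -r
        return np.linalg.solve(A, T)

    # a hot spot diffusing between two ends held at 0
    T = np.zeros(11)
    T[5] = 100.0
    for _ in range(50):
        T = implicit_heat_step(T, alpha=1.0, dx=0.1, dt=0.05)
    print(T.max() < 100.0)  # True: the peak decays
    ```

    Here r = 5, far beyond the explicit stability limit of 0.5, yet the implicit step remains stable, which is the practical reason codes like TACO3D adopt implicit integration for transient problems.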

  11. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  12. Two Eyes, 3D: Stereoscopic Design Principles

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown with 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads in order to record spatial manipulation of the device and look for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open source licenses. More information is at www.twoeyes3d.org.

  13. Content, Interaction, or Both? Synthesizing Two German Traditions in a Video Study on Learning to Explain in Mathematics Classroom Microcultures

    ERIC Educational Resources Information Center

    Prediger, Susanne; Erath, Kirstin

    2014-01-01

    How do students learn to explain? We take this exemplary research question for presenting two antagonist traditions in German mathematics education research and their synthesis in an ongoing video study. These two traditions are (1) the German Didaktik approach that can be characterized by its epistemologically sensitive analyses and…

  14. Supported eText in Captioned Videos: A Comparison of Expanded versus Standard Captions on Student Comprehension of Educational Content

    ERIC Educational Resources Information Center

    Anderson-Inman, Lynne; Terrazas-Arellanes, Fatima E.

    2009-01-01

    Expanded captions are designed to enhance the educational value by linking unfamiliar words to one of three types of information: vocabulary definitions, labeled illustrations, or concept maps. This study investigated the effects of expanded captions versus standard captions on the comprehension of educational video materials on DVD by secondary…

  15. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers-deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of optical performance by the 3D-FDTD method is presented.

  16. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  17. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  18. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. A portable system that could rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and provide quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.

  19. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

A new method for 360-degree 3D shape measurement of rotating objects, in which light sectioning and phase shifting are combined, is presented in this paper. A sinusoidal light field is applied to the projected light stripe, and phase shifting is used to calculate the phases along the light slit. The wrapped phase distribution of the slit is then formed, and unwrapping is performed using the height information obtained from the light-sectioning method, so phase measurements of better precision can be obtained. Finally, the target 3D shape data are produced from the geometric relationships between the phases and the object heights. The principles of this method are discussed in detail and experimental results are shown in this paper.
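The abstract does not give the exact phase formula; as a minimal illustration of the phase-shifting step it describes, here is the standard four-step phase-shifting calculation in Python/NumPy. The synthetic fringe data and all names are ours, not the paper's.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 90 degrees each:
    for I_k = A + B*cos(phi + k*pi/2), phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic demo: recover a known phase ramp from four shifted frames.
phi_true = np.linspace(0.0, np.pi / 2, 100)
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_wrapped = four_step_phase(*frames)
```

The result lies in (-pi, pi]; unwrapping it with the light-sectioning height data is the additional step the paper contributes.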

  20. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being's history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  1. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than that of other robotic hands (excluding actuators), since those require more complex assembly processes.

  2. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  3. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases have been calculated using the molecular-mechanics methods. It is established that 3D-graphite polytypes {alpha}{sub 1,1}, {alpha}{sub 1,3}, {alpha}{sub 1,5}, {alpha}{sub 2,1}, {alpha}{sub 2,3}, {alpha}{sub 3,1}, {beta}{sub 1,2}, {beta}{sub 1,4}, {beta}{sub 1,6}, {beta}{sub 2,1}, and {beta}{sub 3,2} consist of sp{sup 2}-hybridized atoms, have hexagonal unit cells, and differ in regards to the structure of layers and order of their alternation. A possible way to experimentally synthesize new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  4. User experience while viewing stereoscopic 3D television.

    PubMed

    Read, Jenny C A; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the 'nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. PMID:24874550

  5. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  6. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurements with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  7. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurements with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  8. Interactive 3d Landscapes on Line

    NASA Astrophysics Data System (ADS)

    Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.

    2011-09-01

The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and work-flow, and how recent developments in browser technologies could affect them. All the data, even after processing with optimization and decimation tools, result in very large databases that require paging, streaming and level-of-detail techniques to allow remote, web-based, real-time use. Our approach has been to select an open-source scene-graph-based visual simulation library with sufficient performance and flexibility and adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole Montegrotto Town has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly well suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry, whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools were developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used with the goal of populating the virtual scene to enhance the realism perceived by the user during the navigation experience. After the description of 3D modelling and optimization techniques, the paper will focus on and discuss its results and expectations.

  9. 3D Endoscope to Boost Safety, Cut Cost of Surgery

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Researchers at the Jet Propulsion Laboratory worked with the brain surgeon who directs the Skull Base Institute in Los Angeles to create the first endoscope fit for brain surgery and capable of producing 3D video images. It is also the first to be able to steer its lens back and forth. These improvements to visibility are expected to improve safety, speeding patient recovery and reducing medical costs.

  10. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
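The parameter-sweep idea described above can be sketched in a few lines of Python. A plain Gaussian filter stands in for the GPU denoising kernels (an assumption; the actual software uses bilateral filtering, anisotropic diffusion and non-local means on the GPU), and the sweep keeps whichever parameter value minimises the MSE against a noiseless reference volume.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mse(a, b):
    """Mean squared error between two volumes."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
reference = gaussian_filter(rng.random((32, 32, 32)), 2.0)   # smooth "noiseless" volume
noisy = reference + rng.normal(0.0, 0.05, reference.shape)   # simulated acquisition noise

# Sweep the filter parameter; keep the setting with the lowest MSE vs. the reference.
candidates = (0.5, 1.0, 1.5, 2.0, 3.0)
scores = {s: mse(gaussian_filter(noisy, s), reference) for s in candidates}
best_sigma = min(scores, key=scores.get)
```

In the real software the same loop runs over combinations of algorithm parameters, with the autotuned GPU kernel in place of `gaussian_filter`.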

  11. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated.
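The voxel opacity filter described above (color-code each voxel by its value, make uninteresting voxels transparent) can be sketched minimally in Python/NumPy; the function name, thresholds, and the simple red colour ramp are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def opacity_filter(volume, lo, hi, alpha=0.8):
    """Assign an RGBA value per voxel: values whose normalised amplitude falls
    outside [lo, hi] become fully transparent, letting the viewer peer into
    the volume at the reflective structures that remain opaque."""
    norm = (volume - volume.min()) / (np.ptp(volume) + 1e-12)
    rgba = np.zeros(volume.shape + (4,))
    rgba[..., 0] = norm                                   # simple red colour ramp
    rgba[..., 3] = np.where((norm >= lo) & (norm <= hi), alpha, 0.0)
    return rgba

vol = np.random.default_rng(3).random((16, 16, 16))       # stand-in reflectivity cube
rgba = opacity_filter(vol, 0.7, 1.0)                      # keep only strong reflectors
```

A volume renderer then composites the RGBA voxels along each viewing ray; co-registered volumes would each get their own transfer function like this one.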

  12. Interventional video tomography

    NASA Astrophysics Data System (ADS)

    Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

    1995-05-01

Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery, used to visualize intraoperatively, in real time, the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase the accuracy and speed of the reconstruction, the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure, the cross-sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes obsolete the use of 3D digitizing probes for the patient-image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery a head-up display is used to overlay real-time reformatted cross-sectional imaging data with the live video image.

  13. 3D interconnected porous HA scaffolds with SiO2 additions: effect of SiO2 content and macropore size on the viability of human osteoblast cells.

    PubMed

    Nikom, Jaru; Charoonpatrapong-Panyayong, Kanokwan; Kedjarune-Leggat, Ureporn; Stevens, Ron; Kosachan, Nudthakarn; Jaroenworaluck, Angkhana

    2013-08-01

    3D interconnected porous scaffolds of HA and HA with various additions of SiO2 were fabricated using a polymeric template technique, to make bioceramic scaffolds consisting of macrostructures of the interconnected macropores. Three different sizes of the polyurethane template were used in the fabrication process to form different size interconnected macropores, to study the effect of pore size on human osteoblast cell viability. The template used allowed fabrication of scaffolds with pore sizes of 45, 60, and 75 ppi, respectively. Scanning microscopy was used extensively to observe the microstructure of the sintered samples and the characteristics of cells growing on the HA surfaces of the interconnected macropores. It has been clearly demonstrated that the SiO2 addition has influenced both the phase transformation of HA to TCP (β-TCP and α-TCP) and also affected the human osteoblast cell viability grown on these scaffolds. PMID:23355495

  14. 3D printing of natural organic materials by photochemistry

    NASA Astrophysics Data System (ADS)

    Da Silva Gonçalves, Joyce Laura; Valandro, Silvano Rodrigo; Wu, Hsiu-Fen; Lee, Yi-Hsiung; Mettra, Bastien; Monnereau, Cyrille; Schmitt Cavalheiro, Carla Cristina; Pawlicka, Agnieszka; Focsan, Monica; Lin, Chih-Lang; Baldeck, Patrice L.

    2016-03-01

In previous works, we have used two-photon induced photochemistry to fabricate 3D microstructures based on proteins, antibodies, and enzymes for different types of bio-applications. Among them, we can cite collagen lines to guide the movement of living cells, peptide-modified GFP biosensing pads to detect Gram-positive bacteria, antibody pads to determine the type of red blood cells, and trypsin columns in a microfluidic channel to obtain a real-time biochemical micro-reactor. In this paper, we report for the first time on two-photon 3D microfabrication of DNA material. We also present our preliminary results on using a commercial 3D printer based on a video projector to polymerize slicing layers of gelatine objects.

  15. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user friendly playback interface is desirable. Towards this end, we built a real time software 3D video player. The 3D video player displays user captured 3D videos, provides for various 3D specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images are then used to calculate disparity. The extrema of the disparity histogram gives the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends them to the processing thread.
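The disparity-estimation step described above (block matching around keypoints, then a disparity histogram to choose the convergence shift) can be illustrated with a minimal single-scanline SAD search in Python/NumPy. The function and parameter names are ours; the actual player matches 2D regions around vertical-edge keypoints and runs on the GPU.

```python
import numpy as np

def match_disparity(left_row, right_row, x, block=5, max_d=20):
    """SAD block matching along one scanline: find the disparity d that best
    aligns a block centred at column x of the left row with column x - d of
    the right row (features shift left in the right image)."""
    half = block // 2
    ref = left_row[x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_d + 1):
        xr = x - d
        if xr - half < 0:
            break                      # candidate block would leave the image
        cost = np.abs(ref - right_row[xr - half:xr + half + 1]).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic stereo pair: features in the right row sit 7 columns left of the left row.
left = np.random.default_rng(1).random(200)
right = np.roll(left, -7)
d = match_disparity(left, right, x=100)
```

Repeating this at every keypoint yields the disparity histogram whose extrema give the scene disparity range used to place the convergence point.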

  16. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  17. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  18. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  19. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled in to it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  20. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  1. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations to enhance the end-users' interactions.

  2. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  3. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  4. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  5. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, delays which are affected by the atmospheric temperature and wind velocity along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments, otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
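The RBF field model described above can be sketched minimally in Python/NumPy: a scalar field (say, temperature) is represented as a weighted sum of Gaussian RBFs, and the weights are recovered from scattered samples by linear least squares. The Gaussian basis, the synthetic data, and all names are our assumptions; the paper's actual inversion works from integrated propagation delays along ray paths rather than point samples.

```python
import numpy as np

def rbf_design(points, centers, scale):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-|p_i - c_j|^2 / scale^2)."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / scale**2)

rng = np.random.default_rng(2)
centers = rng.uniform(0.0, 1.0, (20, 3))          # basis centres in a unit volume
w_true = rng.normal(size=20)                      # weights of the "true" field
samples = rng.uniform(0.0, 1.0, (200, 3))         # scattered sample positions
obs = rbf_design(samples, centers, 0.3) @ w_true  # noiseless synthetic observations

# Recover the weights by linear least squares; the field is then defined everywhere.
w, *_ = np.linalg.lstsq(rbf_design(samples, centers, 0.3), obs, rcond=None)
field = lambda q: rbf_design(np.atleast_2d(q), centers, 0.3) @ w
```

Because the representation is linear in the weights, local meteorological point measurements and acoustic observations can simply be stacked as extra rows of the same least-squares system.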

  6. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  7. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium, with Lorentz factors of W = 4.56, evolving in a four-dimensional spacetime. The new results are understood as follows: relativistic simulations have consistently shown that these jets are effectively heavy, so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic injection conditions, including a helical magnetic field and perturbed density, velocity, and internal energy, as expected from the process of jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.

  8. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  9. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived only after mature mass-processing technologies had been developed. Graphene is the most recent superior material with the potential to initiate another new material age. However, even when exploited to their full extent, conventional processing methods fail to connect graphene with today's trend toward personalization, and a new technology must be ushered in. Three-dimensional (3D) printing supplies the missing link between graphene materials and the digital mainstream, and this alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal expansion coefficient stays below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minimal during the printing process. PMID:26153673

  10. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary with factors such as the angle of incidence, the distance between the device and the subject, and environmental conditions that influence the confidence in the thermal-infrared data at capture time. Finally, several case studies are presented to support the usability and performance of the proposed system.
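    The reliability-weighted combination of measurements can be sketched as follows; weighting by the cosine of the incidence angle and the inverse square of the distance is an illustrative assumption, not the paper's calibrated confidence model.

```python
import numpy as np

def fuse_measurements(temps, angles_deg, distances):
    """Combine repeated thermal readings of one surface point.

    Reliability (illustrative): cos(incidence angle) / distance^2,
    so near-normal, close-range readings dominate the estimate.
    """
    temps = np.asarray(temps, float)
    w = np.cos(np.radians(angles_deg)) / np.asarray(distances, float) ** 2
    return float((w * temps).sum() / w.sum())

# Three views of the same skin patch: the oblique, distant reading
# (37.8 degC) is strongly down-weighted relative to the frontal close-up.
t = fuse_measurements([36.9, 37.0, 37.8], [10, 20, 70], [0.5, 0.6, 1.5])
print(round(t, 2))
```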

  11. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.

  12. 3D-Measuring for Head Shape Covering Hair

    NASA Astrophysics Data System (ADS)

    Kato, Tsukasa; Hattori, Koosuke; Nomura, Takuya; Taguchi, Ryo; Hoguro, Masahiro; Umezaki, Taizo

    3D measurement is attracting attention as 3D displays spread rapidly. Faces and heads in particular need to be measured, whether out of necessity or for content production. However, measuring hair remains difficult at present. In this research, the purpose is therefore to measure both the face and the hair with the phase shift method. By using sine-pattern images adapted for hair measurement, the main problems in measuring hair, namely its dark color and its reflection properties, are resolved.
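    The phase shift method referred to above is commonly implemented as four-step phase shifting. A minimal sketch follows; the 90° step and the synthetic fringe images are standard textbook assumptions, not the paper's hair-adapted sine patterns.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 90 degrees each.

    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: recover a known phase ramp from four shifted images.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
A, B = 0.5, 0.4   # ambient and modulation intensity
imgs = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*imgs)
print(np.allclose(phi, phi_true, atol=1e-9))
```

    The atan2 form cancels both the ambient term A and the modulation B, which is why the method tolerates dark, weakly reflecting surfaces better than single-image techniques.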

  13. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments used were GeoCor IV amplifiers, designed by Geosystems Inc., which record the data in signed-bit format. It would have been impractical, if not impossible, to process the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority. Much of the potential resolution had been lost through the initial summing of the field data. Modern computers now in use have tremendous speed and storage capacities that were cost-prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality-control and statics-resolution options pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  14. Saliency prediction on stereoscopic videos.

    PubMed

    Kim, Haksub; Lee, Sanghoon; Bovik, Alan Conrad

    2014-04-01

    We describe a new 3D saliency prediction model that accounts for diverse low-level luminance, chrominance, motion, and depth attributes of 3D videos as well as high-level classifications of scenes by type. The model also accounts for perceptual factors, such as the nonuniform resolution of the human eye, stereoscopic limits imposed by Panum's fusional area, and the predicted degree of (dis)comfort felt when viewing the 3D video. The high-level analysis involves classification of each 3D video scene by type with regard to estimated camera motion and the motions of objects in the videos. Decisions regarding the relative saliency of objects or regions are supported by data obtained through a series of eye-tracking experiments. The algorithm developed from the model elements operates by finding and segmenting salient 3D space-time regions in a video, then calculating the saliency strength of each segment using measured attributes of motion, disparity, texture, and the predicted degree of visual discomfort experienced. The saliency energy of both segmented objects and frames is weighted using models of human foveation and Panum's fusional area, yielding a single predictor of 3D saliency. PMID:24565790

  15. A multipath video delivery scheme over diffserv wireless LANs

    NASA Astrophysics Data System (ADS)

    Man, Hong; Li, Yang

    2004-01-01

    This paper presents a joint source coding and networking scheme for video delivery over ad hoc wireless local area networks. The objective is to improve the end-to-end video quality within the constraints of the physical network. The proposed video transport scheme effectively integrates several networking components, including load-aware multipath routing, class-based queuing (CBQ), and scalable (layered) video source coding techniques. A typical progressive video coder, 3D-SPIHT, is used to generate multi-layer source data streams. The coded bitstreams are then segmented into multiple sub-streams, each with a different level of importance for the final video reconstruction. The underlying wireless ad hoc network is designed to support service differentiation. A contention-sensitive load-aware routing (CSLAR) protocol is proposed. The approach is to discover multiple routes between the source and the destination and to label each route with a load value that indicates its quality of service (QoS) characteristics. The video sub-streams are distributed among these paths according to their QoS priority. CBQ is also applied at all intermediate nodes, giving preference to important sub-streams. Through this approach, scalable source coding techniques are incorporated with differentiated services (DiffServ) networking techniques so that the overall system performance is effectively improved. Simulations have been conducted on the network simulator (ns-2). Both network-layer and application-layer performance are evaluated. Significant improvements over traditional ad hoc wireless network transport schemes have been observed.
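    The priority mapping of sub-streams to routes can be sketched as follows; the route labels, load values, and round-robin overflow rule are illustrative assumptions, not the paper's exact CSLAR policy.

```python
def assign_substreams(layers, routes):
    """Map scalable-video layers to discovered routes.

    layers: layer names, most important first (base layer first).
    routes: (route_id, load) pairs from load-aware route discovery.
    The base layer goes on the least-loaded path, enhancement layers on
    progressively busier ones; extra layers wrap around.
    """
    by_load = sorted(routes, key=lambda r: r[1])
    return {layer: by_load[i % len(by_load)][0]
            for i, layer in enumerate(layers)}

plan = assign_substreams(
    ["base", "enh1", "enh2"],
    [("A", 0.7), ("B", 0.2), ("C", 0.5)],
)
print(plan)  # base layer lands on the least-loaded route, B
```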

  16. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittleson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous percept. For example, a turning sphere portrayed as solid by parallax can appear to rotate either leftward or rightward, whereas only one direction of rotation is perceived when stereo depth is included. If the scene is static, stereopsis is the principal cue for revealing camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereo depth in video displays.
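    The 5 to 10 arc sec figure can be checked with the small-angle relation between a depth interval Δd at viewing distance d and binocular disparity, δ ≈ a·Δd/d² for interocular distance a. A minimal sketch with illustrative viewing parameters:

```python
import math

def disparity_arcsec(interocular_m, distance_m, depth_interval_m):
    """Binocular disparity (arcsec) for a small depth interval dd at
    viewing distance d: delta ~= a * dd / d^2 (small-angle approximation)."""
    delta_rad = interocular_m * depth_interval_m / distance_m**2
    return delta_rad * (180 / math.pi) * 3600

# At a 1 m display, a 5 arcsec disparity threshold corresponds to a depth
# step of roughly a third of a millimetre for a 6.5 cm interocular distance.
d = disparity_arcsec(0.065, 1.0, 0.00037)
print(round(d, 1))
```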

  17. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm−3) 3D printed graphene aerogel is superelastic and exhibits high electrical conductivity. PMID:26861680

  18. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  19. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  20. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 80's, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the question of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches.
The first one makes better use of resources for small models of dimension equal
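    A 1D acoustic analogue of the scheme described above (2nd order in time, 4th order in space) can be sketched as follows; the paper's code is 3D and elastic, so this is only a minimal illustration of the time-stepping structure, with an assumed grid, time step, and Gaussian initial pulse.

```python
import numpy as np

def step_wave_1d(u_prev, u_curr, c, dt, dx):
    """One 2nd-order-in-time step of u_tt = c^2 u_xx using a 4th-order
    spatial Laplacian (interior points only; the ends are held fixed)."""
    lap = np.zeros_like(u_curr)
    lap[2:-2] = (-u_curr[:-4] + 16*u_curr[1:-3] - 30*u_curr[2:-2]
                 + 16*u_curr[3:-1] - u_curr[4:]) / (12 * dx**2)
    return 2*u_curr - u_prev + (c*dt)**2 * lap

# Propagate a Gaussian pulse; with c*dt/dx = 0.2, well inside the
# stability limit of this scheme, the solution stays bounded.
n, dx, dt, c = 200, 1.0, 0.2, 1.0
x = np.arange(n) * dx
u0 = np.exp(-0.01 * (x - 100.0)**2)
u_prev, u_curr = u0.copy(), u0.copy()   # zero initial velocity
for _ in range(300):
    u_prev, u_curr = u_curr, step_wave_1d(u_prev, u_curr, c, dt, dx)
ok = np.isfinite(u_curr).all() and np.abs(u_curr).max() < 1.1
print(bool(ok))
```

    The memory argument in the abstract is visible even here: the explicit scheme only ever holds two time levels of the wavefield, whereas a frequency-domain solver must factor or store a large system matrix.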

  1. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these
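    The Doppler mapping underlying this reconstruction is the classical relation v = c·Δλ/λ. A minimal sketch follows; the wavelength values are illustrative, not the measured Spitzer line shifts.

```python
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity_kms(rest_um, observed_um):
    """Non-relativistic Doppler: v = c * (lambda_obs - lambda_rest) / lambda_rest.
    Positive = receding (redshifted), negative = approaching."""
    return C_KM_S * (observed_um - rest_um) / rest_um

# Illustrative: Cas A ejecta knots move at several thousand km/s, so a
# line at 26.0 um shifted to 26.4 um maps to roughly 4600 km/s along
# the line of sight (numbers chosen for illustration only).
v = radial_velocity_kms(26.0, 26.4)
print(round(v))
```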

  2. Enhancing student interactions with the instructor and content using pen-based technology, YouTube videos, and virtual conferencing.

    PubMed

    Cox, James R

    2011-01-01

    This report describes the incorporation of digital learning elements in organic chemistry and biochemistry courses. The first example is the use of pen-based technology and a large-format PowerPoint slide to construct a map that integrates various metabolic pathways and control points. Students can use this map to visualize the integrated nature of metabolism and how various hormones impact metabolic regulation. The second example is the embedding of health-related YouTube videos directly into PowerPoint presentations. These videos become a part of the course notes and can be viewed within PowerPoint as long as students are online. The third example is the use of a webcam to show physical models during online sessions using web-conferencing software. Various molecular conformations can be shown through the webcam, and snapshots of important conformations can be incorporated into the notes for further discussion and annotation. Each of the digital learning elements discussed in this report is an attempt to use technology to improve the quality of educational resources available outside of the classroom to foster student engagement with ideas and concepts. Biochemistry and Molecular Biology Education Vol. 39, No. 1, pp. 4-9, 2011. PMID:21433246

  3. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only yields a huge time advantage; the method also guarantees more than adequately overlapping, sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, matches feature points across all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre- and post-event models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained by scaling against references. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal video (bad lighting conditions, low surface contrast, excessive motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the calculations.
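    The sharpest-frame selection step can be sketched as follows; Laplacian variance is used here as a stand-in for the authors' unspecified derivative-based metric, and the synthetic "video" of flat and textured frames is purely illustrative.

```python
import numpy as np

def sharpness(gray):
    """Derivative-based sharpness score: variance of a 3x3 Laplacian."""
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1] +
           gray[1:-1, :-2] + gray[1:-1, 2:] - 4 * gray[1:-1, 1:-1])
    return float(lap.var())

def pick_sharpest(frames, window=15):
    """Return one frame index per consecutive window of `window` frames."""
    picks = []
    for start in range(0, len(frames), window):
        chunk = frames[start:start + window]
        scores = [sharpness(f) for f in chunk]
        picks.append(start + int(np.argmax(scores)))
    return picks

# Synthetic video: 30 flat (blurry-looking) frames, with textured (sharp)
# frames at positions 4 and 20; texture yields the highest Laplacian variance.
rng = np.random.default_rng(1)
frames = [np.full((32, 32), 0.5) for _ in range(30)]
for i in (4, 20):
    frames[i] = rng.random((32, 32))
print(pick_sharpest(frames))
```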

  4. Programming standards for effective S-3D game development

    NASA Astrophysics Data System (ADS)

    Schneider, Neil; Matveev, Alexander

    2008-02-01

    When a video game is in development, more often than not it is being rendered in three dimensions, complete with volumetric depth. It is the PC monitor that takes this three-dimensional information and artificially displays it in a flat, two-dimensional format. Stereoscopic drivers take the three-dimensional information captured from DirectX and OpenGL calls and display it properly, with a unique left- and right-eye view, so that a correct stereoscopic 3D image can be seen by the gamer. The two-dimensional limitation of how information is displayed on screen has encouraged programming shortcuts and workarounds that stifle this stereoscopic 3D effect, and the purpose of this guide is to outline techniques to get the best of both worlds. While the programming requirements do not significantly add to game development time, following these guidelines will greatly enhance your customers' stereoscopic 3D experience, increase your likelihood of earning Meant to be Seen certification, and give you instant, cost-free access to the industry's most valued consumer base. While this outline is mostly based on NVIDIA's programming guide and iZ3D resources, it is designed to work with all stereoscopic 3D hardware solutions and is not proprietary in any way.
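    One technique stereoscopic drivers rely on is off-axis (asymmetric-frustum) projection, where each eye gets a horizontally skewed view volume so that both eyes converge at the zero-parallax plane. A minimal sketch of the frustum arithmetic; the parameter names, sign convention, and symmetric base frustum are illustrative assumptions, not taken from NVIDIA's or iZ3D's guides.

```python
def stereo_frusta(eye_sep, convergence, near, fov_half_width):
    """Left/right asymmetric-frustum horizontal bounds at the near plane.

    Each eye is offset by +/- eye_sep/2; the frustum is skewed by a
    shear proportional to near/convergence so that both eyes' view
    volumes coincide at the convergence (zero-parallax) plane.
    """
    shift = 0.5 * eye_sep * near / convergence  # horizontal skew at near plane
    half = fov_half_width                        # symmetric half-width at near
    left_eye = (-half + shift, half + shift)     # (left, right) bounds
    right_eye = (-half - shift, half - shift)
    return left_eye, right_eye

# 6.5 cm eye separation, 2 m convergence distance, 0.1 m near plane
le, re = stereo_frusta(eye_sep=0.065, convergence=2.0, near=0.1,
                       fov_half_width=0.06)
print(le, re)
```

    The point of the asymmetric frustum, as opposed to simply toeing the cameras in, is that it avoids vertical parallax, which is a major source of viewer discomfort.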

  5. Development of a 3D CT scanner using cone beam

    NASA Astrophysics Data System (ADS)

    Endo, Masahiro; Kamagata, Nozomu; Sato, Kazumasa; Hattori, Yuichi; Kobayashi, Shigeo; Mizuno, Shinichi; Jimbo, Masao; Kusakabe, Masahiro

    1995-05-01

    In order to acquire 3D data of high-contrast objects such as bone, lung and vessels enhanced by contrast media for use in 3D image processing, we have developed a 3D CT scanner using cone beam x ray. The 3D CT scanner consists of a gantry and a patient couch. The gantry consists of an x-ray tube designed for cone beam CT and a large-area two-dimensional detector mounted on a single frame and rotated around an object in 12 seconds. The large-area detector consists of a fluorescent plate and a charge-coupled-device video camera. The size of the detection area was 600 mm X 450 mm, capable of covering the total chest. While the x-ray tube was rotated around an object, pulsed x ray was emitted 30 times a second and 360 projected images were collected in a 12-second scan. A 256 X 256 X 256 matrix image (1.25 mm X 1.25 mm X 1.25 mm voxel) was reconstructed by a high-speed reconstruction engine. Reconstruction time was approximately 6 minutes. Cylindrical water phantoms, anesthetized rabbits with or without contrast media, and a Japanese macaque were scanned with the 3D CT scanner. The results seem promising because they show high spatial resolution in all three directions, though several points remain to be improved. Possible improvements are discussed.

  6. Methods For Electronic 3-D Moving Pictures Without Glasses

    NASA Astrophysics Data System (ADS)

    Collender, Robert B.

    1987-06-01

    This paper describes implementation approaches in image acquisition and playback for 3-D computer graphics, 3-D television and 3-D theatre movies without special glasses. Projection lamps, spatial light modulators, CRTs and dynamic scanning are all eliminated by the application of an active image array, all-static components and a semi-specular screen. The resulting picture shows horizontal parallax with a wide horizontal view field (up to 360 degrees), giving a holographic appearance in full color with smooth continuous viewing without speckle. Static-component systems are compared with dynamic-component systems using both linear and circular arrays. Implementations of computer graphic systems are shown that allow complex shaded color images to extend from the viewer's eyes to infinity. Large-screen systems visible by hundreds of people are feasible through the use of low f-stops and high-gain screens in projection. Screen geometries and special screen properties are shown. Viewing characteristics impose no restrictions on view position over the entire view field and offer a "look-around" feature for all the categories of computer graphics, television and movies. Standard video cassettes and optical discs can also interface with the system to generate a 3-D window viewable without glasses. A prognosis is given for technology applications to 3-D pictures without glasses that replicate the daily viewing experience. Superposition of computer graphics on real-world pictures is shown to be feasible.

  7. 3D structure and nuclear targets

    NASA Astrophysics Data System (ADS)

    Dupré, Raphaël; Scopetta, Sergio

    2016-06-01

    Recent experimental and theoretical ideas are laying the ground for a new era in the knowledge of the parton structure of nuclei. We report on two promising directions beyond inclusive deep inelastic scattering experiments, aimed at, among other goals, unveiling the three-dimensional structure of the bound nucleon. The 3D structure in coordinate space can be accessed through deep exclusive processes, whose non-perturbative content is parametrized in terms of generalized parton distributions. In this way the distribution of partons in the transverse plane will be obtained, providing a pictorial view of the realization of the European Muon Collaboration effect. In particular, we show how, through the generalized parton distribution framework, non-nucleonic degrees of freedom in nuclei can be unveiled. Analogously, the momentum space 3D structure can be accessed by studying transverse-momentum-dependent parton distributions in semi-inclusive deep inelastic scattering processes. The status of measurements is also summarized, in particular novel coincidence measurements at high-luminosity facilities, such as Jefferson Laboratory. Finally the prospects for the next years at future facilities, such as the 12 GeV Jefferson Laboratory and the Electron Ion Collider, are presented.

  8. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.
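    The Doppler relation underlying these velocity measurements is simple to state; the sketch below is a minimal illustration, and the wavelength values in the example are hypothetical, not taken from the Cas A data:

```python
# Non-relativistic Doppler relation: v = c * (lambda_obs - lambda_rest) / lambda_rest
C = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_obs_nm, lambda_rest_nm):
    """Line-of-sight velocity in km/s; positive means receding (redshifted)."""
    return C * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Hypothetical example: a spectral line with rest wavelength 500.0 nm
# observed at 500.8 nm corresponds to debris receding at roughly 480 km/s.
v = radial_velocity(500.8, 500.0)
```

Combining such line-of-sight velocities with the 2-D sky positions of the debris is what lets the radial expansion be turned into a third spatial coordinate.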

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they shine by a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  9. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, computer-generated holograms (CGHs) calculated from real existing objects have been more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing within a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed using numerical reconstruction.
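    Point-based CGH methods of the kind described accumulate, at every hologram pixel, the spherical wavefront contributed by each 3-D point source. A minimal numpy sketch follows; the wavelength, pixel pitch, geometry, and on-axis plane-wave reference are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def point_source_cgh(points, amplitudes, nx, ny, pitch, wavelength):
    """Interference pattern at the z=0 hologram plane from 3-D point sources.

    points      : iterable of (x, y, z) coordinates in metres (z > 0)
    amplitudes  : one amplitude per point
    nx, ny      : hologram resolution; pitch: pixel pitch in metres
    """
    k = 2 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from each point
    # Amplitude hologram: interfere with an on-axis plane-wave reference
    reference = np.abs(field).max()
    return np.abs(field + reference) ** 2

# Illustrative use: two points about 0.1 m behind a 256x256, 8 um pitch hologram
h = point_source_cgh([(0, 0, 0.1), (1e-4, 0, 0.12)], [1.0, 1.0],
                     256, 256, 8e-6, 532e-9)
```

Real systems add occlusion handling and per-view visibility tests; this sketch only shows the core summation.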

  10. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms', or 3-D surface numerical tables of coordinate values, capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' able to work as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  11. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  12. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized so that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.

  13. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  14. Interlaced MVD format for free viewpoint video

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Lee, Seungsin; Lee, Jaejoon; Wey, Ho-Cheon; Park, Du-Sik; Kim, Chang-Yeong

    2011-03-01

    A new 3D video format consisting of one full-resolution mono video and half-resolution left/right videos is proposed. The proposed 3D video format can generate high-quality virtual views from a small amount of input data while preserving compatibility with legacy mono and frame-compatible stereo video systems. The center view is the same as normal mono video data, but the left/right views are frame-compatible stereo video data. This format was tested in terms of compression efficiency, rendering capability, and backward compatibility. In particular, we compared view synthesis quality when virtual views are made from two full-resolution views or from one original view and one half-resolution view. For the frame-compatible stereo format, experiments were performed on the interlaced method. The proposed format gives BD bit-rate gains of 15%.

  15. 3D Inverse problem: Seawater intrusions

    NASA Astrophysics Data System (ADS)

    Steklova, K.; Haber, E.

    2013-12-01

    Modeling of seawater intrusions (SWI) is challenging as it involves solving the governing equations for variable density flow, multiple time scales and varying boundary conditions. Due to the nonlinearity of the equations and the large aquifer domains, 3D computations are a costly process, particularly when solving the inverse SWI problem. In addition, the head and concentration measurements are difficult to obtain due to mixing, the saline wedge location is sensitive to aquifer topography, and there is general uncertainty in initial and boundary conditions and parameters. Some of these complications can be overcome by using indirect geophysical data alongside standard groundwater measurements; however, the inverse problem is usually simplified, e.g. by zonation of the parameters based on geological information, steady-state substitution of the unknown initial conditions, decoupling the equations, or reducing the number of unknown parameters by covariance analysis. In our work we present a discretization of the flow and solute mass balance equations for variable-density groundwater (GW) flow. A finite difference scheme is used to solve the pressure equation and a Semi-Lagrangian method for the solute transport equation. In this way we are able to choose an arbitrarily large time step without losing stability, up to an accuracy requirement coming from the coupled character of the variable density flow equations. We derive analytical sensitivities of the GW model for parameters related to the porous media properties and also to the initial solute distribution. Analytically derived sensitivities reduce the computational cost of the inverse problem and also give insight into maximizing the information in collected data. If geophysical data are available, this also enables simultaneous calibration in a coupled hydrogeophysical framework. The 3D inverse problem was tested on artificial time-dependent data for pressure and solute content coming from a GW forward model and/or a geophysical forward model.
Two
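    The Semi-Lagrangian treatment of the transport equation traces each grid point back along the velocity field and interpolates the concentration at the departure point, which is what permits arbitrarily large time steps. A one-dimensional sketch under simplifying assumptions (constant velocity, periodic boundary, linear interpolation; the function names are illustrative, not the authors' code):

```python
import numpy as np

def semi_lagrangian_step(c, u, dt, dx):
    """Advance concentration c one step of c_t + u * c_x = 0.

    Each grid point is traced back a distance u*dt and the concentration
    is linearly interpolated there -- stable even when u*dt exceeds dx.
    """
    n = len(c)
    x = np.arange(n) * dx
    x_departure = (x - u * dt) % (n * dx)      # periodic domain for simplicity
    return np.interp(x_departure, x, c, period=n * dx)

# Illustrative use: advect a Gaussian bump with a CFL number of 4,
# far above the stability limit of an explicit upwind scheme.
c0 = np.exp(-((np.arange(100) - 20.0) ** 2) / 20.0)
c1 = semi_lagrangian_step(c0, u=1.0, dt=4.0, dx=1.0)
```

Because the departure points here land exactly on grid nodes, this step reproduces a pure shift; with non-integer CFL numbers the linear interpolation introduces some numerical diffusion, which higher-order interpolants reduce.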

  16. The Video Generation.

    ERIC Educational Resources Information Center

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  17. Applications of 2D to 3D conversion for educational purposes

    NASA Astrophysics Data System (ADS)

    Koido, Yoshihisa; Morikawa, Hiroyuki; Shiraishi, Saki; Takeuchi, Soya; Maruyama, Wataru; Nakagori, Toshio; Hirakata, Masataka; Shinkai, Hirohisa; Kawai, Takashi

    2013-03-01

    There are three main approaches to creating stereoscopic S3D content: stereo filming using two cameras, stereo rendering of 3D computer graphics, and 2D to S3D conversion by adding binocular information to 2D material images. Although manual "off-line" conversion can control the amount of parallax flexibly, 2D material images are converted according to monocular information in most cases, and the flexibility of 2D to S3D conversion has not been exploited. If depth is expressed flexibly, the comprehension of, and interest evoked by, converted S3D content is anticipated to differ from that of 2D. Therefore, in this study we created new S3D content for education by applying 2D to S3D conversion. For surgical education, we created S3D surgical operation content under the supervision of a surgeon, using a partial 2D to S3D conversion technique that was expected to concentrate viewers' attention on significant areas. For art education, we converted Ukiyo-e prints, traditional Japanese artworks made from woodcuts. The conversion of this content, which has little depth information, into S3D is expected to produce different cognitive processes from those evoked by 2D content, e.g., the excitation of interest and the understanding of spatial information. In addition, the effects of the representation of these contents were investigated.

  18. A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking

    NASA Astrophysics Data System (ADS)

    Mueller, Robert; Ward, Chris; Hušák, Michal

    2008-02-01

    Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features has successfully introduced the public to this new generation of highly-comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively-limiting than 2D films - thus a need arises for a live-action 3D filmmaking process which minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive and accurate capture and integration of 3D and 2D elements from multiple shoots and sources - both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach introduces efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators, as well as any trial and error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3D product possible.

  19. Guidance for horizontal image translation (HIT) of high definition stereoscopic video production

    NASA Astrophysics Data System (ADS)

    Broberg, David K.

    2011-03-01

    Horizontal image translation (HIT) is an electronic process for shifting the left-eye and right-eye images horizontally as a way to alter the stereoscopic characteristics and alignment of 3D content after signals have been captured by stereoscopic cameras. When used cautiously and with full awareness of the impact on other interrelated aspects of the stereography, HIT is a valuable tool in the post production process as a means to modify stereoscopic content for more comfortable viewing. Most commonly it is used to alter the zero parallax setting (ZPS), to compensate for stereo window violations or to compensate for excessive positive or negative parallax in the source material. As more and more cinematic 3D content migrates to television distribution channels the use of this tool will likely expand. Without proper attention to certain guidelines the use of HIT can actually harm the 3D viewing experience. This paper provides guidance on the most effective use and describes some of the interrelationships and trade-offs. The paper recommends the adoption of the cinematic 2K video format as a 3D source master format for high definition television distribution of stereoscopic 3D video programming.
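    In its simplest form, HIT shifts the two eye images horizontally in opposite directions and crops to a common width. The numpy sketch below is a minimal illustration; the sign convention and the crop-based approach are assumptions made here for clarity, whereas production tools typically pad, scale, or work within a floating window:

```python
import numpy as np

def horizontal_image_translation(left, right, shift_px):
    """Shift the left/right eye images apart by shift_px pixels in total.

    With the convention assumed here, positive shift_px increases positive
    (behind-screen) parallax; negative shift_px pulls content toward the
    viewer. Both images are cropped to the common width so frames stay equal.
    """
    s = abs(shift_px)
    if s == 0:
        return left, right
    if shift_px > 0:
        return left[:, s:], right[:, :-s]   # left content moves left, right moves right
    return left[:, :-s], right[:, s:]

# Illustrative use on dummy 1080p frames, shifting by 10 px in total:
L = np.zeros((1080, 1920, 3), dtype=np.uint8)
R = np.zeros((1080, 1920, 3), dtype=np.uint8)
L2, R2 = horizontal_image_translation(L, R, 10)   # both frames now 1910 px wide
```

The cropping makes the trade-off discussed in the paper concrete: every pixel of translation changes the parallax budget of the whole frame at once, which is why HIT must be applied with awareness of the scene's existing disparity range.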

  20. A 3D mosaic algorithm using disparity map

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Kakeya, Hideki

    2015-03-01

    Conventionally, there are two major methods of creating mosaics in 3D videos. One is to duplicate the area of mosaics from the image of one viewpoint (the left view or the right view) to that of the other viewpoint. This method, which is not capable of expressing depth, cannot give viewers a natural perception in 3D. The other method is to create the mosaics separately in the left view and the right view. With this method the depth is expressed in the area of mosaics, but 3D perception is not natural enough. To overcome these problems, we propose a method of creating mosaics by using a disparity map. In the proposed method the mosaic of the image from one viewpoint is made with the conventional method, while the mosaic of the image from the other viewpoint is made based on the data of the disparity map, so that the mosaic patterns of the two images can give proper depth perception to the viewer. We confirmed through subjective experiments using a static image and two videos that the proposed mosaic pattern using a disparity map gives more natural depth perception to the viewer.
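    The idea of mosaicking one view conventionally and placing the other view's mosaic at the disparity-shifted position can be sketched as below. This simplified version assumes a single constant disparity for the whole region, whereas the paper uses a full per-pixel disparity map; the helper names are illustrative:

```python
import numpy as np

def pixelate(img, x, y, w, h, block=8):
    """Conventional mosaic: replace each block-size cell in a region by its mean."""
    out = img.copy()
    region = out[y:y + h, x:x + w]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = region[by:by + block, bx:bx + block]
            cell[...] = cell.mean(axis=(0, 1), keepdims=True)
    return out

def mosaic_stereo(left, right, disparity, x, y, w, h, block=8):
    """Mosaic a region in the left view, then mosaic the corresponding region
    in the right view offset by the (assumed constant) disparity, so that the
    mosaic blocks themselves carry a consistent depth cue."""
    left_m = pixelate(left, x, y, w, h, block)
    right_m = pixelate(right, x - disparity, y, w, h, block)
    return left_m, right_m

# Illustrative use on a synthetic gradient pair with an assumed 4 px disparity:
img = np.tile(np.arange(64, dtype=float), (64, 1))
lm, rm = mosaic_stereo(img, img, disparity=4, x=16, y=16, w=16, h=16)
```

With a real disparity map, each block in the second view would be displaced individually, which is what lets the mosaic surface follow the depth of the occluded object rather than sitting at a single plane.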

  1. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  2. Yogi the rock - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Yogi, a rock taller than rover Sojourner, is the subject of this image, taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The soil in the foreground has been the location of multiple soil mechanics experiments performed by Sojourner's cleated wheels. Pathfinder scientists were able to control the force inflicted on the soil beneath the rover's wheels, giving them insight into the soil's mechanical properties. The soil mechanics experiments were conducted after this image was taken.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  3. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  4. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software, simply by using images from the community, without visiting the site.

  5. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  6. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  7. Markerless 3D motion capture for animal locomotion studies.

    PubMed

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869
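    The core of the multi-camera 3D photogrammetric reconstruction described above is triangulating each tracked point from its projections in two or more calibrated cameras. A minimal direct linear transform (DLT) sketch follows, using synthetic camera matrices rather than the authors' setup:

```python
import numpy as np

def triangulate(projections, image_points):
    """Least-squares 3-D point from two or more views.

    projections  : list of 3x4 camera projection matrices P
    image_points : list of (u, v) pixel observations, one per camera
    Builds the homogeneous DLT system and solves it with SVD.
    """
    rows = []
    for P, (u, v) in zip(projections, image_points):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]            # dehomogenise

# Illustrative check with two synthetic cameras observing the point (1, 2, 10):
def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera shifted 1 unit
Xtrue = np.array([1.0, 2.0, 10.0])
X = triangulate([P1, P2], [project(P1, Xtrue), project(P2, Xtrue)])
```

In practice the projection matrices come from the calibration step the paper describes (recovered from the separation of the cameras), and the same triangulation is applied to every matched feature in every synchronised frame.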

  8. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  9. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information (maps) are being transferred into 3D versions according to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with respect to the specific type of visualization and different levels of immersion.

  10. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections. The first is the data acquisition process, the second is 3D data processing, and the third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area, and scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries.

  11. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  12. 3-D Cavern Enlargement Analyses

    SciTech Connect

    EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.

    2002-03-01

    Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25 year period to roughly double the volume of a 19 cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparisons of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic well head pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large scale drawdowns being simulated. Should they occur in the future, code predictions should be validated with actual field behavior at that time. The simulations were performed using JAS3D, a three dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and later on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.

  13. Video Encryption and Decryption on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Iliyasu, Abdullah M.; Venegas-Andraca, Salvador E.; Yang, Huamin

    2015-08-01

    A method for video encryption and decryption on quantum computers is proposed based on color information transformations on each frame encoding the content of the video. The proposed method provides a flexible operation to encrypt quantum video by means of quantum measurement in order to enhance the security of the video. To validate the proposed approach, a tetris tile-matching puzzle game video is utilized in the experimental simulations. The results obtained suggest that the proposed method enhances the security and speed of quantum video encryption and decryption, both properties required for secure transmission and sharing of video content in quantum communication.

  14. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only to scientists but also to amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, by means of a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, aiming at completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  15. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  16. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  17. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
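The 2D case that this abstract extends to 3D can be sketched in a few lines: one semi-Lagrangian advection step backtraces each pixel along the velocity field and resamples the texture there. The uniform flow and noise texture below are illustrative, not from the paper.

```python
import numpy as np

def advect(tex, vx, vy, dt):
    """One semi-Lagrangian step: backtrace each pixel along (vx, vy)
    and resample the texture (nearest neighbour, clamped at borders)."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(np.round(xs - dt * vx), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys - dt * vy), 0, h - 1).astype(int)
    return tex[src_y, src_x]

h = w = 64
tex = np.random.default_rng(0).random((h, w))   # noise texture to animate
# Uniform flow to the right: one step shifts the texture by one pixel.
vx = np.ones((h, w))
vy = np.zeros((h, w))
out = advect(tex, vx, vy, dt=1.0)
```

Extending this to the paper's setting means adding a third (and for time-varying 4D textures, a fourth) index to the backtrace, with interpolation instead of nearest-neighbour lookup for smooth animation.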

  18. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  19. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  20. Using Video To Enhance Content and Delivery Skills in the Basic Oral Communication Course: Summarizing the Uses and Benefits.

    ERIC Educational Resources Information Center

    Glenn, Robert J., III

    The use of videotape technology is an effective pedagogical tool with which to improve the overall performance of students enrolled in sections of basic public speaking. These uses and benefits in the classroom include: (1) practice feedback; (2) identification of style inhibitors; (3) analysis of structural-content issues; (4) suggestions for…

  1. Video Analytics for Indexing, Summarization and Searching of Video Archives

    SciTech Connect

    Trease, Harold E.; Trease, Lynn L.

    2009-08-01

    This paper will be submitted to the proceedings of the Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.
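A table-of-contents generator of the kind described typically starts from a low-level cue such as shot-boundary detection. The sketch below finds cuts by comparing grey-level histograms of consecutive frames; the synthetic frames and the threshold are invented for illustration and are not the paper's method.

```python
import numpy as np

def grey_hist(frame, bins=16):
    """Normalised grey-level histogram of one frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / frame.size

def shot_boundaries(frames, thresh=0.5):
    """Indices i where frame i starts a new shot, judged by the L1
    distance between consecutive frame histograms."""
    cuts = []
    for i in range(1, len(frames)):
        d = np.abs(grey_hist(frames[i]) - grey_hist(frames[i - 1])).sum()
        if d > thresh:
            cuts.append(i)
    return cuts

rng = np.random.default_rng(1)
dark = [rng.integers(0, 64, (32, 32)) for _ in range(3)]       # shot 1
bright = [rng.integers(192, 256, (32, 32)) for _ in range(3)]  # shot 2
cuts = shot_boundaries(dark + bright)
```

Each detected shot can then be represented by a keyframe and fed to keyword or topic extraction to build the index tables the paper describes.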

  2. Synthesis multi-projector content for multi-projector three dimension display using a layered representation

    NASA Astrophysics Data System (ADS)

    Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua

    2014-11-01

    Multi-projector three dimension display is a promising multi-view, glasses-free three dimension (3D) display technology that can produce full colour, high definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyzes the display characteristics of multi-projector 3D display and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of saving storage. Experiment results show that our method solves the pseudoscopic problem.

  3. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada, Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky) and U.S. Geological Survey digital aerial photography provides the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  4. An objective method for 3D quality prediction using visual annoyance and acceptability level

    NASA Astrophysics Data System (ADS)

    Khaustova, Darya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2015-03-01

    This study proposes a new objective metric for video quality assessment. It predicts the impact of technical quality parameters relevant to visual discomfort on human perception. The proposed metric is based on a 3-level color scale: (1) Green - not annoying, (2) Orange - annoying but acceptable, (3) Red - not acceptable. Therefore, each color category reflects viewers' judgment based on stimulus acceptability and induced visual annoyance. The boundary between the "Green" and "Orange" categories defines the visual annoyance threshold, while the boundary between the "Orange" and "Red" categories defines the acceptability threshold. Once the technical quality parameters are measured, they are compared to perceptual thresholds. Such comparison allows estimating the quality of the 3D video sequence. Besides, the proposed metric is adjustable to service or production requirements by changing the percentage of acceptability and/or visual annoyance. The performance of the metric is evaluated in a subjective experiment that uses three stereoscopic scenes. Five view asymmetries with four degradation levels were introduced into initial test content. The results demonstrate high correlations between subjective scores and objective predictions for all view asymmetries.
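The traffic-light logic of the metric is simple to state in code: a measured asymmetry parameter is compared against the two perceptual thresholds to yield a category. The threshold values below are placeholders, not figures from the study.

```python
# Hypothetical thresholds for one view-asymmetry parameter
# (e.g. vertical misalignment between the two views, in degrees).
ANNOYANCE_THRESH = 1.0      # boundary between "Green" and "Orange"
ACCEPTABILITY_THRESH = 3.0  # boundary between "Orange" and "Red"

def rate_3d_quality(asymmetry):
    """Map a measured technical parameter to the 3-level color scale."""
    if asymmetry <= ANNOYANCE_THRESH:
        return "Green"    # not annoying
    if asymmetry <= ACCEPTABILITY_THRESH:
        return "Orange"   # annoying but acceptable
    return "Red"          # not acceptable

labels = [rate_3d_quality(a) for a in (0.5, 2.0, 4.0)]
```

Adjusting the metric to service or production requirements, as the abstract notes, amounts to moving these two thresholds to match the desired percentage of acceptability or visual annoyance.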

  5. The Esri 3D city information model

    NASA Astrophysics Data System (ADS)

    Reitz, T.; Schubiger-Banz, S.

    2014-02-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing urban environments in 3D is an increasingly important and complex undertaking. To help solve this problem, Esri has released the ArcGIS for 3D Cities solution. The ArcGIS for 3D Cities solution provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases.

  6. A QoS aware resource allocation strategy for 3D A/V streaming in OFDMA based wireless systems.

    PubMed

    Chung, Young-Uk; Choi, Yong-Hoon; Park, Suwon; Lee, Hyukjoon

    2014-01-01

    Three-dimensional (3D) video is expected to be a "killer app" for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes. PMID:25250377
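The priority idea in the abstract (2D video before depth map) can be illustrated with a greedy stand-in for the paper's suboptimal heuristic; the actual optimization formulation is not reproduced. Stream demands and the resource-block budget below are made up.

```python
def allocate(streams, total_blocks):
    """Greedy two-pass allocation of OFDMA resource blocks.

    streams: list of (name, video_blocks, depth_blocks) demands.
    The 2D video layer gets strict priority; leftover blocks go to
    the depth maps. Returns {name: (video_granted, depth_granted)}.
    """
    grant = {name: [0, 0] for name, _, _ in streams}
    # Pass 1: satisfy the higher-priority 2D video demands first.
    for name, video, _ in streams:
        take = min(video, total_blocks)
        grant[name][0] = take
        total_blocks -= take
    # Pass 2: spend whatever remains on the depth maps.
    for name, _, depth in streams:
        take = min(depth, total_blocks)
        grant[name][1] = take
        total_blocks -= take
    return {k: tuple(v) for k, v in grant.items()}

# Two video-plus-depth streams contending for 11 blocks.
alloc = allocate([("A", 4, 3), ("B", 5, 2)], total_blocks=11)
```

Under shortage this degrades the depth map first, so 2D playback survives even when the full 3D stream cannot be served, which is the behaviour the priority scheme is after.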

  7. A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems

    PubMed Central

    Chung, Young-uk; Choi, Yong-Hoon; Park, Suwon; Lee, Hyukjoon

    2014-01-01

    Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes. PMID:25250377

  8. The LivePhoto Physics videos and video analysis site

    NASA Astrophysics Data System (ADS)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).
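Video analysis of the kind these clips support boils down to extracting frame-by-frame positions and differentiating them. A minimal sketch with an ideal free-fall trajectory generated in place of tracked video data (the site's own Flash tools are not reproduced):

```python
import numpy as np

fps = 30.0                          # typical video frame rate
t = np.arange(0, 1, 1.0 / fps)      # one second of frames
g = 9.8                             # m/s^2, value to be "measured"
y = 0.5 * g * t**2                  # distance fallen per frame (m)

v = np.gradient(y, t)               # central-difference velocity
coef = np.polyfit(t, y, 2)          # quadratic fit: y = a t^2 + b t + c
g_est = 2.0 * coef[0]               # acceleration from the fitted a
```

With real tracked positions the fit absorbs measurement noise, which is why curve fitting is usually preferred over raw second differences in classroom video analysis.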

  9. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.
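The registration stage can be illustrated with a simpler stand-in: when point correspondences are known, the rigid transform aligning two 3D point sets has a closed-form SVD (Procrustes) solution. The paper's Grid Closest Point transform and genetic algorithm handle the harder unknown-correspondence case; the synthetic points below merely show what "registration" computes.

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form R, t minimising ||R @ src + t - dst|| for 3xN
    point sets with known correspondences (Kabsch/Procrustes)."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    H = (src - mu_s) @ (dst - mu_d).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(2)
src = rng.random((3, 20))                      # stand-in surface patch
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
dst = R_true @ src + t_true                    # transformed copy

R_est, t_est = rigid_register(src, dst)
```

Iterating this closed-form step inside a closest-point search gives ICP; the paper's genetic-algorithm search plays a similar role while being less prone to local minima.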

  10. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, the final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  11. A full field, 3-D velocimeter for microgravity crystallization experiments

    NASA Technical Reports Server (NTRS)

    Brodkey, Robert S.; Russ, Keith M.

    1991-01-01

    The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.

  12. A Parameterizable Framework for Replicated Experiments in Virtual 3D Environments

    NASA Astrophysics Data System (ADS)

    Biella, Daniel; Luther, Wolfram

    This paper reports on a parameterizable 3D framework that provides 3D content developers with an initial spatial starting configuration, metaphorical connectors for accessing exhibits or interactive 3D learning objects or experiments, and other optional 3D extensions, such as a multimedia room, a gallery, username identification tools and an avatar selection room. The framework is implemented in X3D and uses a Web-based content management system. It has been successfully used for an interactive virtual museum for key historical experiments and in two additional interactive e-learning implementations: an African arts museum and a virtual science centre. It can be shown that, by reusing the framework, the production costs for the latter two implementations can be significantly reduced and content designers can focus on developing educational content instead of producing cost-intensive out-of-focus 3D objects.

  13. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. PMID:26689324

  14. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, the stereoscopic 3D technique has been combined with curved displays. However, despite the increased need for 3D functions in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most previous studies have investigated their basic ergonomic aspects, such as viewing posture and distance, with only 2D views. It has generally been known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distance from the eyes of viewers to both edges of the screen are more natural on curved displays than on flat panel ones. For flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to the difference between the viewing distance from the center of the screen to the eyes of viewers and that from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  15. Students' Readiness to Move from Consumers to Producers of Digital Video Content: A Cross-Cultural Analysis of Irish and Indian Students

    ERIC Educational Resources Information Center

    Loftus, Maria; Tiernan, Peter; Cherian, Sebastian

    2014-01-01

    Evidence has shown that students have greatly increased their consumption of digital video, principally through video sharing sites. In parallel, students' participation in video sharing and creation has also risen. As educators, we need to question how this can be effectively translated into a positive learning experience for students,…

  16. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
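The DIBR alternative described above can be sketched in a few lines: each pixel of the single rendered view is shifted horizontally by a disparity derived from the z-buffer to synthesise the second eye's view. A real driver must also fill disocclusions and use proper depth-to-disparity mapping; this toy version, with an invented depth convention, just overwrites left to right.

```python
import numpy as np

def dibr_right_view(image, depth, max_disp=4):
    """Warp a single view into a right-eye view by depth-based pixel
    shifting. Here depth is in [0, 1] with larger = nearer, so nearer
    pixels shift further (an assumed, simplified convention)."""
    h, w = image.shape
    right = np.zeros_like(image)          # holes remain zero (disocclusions)
    disp = np.round(max_disp * depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disp[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right

h = w = 8
image = np.arange(h * w, dtype=float).reshape(h, w)
depth = np.full((h, w), 0.5)              # flat plane: uniform 2-pixel shift
right = dibr_right_view(image, depth)
```

The speed appeal is clear from the structure: one full scene render plus this cheap warp, instead of rendering the whole scene twice.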

  17. Gis-Based Smart Cartography Using 3d Modeling

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D city models have evolved to be important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography in current use is often not suitable for GIS because it is not geometrically and topologically correctly structured. The research aim is to structure and organize a numeric cartography in 3D for GIS and turn it into CityGML standardized features. The work is framed around a first phase of methodological analysis aimed at underlining which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifics, the translation into formal content was investigated using proprietary interchange software (SketchUp) to support guideline implementations for generating a 3D GIS structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality according to the previous standards, and edited when and where necessary. CAD files and shapefiles were converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The 3D GIS structure was managed in a GIS environment to run further spatial analyses and energy performance estimates not achievable in a 2D environment. In particular, geometrical building parameters (footprint, volume, etc.) are computed, and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out to deal with asbestos and home renovation charges and to show how the built 3D city model can support municipal managers with risk diagnosis of the present situation and the development of strategies for a sustainable redevelopment.
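One of the geometric building parameters mentioned can be computed directly from an LoD1 block model: the footprint area by the shoelace formula, times a uniform building height, gives the volume. The polygon and height below are invented for illustration.

```python
def footprint_area(poly):
    """Shoelace area of a simple polygon given as [(x, y), ...]."""
    area = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical L-shaped building footprint, coordinates in metres.
building = [(0, 0), (10, 0), (10, 6), (4, 6), (4, 9), (0, 9)]
area = footprint_area(building)      # 10x6 rectangle + 4x3 wing = 72 m^2
volume = area * 7.5                  # LoD1 block model with 7.5 m eave height
```

The same footprint-times-height logic underlies quick energy and volume estimates on CityGML LoD1 models, where each building is exactly such an extruded polygon.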

  18. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies, because at whatever scale in nature, all ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could help complex ecosystems be built easily and mimic the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as explorations of optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technologies are currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through my talk the audience will be able to sense their significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  19. Automated analysis and annotation of basketball video

    NASA Astrophysics Data System (ADS)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains hard using currently available techniques. However, a wide range of video has inherent structure such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.
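A building block of such annotation systems is shot-boundary detection. The paper works directly on MPEG compressed-domain features; the simplified pixel-domain sketch below illustrates the underlying idea of flagging a cut when the histogram difference between consecutive frames exceeds a threshold (all names and thresholds are illustrative assumptions, not the paper's method):

```python
def histogram(frame, bins=8, max_val=256):
    """Coarse intensity histogram of a flat list of pixel values."""
    h = [0] * bins
    step = max_val // bins
    for p in frame:
        h[min(p // step, bins - 1)] += 1
    return h

def shot_boundaries(frames, threshold=0.5):
    """Indices where the normalized histogram difference between
    consecutive frames exceeds `threshold` (a candidate cut)."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        n = max(1, len(frames[i]))
        # L1 histogram distance, normalized to [0, 1]
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * n)
        if diff > threshold:
            cuts.append(i)
    return cuts

dark = [10] * 100      # uniform dark frame
bright = [240] * 100   # uniform bright frame
print(shot_boundaries([dark, dark, bright, bright]))  # [2]
```

A compressed-domain variant would compute the same statistic from DC coefficients of I-frames rather than decoded pixels.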

  20. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations describing reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in the groundwater head distribution. This report presents a set of tutorial problems designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code; the code can then be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported into GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials described below provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  1. Social Properties of Mobile Video

    NASA Astrophysics Data System (ADS)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations with regard to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors, which help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the use of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for the adoption and design of mobile video technologies and services are discussed as well.

  2. Effects of using a 3D model on the performance of vision algorithms

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Lynch, Robert

    2015-05-01

    In previous work, we have shown how a 3D model can be built in real time and synchronized with the environment. This world model permits a robot to predict dynamics in its environment and classify behaviors. In this paper we evaluate the effect of such a 3D model on the accuracy and speed of various computer vision algorithms, including tracking, optical flow and stereo disparity. We report results based on the KITTI database and on our own videos.
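Evaluating optical-flow accuracy against ground truth, as done with the KITTI database above, is commonly reported as average endpoint error. A minimal sketch of that metric (the actual KITTI protocol also reports outlier percentages and uses validity masks, which are omitted here):

```python
import math

def average_endpoint_error(flow_pred, flow_gt):
    """Mean Euclidean distance between predicted and ground-truth
    flow vectors; a standard optical-flow accuracy metric."""
    assert len(flow_pred) == len(flow_gt)
    total = 0.0
    for (u1, v1), (u2, v2) in zip(flow_pred, flow_gt):
        total += math.hypot(u1 - u2, v1 - v2)
    return total / len(flow_pred)

pred = [(1.0, 0.0), (0.0, 2.0)]   # per-pixel (u, v) flow estimates
gt   = [(0.0, 0.0), (0.0, 0.0)]   # ground-truth flow
print(average_endpoint_error(pred, gt))  # 1.5
```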

  3. Adaptive image warping for hole prevention in 3D view synthesis.

    PubMed

    Plath, Nils; Knorr, Sebastian; Goldmann, Lutz; Sikora, Thomas

    2013-09-01

    Increasing popularity of 3D videos calls for new methods to ease the conversion of existing monocular video to stereoscopic or multi-view video. A popular way to convert video is given by depth image-based rendering methods, in which a depth map associated with an image frame is used to generate a virtual view. Because of the lack of knowledge about the 3D structure of a scene and its corresponding texture, however, the conversion of 2D video inevitably leads to holes in the resulting 3D image caused by newly exposed areas. The conversion process can be altered such that no holes become visible in the resulting 3D view by superimposing a regular grid over the depth map and deforming it. In this paper, an adaptive image warping approach is proposed as an improvement to the regular approach. The new algorithm exploits the smoothness of a typical depth map to reduce the complexity of the underlying optimization problem that must be solved to find the deformation required to prevent holes. This is achieved by splitting the depth map into blocks of homogeneous depth using quadtrees and running the optimization on the resulting adaptive grid. The results show that this approach leads to a considerable reduction of the computational complexity while maintaining the visual quality of the synthesized views. PMID:23782807
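The quadtree step described above (splitting the depth map into blocks of homogeneous depth) can be sketched briefly. This is an assumed formulation: a block is a leaf when its depth range falls under a tolerance `tol`, a parameter not specified in the abstract:

```python
def quadtree_blocks(depth, x, y, size, tol=1.0):
    """Recursively split a square depth-map region into blocks whose
    depth range is within `tol`; returns (x, y, size) leaf blocks."""
    vals = [depth[j][i] for j in range(y, y + size)
                        for i in range(x, x + size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += quadtree_blocks(depth, x + dx, y + dy, half, tol)
    return leaves

# 4x4 depth map: flat except one deep pixel in the top-left quadrant
d = [[1, 1, 1, 1],
     [1, 9, 1, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 1]]
blocks = quadtree_blocks(d, 0, 0, 4)
print(len(blocks))  # 7: three homogeneous quadrants plus 4 single pixels
```

The optimization is then run on grid vertices of these leaves rather than on every pixel, which is where the complexity reduction comes from.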

  4. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.

  5. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    PubMed Central

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-01-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
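The first step above (refining the estimated gaze point to the maximum-edge-strength position inside the error circle) can be sketched as follows. This assumes a precomputed edge-strength map; names and the circle parameterization are illustrative:

```python
def refine_gaze(edge_map, gaze, radius):
    """Within a circle of `radius` around the estimated gaze point,
    pick the pixel with maximum edge strength as the refined gaze."""
    gx, gy = gaze
    best, best_pos = -1.0, gaze
    for y in range(len(edge_map)):
        for x in range(len(edge_map[0])):
            if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2:
                if edge_map[y][x] > best:
                    best, best_pos = edge_map[y][x], (x, y)
    return best_pos

edges = [[0, 0, 0, 0],
         [0, 1, 5, 0],
         [0, 2, 0, 0],
         [0, 0, 0, 9]]   # the strong edge at (3, 3) lies outside the circle
print(refine_gaze(edges, (1, 1), 1))  # (2, 1)
```

In the paper the radius would be derived from the measured gaze estimation error of the tracking device.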

  6. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  7. 3D Dynamic Echocardiography with a Digitizer

    NASA Astrophysics Data System (ADS)

    Oshiro, Osamu; Matani, Ayumu; Chihara, Kunihiro

    1998-05-01

    In this paper, a three-dimensional (3D) dynamic ultrasound (US) imaging system is described, in which a US brightness-mode (B-mode) image triggered with an R-wave of the electrocardiogram (ECG) was obtained with an ultrasound diagnostic device and the location and orientation of the US probe were simultaneously measured with a 3D digitizer. The obtained B-mode image was then projected onto a virtual 3D space with the proposed interpolation algorithm using a Gaussian operator. Furthermore, a 3D image was presented on a cathode ray tube (CRT) and stored in virtual reality modeling language (VRML). We performed an experiment to reconstruct a 3D heart image in systole using this system. The experimental results indicate that the system enables the visualization of the 3D and internal structure of a heart viewed from any angle and has potential for use in dynamic imaging, intraoperative ultrasonography and tele-medicine.
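The interpolation step (projecting B-mode samples into a voxel grid with a Gaussian operator) can be sketched as a splatting operation. The kernel width, support radius, and accumulation scheme below are assumptions for illustration, not the paper's exact operator:

```python
import math

def splat_gaussian(volume, center, value, sigma=1.0, radius=2):
    """Accumulate a sample `value` into a 3D voxel grid with Gaussian
    weights around integer voxel `center` (a simple splatting operator)."""
    cx, cy, cz = center
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    for z in range(cz - radius, cz + radius + 1):
        for y in range(cy - radius, cy + radius + 1):
            for x in range(cx - radius, cx + radius + 1):
                if 0 <= x < nx and 0 <= y < ny and 0 <= z < nz:
                    d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                    volume[z][y][x] += value * math.exp(-d2 / (2 * sigma ** 2))

n = 5
vol = [[[0.0] * n for _ in range(n)] for _ in range(n)]
splat_gaussian(vol, (2, 2, 2), 10.0)
print(vol[2][2][2])                 # 10.0 at the sample location
print(vol[2][2][3] < vol[2][2][2])  # True: weight decays with distance
```

A full reconstruction would splat every B-mode pixel at the 3D position given by the digitizer's probe pose, then normalize by accumulated weights.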

  8. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  9. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  10. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer via an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  11. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  12. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  13. Preparation and 3D Tracking of Catalytic Swimming Devices.

    PubMed

    Campbell, Andrew; Archer, Richard; Ebbens, Stephen

    2016-01-01

    We report a method to prepare catalytically active Janus colloids that "swim" in fluids and describe how to determine their 3D motion using fluorescence microscopy. One commonly deployed way for catalytically active colloids to produce enhanced motion is via an asymmetrical distribution of catalyst. Here this is achieved by spin coating a dispersed layer of fluorescent polymeric colloids onto a flat planar substrate, and then using directional platinum vapor deposition to half-coat the exposed colloid surface, making a two-faced "Janus" structure. The Janus colloids are then re-suspended from the planar substrate into an aqueous solution containing hydrogen peroxide. Hydrogen peroxide serves as a fuel for the platinum catalyst, which decomposes it into water and oxygen, but only on one side of the colloid. The asymmetry results in gradients that produce enhanced motion, or "swimming". A fluorescence microscope, together with a video camera, is used to record the motion of individual colloids. The center of the fluorescent emission is found using image analysis to provide an x and y coordinate for each frame of the video. While keeping the microscope focal position fixed, the fluorescence emission from the colloid produces a characteristic concentric ring pattern which is subject to image analysis to determine the particle's relative z position. In this way 3D trajectories for the swimming colloid are obtained, allowing the swimming velocity to be accurately measured, and physical phenomena such as gravitaxis, which may bias the colloid's motion, to be detected. PMID:27404327
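The image-analysis steps above (x, y from the emission center; z from the ring size) can be sketched as follows. The linear ring-radius-to-z calibration is a hypothetical placeholder; in practice the defocus/ring-size relation must be calibrated for the specific optics:

```python
def centroid_and_ring_radius(image, threshold=0.5):
    """x, y from the intensity centroid of the thresholded emission;
    ring radius from the mean distance of bright pixels to that centroid."""
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    r = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in pts) / len(pts)
    return cx, cy, r

def z_from_radius(radius, slope=2.0, offset=0.0):
    """Hypothetical linear defocus calibration: ring radius -> relative z."""
    return slope * radius + offset

# Bright ring of four pixels centered on (2, 2)
img = [[0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0]]
cx, cy, r = centroid_and_ring_radius(img)
print(cx, cy, r)  # 2.0 2.0 1.0
```

Repeating this per frame yields the 3D trajectory from which swimming velocity is measured.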

  14. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
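The command format above (first word is the command name, the rest are data arguments) is easy to illustrate. This toy dispatcher is in the spirit of a one-line command language; the `sphere`/`move` commands are invented for illustration and are not the actual FastScript3D command set:

```python
def parse_command(line):
    """Split a one-line command string into (name, args): the first
    word is the command, the rest of the string holds the arguments."""
    parts = line.split()
    return parts[0], parts[1:]

class MiniScene:
    """Toy interpreter for a one-line text-string command language
    (illustrative only; not the actual FastScript3D commands)."""
    def __init__(self):
        self.objects = {}

    def execute(self, line):
        name, args = parse_command(line)
        if name == "sphere":          # sphere <id> <radius>
            self.objects[args[0]] = {"type": "sphere",
                                     "radius": float(args[1])}
        elif name == "move":          # move <id> <x> <y> <z>
            self.objects[args[0]]["pos"] = tuple(map(float, args[1:4]))
        else:
            raise ValueError("unknown command: " + name)

scene = MiniScene()
scene.execute("sphere ball 2.5")
scene.execute("move ball 1 0 3")
print(scene.objects["ball"]["pos"])  # (1.0, 0.0, 3.0)
```

Because commands are plain text strings, they are trivial to emit from a scripting language, which is the bridging role the abstract describes.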

  15. 3D PDF - a means of public access to geological 3D objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  16. Collaboration on Scene Graph Based 3D Data

    NASA Astrophysics Data System (ADS)

    Ammon, Lorenz; Bieri, Hanspeter

    Professional 3D digital content creation tools, like Alias Maya or discreet 3ds max, offer only limited support for a team of artists working on a 3D model collaboratively. We present a scene graph repository system that enables fine-grained collaboration on scenes built using standard 3D DCC tools by applying the concept of collaborative versions to a general attributed scene graph. Artists can work on the same scene in parallel without locking each other out. The artists' changes to a scene are regularly merged to ensure that all artists can see each other's progress and collaborate on current data. We introduce the concepts of indirect changes and indirect conflicts to systematically inspect the effects that collaborative changes have on a scene. Inspecting indirect conflicts helps maintain scene consistency by systematically looking for inconsistencies in the right places.

  17. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-bore-sighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-bore-sighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
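The core ICP loop (pair each point with its nearest neighbor, solve for the best rigid transform, apply, repeat) can be sketched briefly. This is a minimal 2D illustration under simplifying assumptions, not the fused visible/LADAR pipeline of the paper:

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form least-squares 2D rigid transform (rotation angle
    plus translation) aligning paired points src -> dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        sxx += ax * bx + ay * by      # dot term   -> cos(theta)
        sxy += ax * by - ay * bx      # cross term -> sin(theta)
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def icp(src, dst, iters=10):
    """Minimal ICP: nearest-neighbor pairing, rigid solve, apply, repeat."""
    pts = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda q, p=p:
                     (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) for p in pts]
        th, (tx, ty) = best_rigid_2d(pts, pairs)
        c, s = math.cos(th), math.sin(th)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(x + 0.3, y - 0.2) for x, y in square]
aligned = icp(square, shifted)
print(all(abs(a - b) < 1e-6 for p, q in zip(aligned, shifted)
          for a, b in zip(p, q)))  # True
```

A production 3D variant would use a k-d tree for the nearest-neighbor search and an SVD-based solve for the 3D rotation.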

  18. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High altitude balloons will be used to test the effects of microgravity on 3D printing, as well as parabolic flight tests. Zero pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small scale prototype can be sent into low-Earth orbit as a 3-U cube satellite. With the ability to 3D print in space demonstrated, future missions can launch production hardware through which the sustainability and durability of structures in space will be greatly improved.

  19. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32  ×  32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  20. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate a sequence of images that simulates a camera travelling through the scene from a single integral image. The application of this method makes it possible to improve the quality of 3D display images and videos.
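Choosing the focused plane from a captured integral image is conventionally done by shift-and-sum refocusing: each elemental image is displaced in proportion to its index and the stack is averaged, so that objects at the matching depth align and stay sharp. A 1D toy sketch of this idea (the paper's plenoptic transformation is more general; names are illustrative):

```python
def refocus(elemental, shift):
    """Shift-and-sum computational refocusing over a list of 1D
    elemental images; integer `shift` selects the focused depth plane."""
    width = len(elemental[0])
    out = [0.0] * width
    count = [0] * width
    for k, img in enumerate(elemental):
        d = k * shift                     # per-lens displacement
        for x in range(width):
            xs = x + d
            if 0 <= xs < width:
                out[x] += img[xs]
                count[x] += 1
    return [o / c if c else 0.0 for o, c in zip(out, count)]

# Point object seen with 1 px of parallax per elemental image
e0 = [0, 0, 5, 0, 0, 0]
e1 = [0, 0, 0, 5, 0, 0]
e2 = [0, 0, 0, 0, 5, 0]
print(refocus([e0, e1, e2], 1))  # [0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
```

With the matching shift the point reinforces at one position; any other shift smears it, which is exactly how the focused plane is selected.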

  1. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, as well as smartphones, and increasingly with so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  2. What Do Social Media Say About Makeovers? A Content Analysis of Cosmetic Surgery Videos and Viewers' Responses on YouTube.

    PubMed

    Wen, Nainan; Chia, Stella C; Hao, Xiaoming

    2015-01-01

This study examines portrayals of cosmetic surgery on YouTube, where we found a substantial number of cosmetic surgery videos. Most of the videos came from cosmetic surgeons who appeared to be aggressively using social media in their practices. Except for videos that explained cosmetic surgery procedures, most videos in our sample emphasized the benefits of cosmetic surgery, and only a small number addressed the risks involved. We also found that tactics of persuasive communication, namely message source and message sensation value (MSV), have been used in Web-based social media to attract viewers' attention and interest. Expert sources were used predominantly, although typical-consumer sources tended to generate greater viewer interest in cosmetic surgery than other types of message sources. High MSV, moreover, was found to increase a video's popularity. PMID:25257243

  3. 3-D seismology in the Arabian Gulf

    SciTech Connect

    Al-Husseini, M.; Chimblo, R.

    1995-08-01

Since 1977, when Aramco and GSI (Geophysical Services International) pioneered the first 3-D seismic survey in the Arabian Gulf under the guidance of Aramco's Chief Geophysicist John Hoke, 3-D seismology has been effectively used to map many complex subsurface geological phenomena. By the mid-1990s extensive 3-D surveys had been acquired in Abu Dhabi, Oman, Qatar and Saudi Arabia, and Bahrain, Kuwait and Dubai were preparing to record surveys over their fields. On the structural side, 3-D has refined seismic maps and focused fault and fracture systems, as well as outlined the distribution of facies, porosity and fluid saturation. In field development, 3-D has not only reduced drilling costs significantly, but has also improved the understanding of fluid behavior in the reservoir. In Oman, Petroleum Development Oman (PDO) has now acquired the first Gulf 4-D seismic survey (time-lapse 3-D survey) over the Yibal Field. The 4-D survey will allow PDO to directly monitor water encroachment in the highly faulted Cretaceous Shu'aiba reservoir. In exploration, 3-D seismology has resolved complex prospects with structural and stratigraphic complications and reduced the risk in the selection of drilling locations. The many case studies from Saudi Arabia, Oman, Qatar and the United Arab Emirates reviewed in this paper attest to the effectiveness of 3-D seismology in exploration and production, in clastic and carbonate reservoirs, and in the Mesozoic and Paleozoic.

  4. A 3D Geostatistical Mapping Tool

    1999-02-09

This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest-neighbor methods.
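Of the mapping algorithms listed, nearest-neighbor assignment is the simplest to illustrate: each node of the 3D grid takes the value of the closest sample point. A minimal sketch of that general idea (not this tool's implementation; the function name and array layout are assumptions):

```python
import numpy as np

def nearest_neighbor_3d(sample_xyz, sample_vals, grid_xyz):
    """Assign each 3D grid node the value of its nearest sample point.

    sample_xyz  : (n_samples, 3) coordinates of known data (e.g. well picks)
    sample_vals : (n_samples,) property values at those points
    grid_xyz    : (n_grid, 3) coordinates of the grid nodes to fill
    """
    # pairwise squared distances, shape (n_grid, n_samples), via broadcasting
    d2 = ((grid_xyz[:, None, :] - sample_xyz[None, :, :]) ** 2).sum(axis=2)
    # index of the closest sample for every grid node
    return sample_vals[np.argmin(d2, axis=1)]
```

Kriging and sequential Gaussian simulation refine this idea by weighting many neighbors according to a fitted spatial-covariance (variogram) model rather than copying the single closest value.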

  5. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  6. Stereoscopic Investigations of 3D Coulomb Balls

    SciTech Connect

    Kaeding, Sebastian; Melzer, Andre; Arp, Oliver; Block, Dietmar; Piel, Alexander

    2005-10-31

    In dusty plasmas particles are arranged due to the influence of external forces and the Coulomb interaction. Recently Arp et al. were able to generate 3D spherical dust clouds, so-called Coulomb balls. Here, we present measurements that reveal the full 3D particle trajectories from stereoscopic imaging.

  7. 3-D structures of planetary nebulae

    NASA Astrophysics Data System (ADS)

    Steffen, W.

    2016-07-01

Recent advances in the 3-D reconstruction of planetary nebulae are reviewed. We include not only results of 3-D reconstructions, but also the current techniques in terms of general methods and software. In order to obtain more accurate reconstructions, we suggest extending the widely used assumption of homologous nebula expansion to map spectroscopically measured velocity to position along the line of sight.
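Under the homologous-expansion assumption the abstract refers to, velocity is proportional to radius, so a knot's Doppler velocity maps linearly to its line-of-sight offset: multiplying the measured velocity by the nebula's kinematic age gives the distance travelled. A minimal sketch of that unit conversion (the function name and kinematic-age input are illustrative):

```python
def los_position(v_los_kms, kin_age_yr):
    """Line-of-sight offset of a nebular knot under homologous expansion
    (v proportional to r): Doppler velocity times kinematic age.

    v_los_kms  : line-of-sight velocity from spectroscopy [km/s]
    kin_age_yr : kinematic age of the nebula [yr]
    Returns the offset in astronomical units.
    """
    KMS_YR_TO_AU = 0.21095  # (1 km/s for 1 yr) expressed in AU
    return v_los_kms * kin_age_yr * KMS_YR_TO_AU
```

For example, a knot moving at 10 km/s in a nebula with a kinematic age of 1000 yr lies about 2100 AU from the central star along the line of sight. The review's point is that real nebulae can deviate from this strict proportionality, so the mapping should be generalized.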

  8. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  9. Static & Dynamic Response of 3D Solids

    1996-07-15

NIKE3D is a large-deformation 3D finite element code used to obtain the resulting displacements and stresses in multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature-dependent constitutive models are available.

  10. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  11. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  12. Extra Dimensions: 3D and Time in PDF Documentation

    SciTech Connect

    Graf, N.A.; /SLAC

    2012-04-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  13. Extra dimensions: 3D and time in PDF documentation

    NASA Astrophysics Data System (ADS)

    Graf, N. A.

    2011-01-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.