Science.gov

Sample records for 3D display systems

  1. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    P. Soltan, J. Trias, W. Robinson, W. Dahlke. The report describes a laser-based 3D volumetric display system in which laser-generated 3D volumetric images are drawn on a rotating double helix, with the display computer controlled for group viewing with the naked eye. Cited prior work includes "A Real Time Autostereoscopic Multiplanar 3D Display System" by Rodney Don Williams and Felix Garcia, Jr. (Costa Mesa, CA, July 1983).

  2. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information. These technologies yield a realistic representation of 3D objects and simplify our understanding of complex 3D objects and the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system capable of real-time user interaction. The design principle of this autostereoscopic multiview 3D display system is presented, together with details of its hardware/software architecture. A prototype is built and tested based upon multiple projectors and a horizontally anisotropic optical display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  3. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization based primarily on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360-degree range of viewpoints without any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen used in previous volumetric display designs.
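
    The mapping from a voxel's fixed (x, y, z) position inside the glass block to the DMD pixel that must illuminate it can be illustrated with a simple pinhole projection model. The sketch below is only a minimal illustration under that assumption; the function name, focal length, and DMD center are hypothetical and not taken from the paper.

```python
import numpy as np

def voxel_to_dmd_pixel(voxel_xyz, focal_px=1500.0, center=(512, 384)):
    """Map a voxel position (mm, in the light engine's camera-like frame,
    z > 0 pointing into the glass block) to the DMD pixel that illuminates it,
    assuming an idealized pinhole projection model (illustrative only)."""
    x, y, z = voxel_xyz
    u = focal_px * x / z + center[0]   # horizontal DMD coordinate
    v = focal_px * y / z + center[1]   # vertical DMD coordinate
    return int(round(u)), int(round(v))

# Example: a voxel 5 mm right, 2 mm up, 80 mm deep inside the block
print(voxel_to_dmd_pixel((5.0, 2.0, 80.0)))
```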

  4. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of those requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system was reviewed in depth and evaluated. Display technology improvements and risk reductions associated with the maturity of the technologies for the preferred display system design concept were identified.

  5. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution, full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, implementing it often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) design, multiple views for the 3D display are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards the 3D display screen. Therefore, the single projector generates an equivalent number of multiview images from multiple viewing directions, fulfilling the task of multiple projectors. An obvious advantage of the proposed SPM technique is a significant reduction in cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
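
    The time-multiplexing budget behind such an SPM design is simple arithmetic: each projector frame serves one view, so the per-view refresh rate is the projector frame rate divided by the number of views. The sketch below assumes an illustrative 5760 fps binary-pattern rate for a high-speed DMD projector; the figure is not from the paper.

```python
# Minimal sketch of the time-multiplexing budget for a single-projector
# multiview design. The numbers are illustrative assumptions, not the
# parameters used in the paper.
def per_view_refresh(projector_fps, num_views):
    """Each projector frame is devoted to one view in sequence, so the
    refresh rate seen by any single view is projector_fps / num_views."""
    return projector_fps / num_views

for views in (16, 32, 64):
    print(f"{views:2d} views @ 5760 fps DMD -> {per_view_refresh(5760, views):6.1f} Hz per view")
```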

  6. Implementation of active-type Lamina 3D display system.

    PubMed

    Yoon, Sangcheol; Baek, Hogil; Min, Sung-Wook; Park, Soon-Gi; Park, Min-Kyu; Yoo, Seong-Hyeon; Kim, Hak-Rin; Lee, Byoungho

    2015-06-15

    The Lamina 3D display is a new type of multi-layer 3D display that utilizes the polarization state as a new dimension of depth information. The Lamina 3D display system has several advantageous properties: it reduces the amount of data needed to represent a 3D image, it can be built easily from conventional projectors, and it has the potential to be applied to many applications. However, the system can be limited in depth range and viewing angle by the properties of the volume components used to express the image. In this paper, we propose a volume composed of layers of switchable diffusers to implement an active-type Lamina 3D display system. Because the diffusing rate of the layers is independent of the polarization state, a polarizer wheel is added to the proposed system so that each sectioned image is synchronized with the diffusing layer at its designated location. The imaging volume of the proposed system consists of five layers of polymer-dispersed liquid crystal, and the total size of the implemented volume is 24 x 18 x 12 mm^3. The proposed system achieves improved viewing quality, such as enhanced depth expression and a widened viewing angle.

  7. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays.

    PubMed

    Kim, Chi Jung; Park, Sangin; Won, Myeung Ju; Whang, Mincheol; Lee, Eui Chul

    2013-09-26

    Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations with heart rate, which indicated little difference between the two viewing conditions. In contrast, galvanic skin response and skin temperature showed weak positive correlations, with an average difference between 2D and 3D viewing. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.

  8. Design and Perception Testing of a Novel 3-D Autostereoscopic Holographic Display System

    DTIC Science & Technology

    1999-01-01

    U.S. Army Tank-Automotive Command (TACOM) researchers (Grace M. Bochenek, Thomas J. Meitzler, Paul Muench; Warren, MI 48397-5000) are in the early stages of developing an autostereoscopic 3D holographic visual display system. The current holographic system is being used to conduct 3D visual perception studies.

  9. Data acquirement and remodeling on volumetric 3D emissive display system

    NASA Astrophysics Data System (ADS)

    Yao, Yi; Liu, Xu; Lin, Yuanfang; Zhang, Huangzhu; Zhang, Xiaojie; Liu, Xiangdong

    2005-01-01

    Because present display technology projects 3D onto 2D, viewers' eyes are deceived by the loss of spatial information, so developing a real 3D display device is a significant advance for human vision. The monitor is based on an emissive panel with a 64 x 256 LED array. When rotated at a frequency of 10 Hz, it shows real 3D images with pixels at their exact spatial positions. The article presents a procedure in which software processes a 3D object and converts it into volumetric 3D formatted data for this system. To simulate the display on a PC, it also presents a program that remodels the object using OpenGL. An algorithm for faster processing and optimized rendering speed is also given. The monitor provides real 3D scenes that can be viewed from any angle, and this approach can be expected to challenge conventional monitors and open a new direction for display technology.
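
    Converting a 3D model into the volumetric format of a rotating-panel display amounts to binning points into (angular slice, radial column, vertical row) indices. The sketch below assumes an illustrative geometry and slice count; the array dimensions and normalization are hypothetical, not the authors' file format.

```python
import numpy as np

def cartesian_to_panel_index(points, n_slices=256, n_cols=64, n_rows=256,
                             radius=1.0, height=1.0):
    """Map Cartesian points (x, y, z) into (slice, column, row) indices of a
    rotating LED panel: 'slice' is the angular position of the panel,
    'column' the radial LED index, 'row' the vertical LED index.
    Geometry values are illustrative assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)        # angle around the rotation axis
    r = np.sqrt(x**2 + y**2)                           # distance from the axis
    s = (theta / (2 * np.pi) * n_slices).astype(int) % n_slices
    c = np.clip((r / radius * n_cols).astype(int), 0, n_cols - 1)
    w = np.clip((z / height * n_rows).astype(int), 0, n_rows - 1)
    return np.stack([s, c, w], axis=1)

pts = np.random.rand(5, 3) * [0.5, 0.5, 1.0]           # a few sample points
print(cartesian_to_panel_index(pts))
```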

  10. Air-touch interaction system for integral imaging 3D display

    NASA Astrophysics Data System (ADS)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for a tabletop-type integral imaging 3D display. This system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. In this system, multi-layer B-spline surface approximation is applied to the input hand image to detect fingertips and gestures at heights of less than 10 cm above the screen. The proposed system can serve as an effective human-computer interaction method for tabletop-type 3D displays.

  11. Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array.

    PubMed

    Choi, Heejin; Park, Jae-Hyeung; Kim, Joohwan; Cho, Seong-Woo; Lee, Byoungho

    2005-10-17

    A wide-viewing-angle 3D/2D convertible display system with a thin structure is proposed that is able to display both three-dimensional and two-dimensional images. With a transparent display device placed in front of a conventional integral imaging system, it is possible to display planar images using the conventional system as a backlight source. The proposed method is verified experimentally and compared with the conventional one.

  12. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to present the pilot with a comprehensive image of the surrounding world without misleading or cluttering information. 3D data that can be attributed, i.e. classified, to terrain or to predefined obstacle classes is depicted differently from data belonging to elevated objects that could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time window, which on the one hand yields a cohesive displayed structure and on the other hand allows moving objects to be displayed correctly. Color coding or texturing can also be applied based on known terrain features such as land use.
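
    The accumulation of classified points over a time window can be sketched with a simple ring buffer keyed by timestamp, as below. The class labels, window length, and data layout are illustrative assumptions, not details of the DVE system described above.

```python
from collections import deque
import time

class PointAccumulator:
    """Keep classified LiDAR points for a fixed time window so that elevated,
    unclassified returns build up a cohesive structure while stale points
    from moving objects age out. Window length is an illustrative assumption."""
    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self.buffer = deque()          # entries: (timestamp, point, label)

    def add(self, points_with_labels, timestamp=None):
        t = time.time() if timestamp is None else timestamp
        for point, label in points_with_labels:
            self.buffer.append((t, point, label))
        self._expire(t)

    def _expire(self, now):
        while self.buffer and now - self.buffer[0][0] > self.window_s:
            self.buffer.popleft()

    def by_label(self, wanted):
        """Return the currently accumulated points of one class,
        e.g. 'terrain', 'obstacle', or 'unclassified'."""
        return [p for _, p, label in self.buffer if label == wanted]

acc = PointAccumulator(window_s=2.0)
acc.add([((10.0, 3.2, 1.5), "unclassified"), ((12.0, 0.0, 0.0), "terrain")])
print(len(acc.by_label("unclassified")))
```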

  13. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    Several studies have reported gaze tracking techniques using a monocular camera or a stereo camera. The most widely used gaze estimation techniques are based on PCCR (pupil center and corneal reflection). These techniques address gaze tracking on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in a 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in a 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glasses-free stereo display. The estimation of gaze direction and gaze depth from both eyes is a new and important research topic for gaze-based 3D interaction. We present our approach for the estimation of gaze direction and gaze depth and show experimental results.
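
    One generic way to estimate gaze depth from both eyes is to triangulate the two gaze rays and take the midpoint of their shortest connecting segment (a vergence-based estimate). The sketch below shows that textbook computation only; it is not the estimation approach developed in the paper.

```python
import numpy as np

def gaze_depth(p_left, d_left, p_right, d_right):
    """Estimate the 3D gaze point as the midpoint of the shortest segment
    between the two gaze rays (eye position p, direction d). This
    vergence-based triangulation is a generic sketch, not the paper's method."""
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                     # rays are (nearly) parallel
        return None
    s = (b * e - c * d) / denom               # parameter along the left ray
    t = (a * e - b * d) / denom               # parameter along the right ray
    return 0.5 * ((p_left + s * d1) + (p_right + t * d2))

# Example: eyes 65 mm apart, both converging on a point ~400 mm ahead
left, right = np.array([-32.5, 0, 0.0]), np.array([32.5, 0, 0.0])
target = np.array([0.0, 0.0, 400.0])
print(gaze_depth(left, target - left, right, target - right))
```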

  14. Crosstalk reduction in auto-stereoscopic projection 3D display system.

    PubMed

    Lee, Kwang-Hoon; Park, Youngsik; Lee, Hyoung; Yoon, Seon Kyu; Kim, Sung-Kyu

    2012-08-27

    In autostereoscopic multi-view 3D display systems, crosstalk and low resolution are obstacles to obtaining a clear depth image with sufficient motion parallax. To solve these problems, we propose a projection-type autostereoscopic multi-view 3D display system in which a hybrid optical system combines a lenticular lens and a parallax barrier, together with multiple projectors. Condensing the width of the projected unit-pixel image within each lenslet by means of the hybrid optics is the core concept of this proposal. As a result, point crosstalk is improved by 53% and resolution is increased by up to 5 times.

  15. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel autostereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. Densely spaced viewing points provide smooth motion parallax and larger image depth without blur. The basic principle of the stereoscopic display is described first. Then, the design architecture, including hardware and software, is presented. The system consists of a frontal projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the viewing zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one serves as the main controller, while the other five client PCs process 30 channels of signals and transmit them to the projector array. A natural 3D scene with more than 1.5 m of image depth is then perceived in real time on the frontal projection lenticular screen. The control section is presented in detail, including parallax adjustment, system synchronization, and distortion correction. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.

  16. Full-color autostereoscopic 3D display system using color-dispersion-compensated synthetic phase holograms.

    PubMed

    Choi, Kyongsik; Kim, Hwi; Lee, Byoungho

    2004-10-18

    A novel full-color autostereoscopic three-dimensional (3D) display system has been developed using color-dispersion-compensated (CDC) synthetic phase holograms (SPHs) on a phase-type spatial light modulator. To design the CDC phase holograms, we used a modified iterative Fourier transform algorithm with scaling constants and phase quantization level constraints. We obtained a high diffraction efficiency (~90.04%), a large signal-to-noise ratio (~9.57 dB), and a low reconstruction error (~0.0011) from our simulation results. Each optimized phase hologram was synthesized with each CDC directional hologram for red, green, and blue wavelengths for full-color autostereoscopic 3D display. The CDC SPHs were composed and modulated by only one phase-type spatial light modulator. We have demonstrated experimentally that the designed CDC SPHs can generate full-color autostereoscopic 3D images and video frames very well, without any use of glasses.
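
    The iterative Fourier transform algorithm (IFTA) family referenced here alternates between the hologram plane and the image plane, keeping only the phase in the hologram plane and re-imposing the target amplitude in the image plane. The sketch below is a generic Gerchberg-Saxton-style version with optional crude phase quantization; the paper's scaling constants and color-dispersion compensation are not reproduced.

```python
import numpy as np

def iterative_phase_hologram(target_amplitude, iterations=50, levels=None):
    """Generic Gerchberg-Saxton-style IFTA sketch for a phase-only hologram.
    'levels' crudely mimics phase quantization; this is an illustration,
    not the modified IFTA described in the paper."""
    rng = np.random.default_rng(0)
    field = target_amplitude * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amplitude.shape))
    for _ in range(iterations):
        holo_phase = np.angle(np.fft.ifft2(field))          # back-propagate, keep phase only
        if levels:                                           # optional phase quantization
            step = 2 * np.pi / levels
            holo_phase = np.round(holo_phase / step) * step
        recon = np.fft.fft2(np.exp(1j * holo_phase))         # forward-propagate the phase hologram
        field = target_amplitude * np.exp(1j * np.angle(recon))  # enforce target amplitude
    return holo_phase

target = np.zeros((128, 128)); target[48:80, 48:80] = 1.0    # simple square target
phase = iterative_phase_hologram(target, iterations=30, levels=8)
print(phase.shape, phase.min(), phase.max())
```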

  17. Compact multi-projection 3D display system with light-guide projection.

    PubMed

    Lee, Chang-Kun; Park, Soon-gi; Moon, Seokil; Hong, Jong-Young; Lee, Byoungho

    2015-11-02

    We propose a compact multi-projection based multi-view 3D display system using an optical light-guide, and perform an analysis of the image characteristics for distortion compensation via an optically equivalent model of the light-guide. The projected image traveling through the light-guide experiences multiple total internal reflections at the interface. As a result, the projection distance in the horizontal direction is effectively reduced to the thickness of the light-guide, and the projection part of the multi-projection based multi-view 3D display system is minimized. In addition, we deduce an equivalent model of the light-guide to simplify the analysis of image distortion within it. From the equivalent model, the focus of the image is adjusted, and pre-distorted images for each projection unit are calculated by two-step image rectification in air and in the material. The distortion-compensated view images appear on the exit surface of the light-guide when the light-guide is located in the intended position. Viewing zones are generated by combining the light-guide projection system, a vertical diffuser, and a Fresnel lens. The feasibility of the proposed method is experimentally verified, and a ten-view 3D display system with a minimized structure is implemented.

  18. Key factors in the design of a LED volumetric 3D display system

    NASA Astrophysics Data System (ADS)

    Lin, Yuanfang; Liu, Xu; Yao, Yi; Zhang, Xiaojie; Liu, Xiangdong; Lin, Fengchun

    2005-01-01

    Through careful consideration of key factors that affect voxel attributes and image quality, a volumetric three-dimensional (3D) display system employing the rotation of a two-dimensional (2D) thin active panel was developed. It was designed as a lower-cost 3D visualization platform for experimentation and demonstration. Light emitting diodes (LEDs) were arranged into a 256 x 64 dot matrix on a single surface of the panel, which was positioned symmetrically about the axis of rotation. The motor and the necessary supporting structures were located below the panel. LEDs with a 500 ns response time, external dimensions of 1.6 mm x 0.8 mm x 0.6 mm, and horizontal and vertical spacings of 0.38 mm and 0.43 mm were adopted. The system is functional, providing 512 x 256 x 64, i.e., over 8 million, addressable voxels within a 292 mm x 165 mm cylindrical volume at a refresh frequency in excess of 16 Hz. Due to persistence of vision, momentarily addressed voxels are perceived and fused into a 3D image. Many static and dynamic 3D scenes were displayed, which can be viewed directly from any position with few occlusion zones and dead zones. Important depth cues such as binocular disparity and motion parallax are satisfied naturally.
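
    The quoted figures imply a straightforward timing budget per angular slice, which can be checked with a few lines of arithmetic (the comparison itself is only an illustration):

```python
# Timing-budget sketch using the figures quoted in the record: 512 angular
# positions per revolution, a refresh (rotation) frequency of 16 Hz, and a
# 500 ns LED response time.
angular_positions = 512
refresh_hz = 16
led_response_s = 500e-9

slice_time_s = 1.0 / (refresh_hz * angular_positions)   # dwell time per angular slice
print(f"time per angular slice: {slice_time_s * 1e6:.1f} us")
print(f"LED response / slice time: {led_response_s / slice_time_s:.4f}")
# ~122 us per slice vs 0.5 us LED response: switching speed is not the bottleneck.
```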

  19. 2D/3D switchable displays

    NASA Astrophysics Data System (ADS)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for wide market acceptance of 3D displays is the ability to switch between 3D and full-resolution 2D. In this paper we present a robust and cost-effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel equipped with switchable LC-filled lenticular lenses. We discuss 3D image quality, with a focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in cases where slanting the lenticulars and optimizing the lens design are not sufficient. We discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics, and the residual lens effect.

  20. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on this optical-sensor system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED marker, regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm support multiple users. Finally, a bare-finger touch system with a sequential illuminator enables interaction with autostereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  1. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When a linearly polarized image from a projector passes through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. Therefore, the optical path of the image can be switched, shifting the viewing zone in a lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device from a liquid crystal (LC) display. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  2. Spatioangular Prefiltering for Multiview 3D Displays.

    PubMed

    Ramachandra, Vikas; Hirakawa, Keigo; Zwicker, Matthias; Nguyen, Truong

    2011-05-01

    In this paper, we analyze the reproduction of light fields on multiview 3D displays. A three-way interaction between the input light field signal (which is often aliased), the joint spatioangular sampling grids of multiview 3D displays, and the inter-view light leakage in modern multiview 3D displays is characterized in the joint spatioangular frequency domain. Reconstruction of light fields by all physical 3D displays is prone to light leakage, which means that the reconstruction low-pass filter implemented by the display is too broad in the angular domain. As a result, 3D displays excessively attenuate angular frequencies. Our analysis shows that this reduces the sharpness of the images shown on 3D displays. In this paper, stereoscopic image recovery is recast as a problem of joint spatioangular signal reconstruction. The combination of the 3D display point spread function and the human visual system provides the narrow-band low-pass filter that removes spectral replicas in the reconstructed light field on the multiview display. The non-ideality of this filter is corrected with the proposed prefiltering. The proposed light field reconstruction method performs light field antialiasing as well as angular sharpening to compensate for the non-ideal response of the 3D display. The union-of-cosets approach, which has been used earlier by others, is employed here to model the non-rectangular spatioangular sampling grids on a multiview display in a generic fashion. We confirm the effectiveness of our approach in simulation and in physical hardware, and demonstrate improvement over existing techniques.
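
    A minimal stand-in for spatioangular prefiltering is a separable low-pass applied in the joint (angle, space) frequency domain of the light field. The sketch below uses a generic Gaussian filter with hypothetical cutoffs; the paper derives the filter from the display's point spread function and the human visual system instead.

```python
import numpy as np

def spatioangular_prefilter(light_field, sigma_space=0.25, sigma_angle=0.15):
    """light_field: 2D array indexed (angle/view, spatial position).
    Apply a Gaussian low-pass in the joint spatio-angular frequency domain.
    This is a generic anti-aliasing stand-in, not the display-aware filter
    derived in the paper."""
    fa = np.fft.fftfreq(light_field.shape[0])[:, None]   # angular frequencies
    fx = np.fft.fftfreq(light_field.shape[1])[None, :]   # spatial frequencies
    H = np.exp(-0.5 * ((fa / sigma_angle) ** 2 + (fx / sigma_space) ** 2))
    spectrum = np.fft.fft2(light_field)
    return np.real(np.fft.ifft2(spectrum * H))

lf = np.random.rand(9, 256)                              # 9 views x 256 pixels (toy data)
filtered = spatioangular_prefilter(lf)
print(filtered.shape)
```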

  3. Holographic display system for dynamic synthesis of 3D light fields with increased space bandwidth product.

    PubMed

    Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B

    2016-06-27

    We present a new method for the generation of a dynamic wave field with high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs) each having a comparably low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration which is composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLM and the gap between the active areas of multiple SLMs. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated.

  4. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users great enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to presenting many views is to measure the position of the observer with a viewer-tracking sensor; such tracking is essential for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to project the content accurately onto the user's left and right eyes, this study develops a real-time viewer tracking technique that allows the user to move around freely while watching the autostereoscopic display. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. An Edge Orientation Histogram (EOH) feature with Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from the OpenCV library developed by Intel is used to detect the human face, and rotated images are used to enhance detection accuracy. The viewer tracking process achieves frame rates of up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still influenced by varying environmental conditions, the accuracy, robustness, and efficiency of the viewer-tracking system are evaluated in this study.
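
    The Haar-cascade face detector from OpenCV mentioned above can be exercised with a few lines; the loop below is a minimal webcam sketch of that building block only, and the study's eigenspace, block-matching, and EOH/Real AdaBoost extensions are not reproduced.

```python
import cv2

# Minimal face-detection loop with OpenCV's stock Haar cascade. Requires a webcam.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                    # mark the tracked face region(s)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("viewer tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```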

  5. Cylindrical liquid crystal lenses system for autostereoscopic 2D/3D display

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Wei; Huang, Yi-Pai; Chang, Yu-Cheng; Wang, Po-Hao; Chen, Po-Chuan; Tsai, Chao-Hsu

    2012-06-01

    A liquid crystal lens system that can easily be electrically controlled for an autostereoscopic 2D/3D switchable display is proposed. The high-resistance liquid crystal (HR-LC) lens, which uses fewer control electrodes and a high-resistance layer coated between the control electrodes, is adopted in this paper. Compared with a traditional LC lens, the HR-LC lens provides a smooth electric-potential distribution within the LC layer when driven. Hence, the proposed HR-LC lens has lower circuit complexity and a low driving voltage, and good optical performance can also be obtained. In addition, the proposed driving method, called the dual-directional overdriving method, reduces the switching time by applying a large voltage to the cell. Consequently, the total switching time can be further reduced to around 2 seconds. It is believed that the LC lens system has high potential for the future.

  6. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    Industrial welded metal bellows take the form of a flexible pipeline; the most common form consists of pairs of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc welding operation may cause dangerous accidents and unpleasant fumes. Furthermore, during the welding operation, workers have to observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding rod tip and of the bellows fixed on the jig, respectively. Welding while looking through a microscope makes workers tired. To improve a working environment in which workers sit in an uncomfortable position, and to improve productivity, we introduced 3D display and image processing. The main purpose of the system is not only to maximize industrial productivity with accuracy but also to maintain safety standards through full automation of the work by remote control.

  7. Full parallax viewing-angle enhanced computer-generated holographic 3D display system using integral lens array.

    PubMed

    Choi, Kyongsik; Kim, Joohwan; Lim, Yongjun; Lee, Byoungho

    2005-12-26

    A novel full-parallax, viewing-angle-enhanced computer-generated holographic (CGH) three-dimensional (3D) display system is proposed and implemented by combining an integral lens array and colorized synthetic phase holograms displayed on a phase-type spatial light modulator. For analyzing the viewing-angle limitations of our CGH 3D display system, we provide some theoretical background and introduce a simple ray-tracing method for 3D image reconstruction. With our method we can obtain continuously varying full-parallax 3D images with a viewing angle of about +/-6 degrees. To design the colorized phase holograms, we used a modified iterative Fourier transform algorithm, and we obtained a high diffraction efficiency (~92.5%) and a large signal-to-noise ratio (~11 dB) from our simulation results. Finally, we show experimental results that verify our concept and demonstrate the full-parallax, viewing-angle-enhanced color CGH display system.

  8. Measurement of Contrast Ratios for 3D Display

    DTIC Science & Technology

    2000-07-01

    Keywords: stereoscopic, autostereoscopic, 3D, display. 3D image display devices have wide applications in the medical and entertainment areas. Binocular (stereoscopic) and autostereoscopic displays can suffer from viewer and system crosstalk. In many 3D display systems, viewer crosstalk is an important issue for good performance, especially in autostereoscopic displays.

  9. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    This paper gives a detailed review of current 3D display methods for sequential 2D medical images and of new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping, and distributed collaborative rendering are discussed in depth. For two kinds of medical applications, real-time navigation systems and high-fidelity diagnosis in computer-aided surgery, different 3D display methods are presented.
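
    As a small, concrete illustration of the volume-rendering family of methods the review covers, a maximum intensity projection collapses a stack of sequential 2D slices along one axis; the example below uses synthetic data and is not taken from the paper.

```python
import numpy as np

def maximum_intensity_projection(slices, axis=0):
    """slices: 3D array of stacked 2D medical images (slice, row, col).
    A maximum intensity projection is one of the simplest volume-rendering
    style views of sequential 2D images, shown only as an illustration."""
    volume = np.asarray(slices, dtype=float)
    return volume.max(axis=axis)

volume = np.random.rand(40, 128, 128)     # toy stack of 40 slices
mip = maximum_intensity_projection(volume)
print(mip.shape)                           # (128, 128) projected image
```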

  10. Analysis of multiple recording methods for full resolution multi-view autostereoscopic 3D display system incorporating VHOE

    NASA Astrophysics Data System (ADS)

    Hwang, Yong Seok; Cho, Kyu Ha; Kim, Eun Soo

    2014-03-01

    In this paper, we propose a multiple recording process for photopolymer to realize a full-color, multi-view autostereoscopic 3D display system based on a VHOE (volume holographic optical element). To overcome problems such as the low resolution and limited viewing zone of conventional glasses-free 3D displays, we designed the multiple recording conditions of the VHOE for multi-view display. It is verified that the VHOE can be fabricated optically by angle-multiplexed recording of pre-designed multiple viewing zones, recorded uniformly through an optimized exposure-time scheduling scheme. A VHOE-based backlight system for a four-view stereoscopic display is implemented, in which the output beams from the light guide plate (LGP), which act as reference beams, are sequentially synchronized with the respective stereo images displayed on the LCD panel.

  11. Design of monocular multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2001-06-01

    A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have developed a 3D HMD system using a monocular stereoscopic display. This paper shows that a 3D vision system using the monocular stereoscopic display and a capturing camera builds a 3D virtual space for tele-manipulation using captured real 3D images. In this paper, we propose the monocular stereoscopic 3D display and capturing camera for a tele-manipulation system. In addition, we describe the results of depth estimation using multi-focus retinal images.

  12. Reality and Surreality of 3-D Displays: Holodeck and Beyond

    DTIC Science & Technology

    2000-01-01

    Keywords: true 3D displays, multiplexed 2D displays, autostereoscopic. The Holodeck reflects the reality that significantly better 3D display systems are possible. Because head gear is not accepted by many users, 3D approaches that are autostereoscopic (that is, requiring no head gear) are preferred. The challenges noted throughout the paper are expected to be steadily overcome, leading toward true 3D, autostereoscopic (no head gear) monitors.

  13. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects on a glasses-free 3D display screen can give viewers a realistic viewing experience, as if they were viewing a real-world scene. Although technologies for 3D acquisition and 3D display have advanced rapidly in recent years, little effort has gone into studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  14. Depth-expression characteristics of multi-projection 3D display systems [invited].

    PubMed

    Park, Soon-gi; Hong, Jong-Young; Lee, Chang-Kun; Miranda, Matheus; Kim, Youngmin; Lee, Byoungho

    2014-09-20

    A multi-projection display consists of multiple projection units. Because of the large amount of data, a multi-projection system shows large, high-quality images. According to the projection geometry and the optical configuration, multi-projection systems show different viewing characteristics for generated three-dimensional images. In this paper, we analyzed the various projection geometries of multi-projection systems, and explained the different depth-expression characteristics for each individual projection geometry. We also demonstrated the depth-expression characteristic of an experimental multi-projection system.

  15. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons 'seismoglyphs' to suggest visual objects built from the three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using real-time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California as real-time data are piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time, and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues indicating wave arrivals and near-real-time earthquake detection. Future work will involve adding phase detections, network triggers, and near-real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and Earth structure.
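
    Two of the glyph attributes described, the time of peak amplitude (mapped to color) and the polarization of the wave field (mapped to spatial extent), can be estimated from three-component data as sketched below; this generic covariance-based estimate is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def seismoglyph_attributes(ne_z, sample_rate_hz):
    """ne_z: array of shape (n_samples, 3) holding north-south, east-west and
    vertical ground motion. Returns the time of peak amplitude and the
    dominant polarization direction (principal eigenvector of the covariance)."""
    amplitude = np.linalg.norm(ne_z, axis=1)
    t_peak = np.argmax(amplitude) / sample_rate_hz
    cov = np.cov(ne_z.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    polarization = eigvecs[:, np.argmax(eigvals)]   # direction of largest motion
    return t_peak, polarization

rng = np.random.default_rng(1)
trace = rng.normal(size=(2000, 3)) * [1.0, 0.3, 0.2]   # toy 3-component record
print(seismoglyph_attributes(trace, sample_rate_hz=100.0))
```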

  16. Spectroradiometric characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements have been made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal direction of the plane of the display. In the study, two different models of autostereoscopic 3D displays of different sizes and resolutions were used, with measurements made using a spectroradiometer (PhotoResearch PR-670 SpectraScan). From the measurements, goniometric results were recorded for luminance contrast, and the fundamental hypotheses for characterizing the displays were evaluated: independence of the RGB channels and their constancy. The results show that the display with the lower angular variability in contrast ratio and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with measurement angle. For both displays, the parameters evaluated consistently showed lower angular variability in the 2D mode than in the 3D mode.

  17. Photorefractive Polymers for Updateable 3D Displays

    DTIC Science & Technology

    2010-02-24

    Final performance report covering 01-01-2007 to 11-30-2009. During the tenure of this project, a large-area updateable 3D color display was developed for the first time using a new co-polymer, and photorefractive polymers for updateable 3D displays were demonstrated. Moreover, a 6 inch x 6 inch sample was fabricated, demonstrating the feasibility of making large-area 3D displays.

  18. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on an optically addressed bi-stable display that does not need any power to hold an image after it has been uploaded. Recently, the demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity in image processing at both the hardware and software levels. In this Letter, we present a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from different domains of the image in different ways. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied to many applications, e.g., 3D bi-stable displays and security elements.

  19. Improved Second-Generation 3-D Volumetric Display System. Revision 2

    DTIC Science & Technology

    1998-10-01

    A factor of 0.7 is used to account for the 514-nm laser wavelength instead of the 555-nm peak of the photopic curve. Laser output was measured over a 40-minute period; the spikes in the curves are due to a defective power meter and are not real. The system produces visible three-dimensional images. A primary element of the helical display system is a rotating, helically curved screen, referred to as the "helix".

  20. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The processor can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. This system shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  1. Bright 3D display, native and integrated on-chip or system-level

    NASA Astrophysics Data System (ADS)

    Ellwood, Sutherland C., Jr.

    2011-06-01

    Photonica, Inc. has pioneered the use of magneto-optics and hybrid technologies in visual display systems to create and address arrays of high-speed, solid-state modulators up to 1,000 times faster than DMD/DLP, yielding high frame rates and extremely high net native resolution, allowing full duplication of right-eye and left-eye modulators at 1080p, DCI 2K, 4K, and other specified resolutions. The technology enables high transmission (brightness) per frame. In one version, each integrated image-engine assembly processes binocular frames simultaneously, employing simultaneous right-eye/left-eye channels, either polarization-based or "Infitec" color-band-based channels, as well as pixel-vector-based systems. In another version, a multi-chip, massively parallel signal-processing architecture integrates pixel-signal channels to yield simultaneous binocular frames; this may be combined with on-chip integration. Channels are integrated either through optical elements on-chip, through a fiber network, or both.

  2. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in presenting high-dimensional data and graphics because they lack true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to convey spatial relationships or depth information correctly and effectively. Essentially, 2D displays rely on the human brain's ability to piece together a 3D representation from 2D images. Despite the impressive capability of the human visual system, its perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system for truthfully perceiving 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of complex 3D objects and the spatial relationships among them.

  3. Automated System for Holographic Lightfield 3D Display Metrology (HL3DM)

    DTIC Science & Technology

    2015-04-01

    Excerpts from the report describe the measurement setup for holographic light-field 3D display metrology: optical elements such as an array of lenses; rays striking two types of surfaces, for example a diffused surface; a color photometer with focusing optics; and array detectors (cameras). Photometric cameras are expected to be the most useful instruments for the type of displays to be measured, including cameras with multiple sensor arrays of any commercial technology (CCD, CMOS, etc.).

  4. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    NASA Astrophysics Data System (ADS)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. However, in pathless terrain, driving becomes challenging without stereoscopic perception. Stereoscopic imaging has long been a well-known technique with well understood physical and physiological parameters. Recently, considerable commercial attention has been observed, especially in display techniques, and the commercial market is already flooded with systems based on goggle-aided 3D viewing. However, their use is limited for military applications since goggles are not accepted by military users for several reasons. The proposed uncooled thermal imaging stereoscopic camera, with a geometrical resolution of 640x480 pixels, fits perfectly to the autostereoscopic display with 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and right eyes of the observer. This allows a stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  5. Upper Limb-Hand 3D Display System for Biomimetic Myoelectric Hand Simulator

    DTIC Science & Technology

    2009-02-13

    Fragments of the report describe obtaining the position of the wrist, even during external pronation of the arm, by means of 8 LEDs. The references cited include work on amputees using myoelectric and conventional prostheses (Arch. Phys. Med. Rehabil. 64, 243-248, 1983), work by M. Nider on artificial substitution, and Abul-Haj and N. Hogan, "An Emulator System for Developing Improved Elbow-Prosthesis Designs" (IEEE Trans. Biomed. Eng. 34, 724-737, 1987).

  6. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
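
    One small piece of what such a framework automates is intensity blending in the overlap region of tiled projectors. The hand-rolled linear cross-fade below illustrates the idea under assumed tile widths; it is not the paper's optimization.

```python
import numpy as np

def overlap_blend_masks(width, overlap):
    """Per-pixel intensity masks for two horizontally tiled projectors whose
    images overlap by 'overlap' pixels: a linear cross-fade in the overlap
    keeps the summed brightness constant."""
    left = np.ones(width)
    right = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)
    left[width - overlap:] = ramp          # left projector fades out
    right[:overlap] = ramp[::-1]           # right projector fades in
    return left, right

l, r = overlap_blend_masks(width=1920, overlap=200)
print(l[-1], r[0], (l[1920 - 200:] + r[:200]).min())   # fades reach 0; summed brightness stays 1
```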

  7. 3D optical see-through head-mounted display based augmented reality system and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenliang; Weng, Dongdong; Liu, Yue; Xiang, Li

    2015-07-01

    The combination of health and entertainment becomes possible with the development of wearable augmented reality equipment and corresponding application software. In this paper, we implemented a fast calibration, extended from SPAAM, for an optical see-through head-mounted display (OSTHMD) made in our lab. During the calibration, tracking and recognition of natural targets were used, and the spatial corresponding points were set at dispersed, well-distributed positions. We evaluated the precision of this calibration for view angles ranging from 0 to 70 degrees. Relying on these results, we calculated the position of the human eyes relative to the world coordinate system and rendered 3D objects of arbitrary complexity in real time on the OSTHMD, accurately matched to the real world. Finally, we report user satisfaction with our device for combining entertainment with the prevention of cervical vertebra disease, based on user feedback.

  8. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  9. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image at a chosen point after taking the picture), the light field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data are displayed.
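
    Refocusing in the real (spatial) domain is commonly implemented as shift-and-add over the sub-aperture images behind the lens array: shift each sub-aperture image in proportion to its offset from the array center and average. The sketch below is a generic nearest-pixel version of that widely used algorithm, not the author's actual processing code.

```python
import numpy as np

def shift_and_add_refocus(sub_aperture, alpha):
    """sub_aperture: array (U, V, H, W) of sub-aperture images taken behind a
    (U x V) lens array. Shift each image in proportion to its offset from the
    array center and average; 'alpha' selects the refocus plane."""
    U, V, H, W = sub_aperture.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))       # integer-pixel shift (nearest)
            dx = int(round(alpha * (v - cv)))
            out += np.roll(sub_aperture[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)                   # toy 5x5 light field
print(shift_and_add_refocus(lf, alpha=1.5).shape)
```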

  10. Depth-fused 3D imagery on an immaterial display.

    PubMed

    Lee, Cha; Diverdi, Stephen; Höllerer, Tobias

    2009-01-01

    We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, examining the potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype system in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors.
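
    For reference, the basic two-screen depth-fused 3D luminance split, which the paper generalizes to arbitrary screen configurations and viewpoints, can be sketched as follows; the function name and the linear weighting are illustrative.

        import numpy as np

        def dfd_split(luminance, depth, d_front, d_back):
            # Linear luminance split between front and back screens; the perceived
            # depth of the fused point moves toward the brighter screen.
            w = np.clip((depth - d_front) / (d_back - d_front), 0.0, 1.0)
            return (1.0 - w) * luminance, w * luminance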

  11. Research of 3D display using anamorphic optics

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kenji; Honda, Toshio

    1997-05-01

    This paper describes an auto-stereoscopic display that can reconstruct a more realistic and viewer-friendly 3-D image by increasing the number of parallaxes and providing horizontal motion parallax. Because the resolution of the display device is limited, it is difficult to increase the number of parallaxes, and thus provide motion parallax, without reducing the resolution of the 3-D image. By placing anamorphic optics between the display device and the 3-D image, the magnification and the image-formation position can be selected independently in the horizontal and vertical directions. Anamorphic optics is an optical system with different magnifications in the horizontal and vertical directions; it consists of a combination of cylindrical lenses with different focal lengths. With these optics, a realistic 3-D image with motion parallax can be displayed even on a dynamic display such as a liquid crystal display (LCD). Motion parallax is obtained by making the width of a single parallax at the viewing position about the same as the pupil diameter of the viewer. In addition, because the depth of focus of the 3-D image is large in this method, the conflict between accommodation and convergence is small, and a natural 3-D image can be displayed.

  12. Multi-view 3D display using waveguides

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Chang-Kun

    2015-07-01

    We propose a multi-projection based multi-view 3D display system using an optical waveguide. Images from the projection units, incident at angles satisfying the total internal reflection (TIR) condition, enter the waveguide and undergo multiple reflections at its interfaces. As a result of these multiple reflections, the horizontal projection distance is effectively folded into the thickness of the waveguide, making a compact projection display system possible. By aligning the projector array at the entrance of the waveguide, a multi-view 3D display based on multiple projectors is realized with a minimized structure. Viewing zones are generated by combining the waveguide projection system, a vertical diffuser, and a Fresnel lens. In the experimental setup, the feasibility of the proposed method is verified and a ten-view 3D display system with a compact projection volume is implemented.

  13. Rear-cross-lenticular 3D display without eyeglasses

    NASA Astrophysics Data System (ADS)

    Morishima, Hideki; Nose, Hiroyasu; Taniguchi, Naosato; Inoguchi, Kazutaka; Matsumura, Susumu

    1998-04-01

    We have developed a prototype 3D display system that requires no eyeglasses, which we call the `Rear Cross Lenticular 3D Display' (RCL3D); it is very compact and produces high-quality 3D images. The RCL3D consists of an LCD panel, two lenticular lens sheets that run perpendicular to each other, a Checkered Pattern Mask, and a backlight panel. On the LCD panel, a composite image consisting of alternately arranged horizontally striped images for the right and left eyes is displayed. This composite image format is compatible with field-sequential stereoscopic image data. Light from the backlight panel passes through the apertures of the Checkered Pattern Mask, illuminates the horizontal lines of the right-eye and left-eye images on the LCD, and is directed separately to the right-eye and left-eye positions by the two lenticular lens sheets. By this principle, the RCL3D shows a 3D image to an observer without any eyeglasses. We simulated the viewing zone of the RCL3D using random ray tracing and found that the illuminated areas for the right and left eyes are clearly separated as a series of alternating vertical stripes. We present the RCL3D prototype (14.5-inch, XGA) and the simulation results.
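
    A minimal sketch of building the horizontally striped composite image described above from a left-eye and a right-eye image; the assignment of the right-eye image to even rows is an assumption for illustration.

        import numpy as np

        def compose_rcl3d(left_img, right_img):
            # Both images share the same shape (rows, cols[, channels]).
            composite = np.empty_like(left_img)
            composite[0::2] = right_img[0::2]   # even rows carry the right-eye stripes (assumed parity)
            composite[1::2] = left_img[1::2]    # odd rows carry the left-eye stripes
            return composite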

  14. A zero-footprint 3D visualization system utilizing mobile display technology for timely evaluation of stroke patients

    NASA Astrophysics Data System (ADS)

    Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent

    2010-03-01

    When a patient is admitted to the emergency room with suspected stroke, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The need for an expert radiologist to be available in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision, anywhere and anytime. We present a small pilot project evaluating the use of mobile devices such as iPhones in assessing stroke patients. The results of the evaluation, as well as the challenges in setting up the system, are also discussed.

  15. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Combat training is very important to the army, and simulation of the real battlefield environment is of great significance. Two-dimensional information can no longer meet the demand. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment has become possible. In simulating a 3D battlefield environment, in addition to the terrain, combat personnel, and combat equipment, the simulation of explosions, fire, smoke, and other effects is also very important, since these effects enhance the sense of realism and immersion of the 3D scene. However, such effects are irregular objects, which are difficult to model with ordinary geometry; their simulation has long been a challenging research topic in computer graphics. Here, the particle system algorithm is used to simulate these irregular objects. We design particle-system simulations of explosions, fire, and smoke and apply them to the 3D battlefield scene. In addition, the battlefield scene is shown on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview 3D video real-time transformation method. Together with human-computer interaction, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated battlefield environment.
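
    A minimal particle-system update of the kind used for explosion, fire, and smoke effects is sketched below; the emitter at the origin, the force model, and all constants are illustrative, not the authors' parameters.

        import numpy as np

        class ParticleSystem:
            def __init__(self, n, lifetime=2.0):
                self.pos = np.zeros((n, 3))                       # all particles start at the emitter
                self.vel = np.random.normal(0.0, 1.0, (n, 3))     # random burst directions
                self.age = np.random.uniform(0.0, lifetime, n)    # stagger births
                self.lifetime = lifetime

            def step(self, dt, gravity=(0.0, -9.8, 0.0), drag=0.1):
                self.vel += (np.asarray(gravity) - drag * self.vel) * dt
                self.pos += self.vel * dt
                self.age += dt
                dead = self.age >= self.lifetime                  # recycle expired particles
                self.pos[dead] = 0.0
                self.vel[dead] = np.random.normal(0.0, 1.0, (int(dead.sum()), 3))
                self.age[dead] = 0.0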

  16. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  17. Analysis of temporal stability of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    An analysis has been made of the stability of the images generated by electronic autostereoscopic 3D displays, studying the time course of the photometric and colorimetric parameters. The measurements follow the procedure recommended in the European standard EN 61747-6 for the characterization of electronic liquid-crystal displays (LCD). The study uses three models of autostereoscopic 3D display of different sizes and pixel counts, with measurements taken using a spectroradiometer (PhotoResearch PR-670 SpectraScan). For each display, the time course of the tristimulus values and the chromaticity coordinates in the CIE 1931 XYZ system is shown, together with the time required for these parameters to reach stable values. To analyse how the EN 61747-6 procedure, written for 2D displays, influences the results, and to adapt the procedure to the characterization of 3D displays, the experimental conditions of the standard procedure were varied: the stability analysis was performed in the two ocular channels (RE and LE) of the 3D mode and the results were compared with those for 2D. The results of our study show that the stabilization time of an autostereoscopic 3D display with parallax-barrier technology depends on the tristimulus value analysed (X, Y, Z) as well as on the presentation mode (2D, 3D); furthermore, in 3D mode it also depends on the ocular channel evaluated (RE, LE).
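
    One generic way to extract a stabilization time from such a measured time series is sketched below; the tolerance and window are illustrative and are not taken from EN 61747-6 or from the paper.

        import numpy as np

        def stabilization_time(times, values, tol=0.01, window=10):
            # First time after which `values` stays within +/- tol (relative) of its
            # final level for `window` consecutive samples; returns None if never stable.
            values = np.asarray(values, dtype=float)
            final = values[-window:].mean()
            ok = np.abs(values - final) <= tol * abs(final)
            for i in range(len(ok) - window + 1):
                if ok[i:i + window].all():
                    return times[i]
            return None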

  18. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  19. 3D touchable holographic light-field display.

    PubMed

    Yamaguchi, Masahiro; Higashida, Ryo

    2016-01-20

    We propose a new type of 3D user interface: interaction with a light field reproduced by a 3D display. The 3D display used in this work reproduces a 3D light field, and a real image can be reproduced in midair between the display and the user. When using a finger to touch the real image, the light field from the display will scatter. Then, the 3D touch sensing is realized by detecting the scattered light by a color camera. In the experiment, the light-field display is constructed with a holographic screen and a projector; thus, a preliminary implementation of a 3D touch is demonstrated.

  20. 3D electrohydrodynamic simulation of electrowetting displays

    NASA Astrophysics Data System (ADS)

    Hsieh, Wan-Lin; Lin, Chi-Hao; Lo, Kuo-Lung; Lee, Kuo-Chang; Cheng, Wei-Yuan; Chen, Kuo-Ching

    2014-12-01

    The fluid dynamic behavior within a pixel of an electrowetting display (EWD) is thoroughly investigated through a 3D simulation. By coupling the electrohydrodynamic (EHD) force deduced from the Maxwell stress tensor with the laminar phase field of the oil-water dual phase, the complete switch processes of an EWD, including the break-up and the electrowetting stages in the switch-on process (with voltage) and the oil spreading in the switch-off process (without voltage), are successfully simulated. By considering the factor of the change in the apparent contact angle at the contact line, the electro-optic performance obtained from the simulation is found to agree well with its corresponding experiment. The proposed model is used to parametrically predict the effect of interfacial (e.g. contact angle of grid) and geometric (e.g. oil thickness and pixel size) properties on the defects of an EWD, such as oil dewetting patterns, oil overflow, and oil non-recovery. With the help of the defect analysis, a highly stable EWD is both experimentally realized and numerically analyzed.

  1. ePlant and the 3D Data Display Initiative: Integrative Systems Biology on the World Wide Web

    PubMed Central

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J.

    2011-01-01

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed “ePlant” (http://bar.utoronto.ca/eplant) – a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the “3D Data Display Initiative” (http://3ddi.org). PMID:21249219

  2. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    PubMed

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  3. User benefits of visualization with 3-D stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Wichansky, Anna M.

    1991-08-01

    The power of today's supercomputers promises tremendous benefits to users in terms of productivity, creativity, and excitement in computing. A study of a stereoscopic display system for computer workstations was conducted with 20 users and third-party software developers, to determine whether 3-D stereo displays were perceived as better than flat, 2-1/2D displays. Users perceived more benefits of 3-D stereo in applications such as molecular modeling and cell biology, which involved viewing of complex, abstract, amorphous objects. Users typically mentioned clearer visualization and better understanding of data, easier recognition of form and pattern, and more fun and excitement at work as the chief benefits of stereo displays. Human factors issues affecting the usefulness of stereo included use of 3-D glasses over regular eyeglasses, difficulties in group viewing, lack of portability, and need for better input devices. The future marketability of 3-D stereo displays would be improved by eliminating the need for users to wear equipment, reducing cost, and identifying markets where the abstract display value can be maximized.

  4. Computational challenges of emerging novel true 3D holographic displays

    NASA Astrophysics Data System (ADS)

    Cameron, Colin D.; Pain, Douglas A.; Stanley, Maurice; Slinger, Christopher W.

    2000-11-01

    A hologram can produce all the 3D depth cues that the human visual system uses to interpret and perceive real 3D objects. As such it is arguably the ultimate display technology. Computer generated holography, in which a computer calculates a hologram that is then displayed using a highly complex modulator, combines the ultimate qualities of a traditional hologram with the dynamic capabilities of a computer display producing a true 3D real image floating in space. This technology is set to emerge over the next decade, potentially revolutionizing application areas such as virtual prototyping (CAD-CAM, CAID etc.), tactical information displays, data visualization and simulation. In this paper we focus on the computational challenges of this technology. We consider different classes of computational algorithms from true computer-generated holograms (CGH) to holographic stereograms. Each has different characteristics in terms of image qualities, computational resources required, total CGH information content, and system performance. Possible trade-offs will be discussed including reducing the parallax. The software and hardware architectures used to implement the CGH algorithms have many possible forms. Different schemes, from high performance computing architectures to graphics based cluster architectures will be discussed and compared. Assessment will be made of current and future trends looking forward to a practical dynamic CGH based 3D display.
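
    To make the scale of the computation concrete, a brute-force point-source (true CGH) calculation is sketched below: every object point contributes a spherical wave to every hologram sample, so the cost grows with the number of points times the number of samples. This is a textbook illustration, not the authors' algorithm; the names and the scalar-wave simplification are assumptions.

        import numpy as np

        def cgh_point_source(points, amplitudes, grid_x, grid_y, wavelength):
            k = 2.0 * np.pi / wavelength
            X, Y = np.meshgrid(grid_x, grid_y)            # hologram plane at z = 0
            field = np.zeros(X.shape, dtype=complex)
            for (px, py, pz), a in zip(points, amplitudes):
                r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
                field += a * np.exp(1j * k * r) / r       # spherical wave from one object point
            return field                                  # the modulator would display, e.g., its phase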

  5. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Virtual-reality systems have been widely studied, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems, and so on. 3D imaging displays take two forms: systems that use special glasses and monitor systems that require none. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field available for displaying images. A conventional display can therefore show only a single screen and cannot enlarge the display area, for example to twice its size. To enlarge the display area, the authors have developed a method that uses a mirror: the mirror generates a virtual image plane and doubles the screen area. In the developed display unit, we use an image-separating technique based on polarized glasses, a parallax barrier, or a lenticular lens screen for 3D imaging. A 3D display system using special glasses can likewise display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth.

  6. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  7. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation; it is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  8. 3-D Imagery Cockpit Display Development

    DTIC Science & Technology

    1990-08-01

    [Abstract garbled in the source record; the recoverable fragments are pilot survey comments on cockpit display formats, e.g., standardize colors, display EGT and oil indicators at all times, and change pictorial gauges to word warnings, with most pilots rating the format good and recommending no changes.]

  9. Interactive 3D display simulator for autostereoscopic smart pad

    NASA Astrophysics Data System (ADS)

    Choe, Yeong-Seon; Lee, Ho-Dong; Park, Min-Chul; Son, Jung-Young; Park, Gwi-Tae

    2012-06-01

    There is growing interest in displaying 3D images on smart pads for entertainment and information services. Designing and realizing various types of 3D displays on a smart pad is not easy within cost and time constraints, and software simulation is an alternative that can reduce development effort. In this paper, we propose a 3D display simulator for an autostereoscopic smart pad. It simulates the light intensity of each view and the crosstalk for smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic displays by changing the parameters required for panel design. Crosstalk, the leakage of one eye's image into the other eye's image, and the light intensity used to compute the visual comfort zone are important factors in designing an autostereoscopic display for a smart pad. Interaction enables intuitive design. This paper describes an interactive 3D display simulator for an autostereoscopic smart pad.
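
    For reference, the crosstalk figure such a simulator can report for a given eye position is commonly defined as the luminance leaking from the unintended view relative to the intended view, with the black level subtracted; a minimal sketch (not the simulator's actual optical model) follows.

        def crosstalk_percent(intended_lum, leaked_lum, black_lum=0.0):
            # Luminance leaking from the other view, relative to the intended view (black level removed).
            return 100.0 * (leaked_lum - black_lum) / (intended_lum - black_lum)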

  10. Analysis of the real-time 3D display system based on the reconstruction of parallax rays

    NASA Astrophysics Data System (ADS)

    Yamada, Kenji; Takahashi, Hideya; Shimizu, Eiji

    2002-11-01

    Several types of auto-stereoscopic display systems have been developed. We have developed a real-time color auto-stereoscopic display system based on a method of reconstructing parallax rays. Our system consists of an optical element (such as a lens array, a pinhole array, or HOEs), a spatial light modulator (SLM), and an image-processing unit. In our system pseudoscopic images cannot appear; the algorithm that prevents them runs in the image-processing unit. The resolution limits of integral photography were studied by Hoshino, Burckhardt, and Okoshi, who derived the optimum width of the lens or aperture. However, those theories cannot be applied directly to our system, so we consider not only the spatial frequency measured at the viewpoint but also the performance of our system. In this paper, we analyse the resolution of our system. First, we consider the spatial frequency along the depth and horizontal directions according to geometrical and wave optics. Next, we study the performance of the system; in particular, we estimate the crosstalk caused by the point sources from pixels on the SLM, again from the viewpoints of geometrical and wave optics.

  11. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    PubMed

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
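
    A CPU-side sketch of the final pixel-arrangement step is given below: 60 rendered view images, one per display direction, are interleaved so that each lenslet receives one pixel per view. The array shapes and the view-to-position mapping inside the lenslet block are assumptions; the paper's image-based rendering and GPU implementation are not reproduced here.

        import numpy as np

        def views_to_integral(views, lens_w, lens_h):
            # views: (n_views, lens_rows, lens_cols, channels) -- one rendered sample per lenslet per view
            n_views, rows, cols, ch = views.shape
            assert n_views == lens_w * lens_h
            ip = np.zeros((rows * lens_h, cols * lens_w, ch), dtype=views.dtype)
            for v in range(n_views):
                dy, dx = divmod(v, lens_w)            # this view's pixel position inside each lenslet block
                ip[dy::lens_h, dx::lens_w] = views[v]
            return ip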

  12. Irregular Grid Generation and Rapid 3D Color Display Algorithm

    SciTech Connect

    Wilson D. Chin, Ph.D.

    2000-05-10

    Computationally efficient and fast methods for irregular grid generation are developed to accurately characterize wellbore and fracture boundaries, and far-field reservoir boundaries, in oil and gas petroleum fields. Advanced reservoir simulation techniques are developed for oilfields described by such "boundary conforming" mesh systems. Very rapid, three-dimensional color display algorithms are also developed that allow users to "interrogate" 3D earth cubes using "slice, rotate, and zoom" functions. Based on expert system ideas, the new methods operate much faster than existing display methodologies and do not require sophisticated computer hardware or software. They are designed to operate with PC-based applications.

  13. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system and shutter-glass systems for smaller groups. PB uses commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic views of the proposed designs. This paper presents the basic display tools and applications, and the 3D modeling techniques PB uses to produce interactive stereoscopic content, and discusses several architectural and engineering design visualizations we have produced.

  14. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  15. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one class of advanced displays, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TVs based on LCDs with LED backlight units (BLU) since Samsung launched the world's first 3D TV based on PDP. However, the panel's data scanning and the liquid crystal's response characteristics cause interference between frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control in an FHD 3D LCD TV.

  16. Will true 3d display devices aid geologic interpretation. [Mirage

    SciTech Connect

    Nelson, H.R. Jr.

    1982-04-01

    A description is given of true 3D display devices and techniques that are being evaluated in various research laboratories around the world. These advances are closely tied to the expected application of 3D display devices as interpretational tools for explorationists. 34 refs.

  17. 30-view projection 3D display

    NASA Astrophysics Data System (ADS)

    Huang, Junejei; Wang, Yuchang

    2015-03-01

    A 30-view auto-stereoscopic display using an angle-magnifying screen is proposed. The small incident angles of lamp scanning from the exit pupil of the projection lens are magnified into a large field of view on the observing side. The lamp scanning is realized by the vibration of a galvano-mirror that synchronizes with the frame rate of the DMD and reflects the laser illuminator to the scanning angles. To achieve 15 views, a 3-chip DLP projector with a frame rate of 720 Hz is used. In one vibration cycle of the galvano-mirror, steps 0, 2, 4, 6, 8, 10, 12, and 14 are reflected on the outgoing path and steps 13, 11, 9, 7, 5, 3, and 1 on the returning path. A frame is divided into two halves, of odd lines and even lines, for two views, so 48 half frames per second are provided for each view. A projection lens with an aperture-relay module is used to double the lens aperture and separate the frame into its even-line and odd-line halves. After passing through the Philips prism and the three panels, the 15 scanning spots are doubled to 30 spots that emerge from the exit pupil of the projection lens. These 30 light spots are projected onto 30 viewing zones by the angle-magnifying screen. A rear-projection cabinet with two folding mirrors is used because a projection lens with a long throw distance is required.

  18. Auto-stereoscopic 3D displays with reduced crosstalk.

    PubMed

    Lee, Chulhee; Seo, Guiwon; Lee, Jonghwa; Han, Tae-hwan; Park, Jong Geun

    2011-11-21

    In this paper, we propose new auto-stereoscopic 3D displays that substantially reduce crosstalk. In general, it is difficult to eliminate crosstalk in auto-stereoscopic 3D displays. Ideally, the parallax barrier can eliminate crosstalk for a single viewer at the ideal position. However, due to variations in the viewing distance and the interpupillary distance, crosstalk is a problem in parallax barrier displays. In this paper, we propose 3-dimensional barriers, which can significantly reduce crosstalk.

  19. Development of a 3D pixel module for an ultralarge screen 3D display

    NASA Astrophysics Data System (ADS)

    Hashiba, Toshihiko; Takaki, Yasuhiro

    2004-10-01

    A large screen 2D display used at stadiums and theaters consists of a number of pixel modules, each usually made up of 8x8 or 16x16 LED pixels. In this study we develop a 3D pixel module for constructing a large screen 3D display that is glasses-free and provides motion parallax. This configuration dramatically reduces the complexity of wiring the 3D pixels. The 3D pixel module consists of several LCD panels, several cylindrical lenses, and one small PC. The LCD panels are slanted to differentiate the distances from same-color pixels to the axis of the cylindrical lens, so that the rays from same-color pixels are refracted into different horizontal directions by the lens. We constructed a prototype 3D pixel module consisting of 8x4 3D pixels. The prototype is designed to display 300 different patterns into different horizontal directions with a horizontal display angle pitch of 0.099 degree. The LCD panels are controlled by a small PC, and the 3D image data are transmitted over Gigabit Ethernet.

  20. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. Moreover, with the momentum of the 3D era, stereoscopic 3D techniques have been combined with curved displays. However, despite the increased need for 3D function in curved displays, comparisons between curved and flat panel displays showing 3D views have rarely been made. Most previous studies have investigated only basic ergonomic aspects such as viewing posture and distance with 2D views. It is generally understood that curved displays are more effective in enhancing involvement in content because the field of view and the distances from the viewer's eyes to the edges of the screen are more natural than on flat panels. With flat panel displays, ocular torsion may occur when viewers move their eyes from the center to the edges of the screen to continuously track rapidly moving 3D objects, due in part to the difference between the viewing distance from the center of the screen to the eyes and that from the edges of the screen to the eyes. This study therefore compared the S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating relevant subjective and objective measures.

  1. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all the major physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.
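
    The slicing step implied above, cutting a volumetric data set such as a CT volume into a stack of profiling images, one per projection position, can be sketched as follows; the even partition along the z axis and the averaging are illustrative choices, assuming the number of slices does not exceed the volume depth.

        import numpy as np

        def profiling_images(volume, n_slices):
            # volume: 3-D array indexed (z, y, x); assumes n_slices <= volume.shape[0]
            z = volume.shape[0]
            edges = np.linspace(0, z, n_slices + 1).astype(int)
            # one averaged cross-section per projection position of the display
            return [volume[edges[i]:edges[i + 1]].mean(axis=0) for i in range(n_slices)]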

  2. Format for Interchange and Display of 3D Terrain Data

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Powell, Mark; Vona, Marsette; Norris, Jeffrey; Morrison, Jack

    2004-01-01

    Visible Scalable Terrain (ViSTa) is a software format for production, interchange, and display of three-dimensional (3D) terrain data acquired by stereoscopic cameras of robotic vision systems. ViSTa is designed to support scalability of data, accuracy of displayed terrain images, and optimal utilization of computational resources. In a ViSTa file, an area of terrain is represented, at one or more levels of detail, by coordinates of isolated points and/or vertices of triangles derived from a texture map that, in turn, is derived from original terrain images. Unlike prior terrain-image software formats, ViSTa includes provisions to ensure accuracy of texture coordinates. Whereas many such formats are based on 2.5-dimensional terrain models and impose additional regularity constraints on data, ViSTa is based on a 3D model without regularity constraints. Whereas many prior formats require external data for specifying image-data coordinate systems, ViSTa provides for the inclusion of coordinate-system data within data files. ViSTa admits high-speed loading and display within a Java program. ViSTa is designed to minimize file sizes, maximize compressibility, and support straightforward reduction of resolution to reduce file size for Internet-based distribution.

  3. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where it is necessary to ensure good performance.

  4. 3D display based on parallax barrier with multiview zones.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wang, Jun

    2014-03-01

    A 3D display based on a parallax barrier with multiview zones is proposed. The display consists of a 2D display panel and a parallax barrier. The basic element of the parallax barrier has three narrow slits, which reveal three columns of subpixels on the 2D display panel and form 3D pixels. The parallax barrier provides multiview zones, in which the proposed 3D display can use a small number of views to achieve a high density of views; the distance between views is therefore the same as in conventional displays with more views. Because the proposed display has fewer views, more 3D pixels contribute to the 3D images, so the resolution and brightness are higher than in conventional displays. A 12-view prototype of the proposed 3D display is developed, and it provides the same density of views as a conventional display with 28 views. Experimental results show that the proposed display has higher resolution and brightness than the conventional one, while crosstalk is kept at a low level.
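
    For context, the textbook slit-barrier geometry that any such design starts from is sketched below: the barrier-to-pixel gap and the slit pitch follow from similar triangles for a chosen viewing distance and eye separation. This is the conventional relation, not the proposed multiview-zone barrier itself; names are illustrative and consistent length units are assumed.

        def barrier_design(pixel_pitch, n_views, view_distance, eye_separation):
            # Gap between barrier and pixel plane: adjacent view pixels (pitch p) seen
            # through one slit separate into adjacent viewing zones (pitch e) at distance D.
            gap = pixel_pitch * view_distance / eye_separation
            # Slit period: slightly less than n_views * p so all zones converge at distance D.
            pitch = n_views * pixel_pitch * view_distance / (view_distance + gap)
            return gap, pitch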

  5. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  6. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.

  7. Real-time hardware for a new 3D display

    NASA Astrophysics Data System (ADS)

    Kaufmann, B.; Akil, M.

    2006-02-01

    We describe in this article a new multi-view auto-stereoscopic display system with a real-time architecture for generating images of n different points of view of a 3D scene. This architecture generates all the points of view in a single generation process: the different pictures are not generated independently but all at the same time. The architecture builds a frame buffer that contains all the voxels with their three dimensions and regenerates the different pictures on demand from this buffer. The memory requirement is reduced because there is no redundant information in the buffer.

  8. LED projection architectures for stereoscopic and multiview 3D displays

    NASA Astrophysics Data System (ADS)

    Meuret, Youri; Bogaert, Lawrence; Roelandt, Stijn; Vanderheijden, Jana; Avci, Aykut; De Smet, Herbert; Thienpont, Hugo

    2010-04-01

    LED-based projection systems have several interesting features: an extended color gamut, long lifetime, robustness, and a fast turn-on time. However, the possibility of developing compact projectors remains the most important driving force behind LED projection research. This is related to the limited light output of LED projectors, which is a consequence of the relatively low luminance of LEDs compared with high-intensity discharge lamps. We have investigated several LED projection architectures for the development of new 3D visualization displays. Polarization-based stereoscopic projection displays are often implemented using two identical projectors with passive polarizers at the output of their projection lenses. We have designed and built a prototype of a stereoscopic projection system that incorporates the functionality of both projectors. The system uses high-resolution liquid-crystal-on-silicon light valves and an illumination system with LEDs. The possibility of adding an extra LED illumination channel was also investigated for this optical configuration. Multiview projection displays allow the visualization of 3D images for multiple viewers without the need to wear special eyeglasses; systems with large numbers of viewing zones, often using multiple projection engines, have already been demonstrated. We have investigated a projection architecture that uses only one digital micromirror device and a LED-based illumination system to create multiple viewing zones. The system is based on time-sequential modulation of the different images for each viewing zone and a special projection screen with micro-optical features. We analyze the limitations of LED-based illumination for the investigated stereoscopic and multiview projection systems and discuss the potential of laser-based illumination.

  9. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  10. Autostereoscopic 3D flat panel display using an LCD-pixel-associated parallax barrier

    NASA Astrophysics Data System (ADS)

    Chen, En-guo; Guo, Tai-liang

    2014-05-01

    This letter reports an autostereoscopic three-dimensional (3D) flat panel display system employing a newly designed LCD-pixel-associated parallax barrier (LPB). The barrier's parameters can be conveniently determined by the LCD pixels and can help to greatly simplify the conventional design. The optical system of the proposed 3D display is built and simulated to verify the design. For further experimental demonstration, a 508-mm autostereoscopic 3D display prototype is developed and it presents good stereoscopic images. Experimental results agree well with the simulation, which reveals a strong potential for 3D display applications.

  11. Visual discomfort caused by color asymmetry in 3D displays

    NASA Astrophysics Data System (ADS)

    Chen, Zaiqing; Huang, Xiaoqiao; Tai, Yonghan; Shi, Junsheng; Yun, Lijun

    2016-10-01

    Color asymmetry is a common phenomenon in 3D displays and can cause serious visual discomfort. To ensure safe and comfortable stereo viewing, the color difference between the left and right eyes should not exceed a threshold value, named the comfortable color difference limit (CCDL). In this paper, we have experimentally measured the CCDL for five sample color points selected from the CIE 1976 u'v' chromaticity diagram. A psychophysical experiment was conducted in which human observers viewed brief presentations of color-asymmetric image pairs. In these image pairs, the left and right circular patches were horizontally offset by five levels of disparity (0, ±60, ±120 arc minutes) and the color difference was varied along six color directions. The experimental results showed that the CCDL for each sample point varied with the level of disparity and the color direction. The minimum CCDL is 0.019 Δu'v' and the maximum is 0.133 Δu'v'. The database collected in this study may help 3D system design and 3D content creation.
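
    The color-difference metric quoted above is the CIE 1976 u'v' chromaticity distance; a minimal sketch of computing it from measured tristimulus values for the two eyes follows (function names are illustrative).

        import math

        def uv_prime(X, Y, Z):
            d = X + 15.0 * Y + 3.0 * Z
            return 4.0 * X / d, 9.0 * Y / d          # CIE 1976 u', v' chromaticity

        def delta_uv_prime(xyz_left, xyz_right):
            uL, vL = uv_prime(*xyz_left)
            uR, vR = uv_prime(*xyz_right)
            return math.hypot(uL - uR, vL - vR)      # compare with the reported CCDL range 0.019-0.133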

  12. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perception that otherwise might be lacking. In addition, the third dimension can be used as an additional axis along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste; in the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, laboratory research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  13. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  14. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, and a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge parts of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. Conventional stereo-matching techniques can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. With this system, many users can view natural 3D objects at consistent positions and postures at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can produce 3-D images without contradiction between binocular convergence and focal accommodation.

  15. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high-quality homographies between pairs of imagers to develop a globally optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically-diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
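
    The pairwise homographies that the calibration above builds on can be estimated with standard feature matching. The sketch below is a minimal illustration of that building block only, assuming OpenCV, ORB features, and RANSAC; the global optimization that combines many such pairs into epipoles and fundamental matrices for the whole array is not reproduced here.

    ```python
    import cv2
    import numpy as np

    def pairwise_homography(img_a, img_b, min_matches=20):
        """Estimate the homography between two overlapping camera views using
        ORB features and RANSAC (a generic building block, not the paper's
        global calibration method)."""
        orb = cv2.ORB_create(2000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        if len(matches) < min_matches:
            raise RuntimeError("not enough matches for a reliable homography")
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
        return H
    ```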

  16. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Final report for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract number SPO100-95-D-1014; contractor: Ohio University; delivery order #0001; delivery order title: 3D Scan Systems Integration; report date: 5 Feb 98.

  17. 3D head mount display with single panel

    NASA Astrophysics Data System (ADS)

    Wang, Yuchang; Huang, Junejei

    2014-09-01

    A head-mounted display for entertainment usually requires light weight, but professional applications impose additional requirements: image quality, field of view (FOV), color gamut, response time, and lifetime must also be considered. A head-mounted display based on a single-chip TI DMD spatial light modulator is proposed. The multiple light sources and the image-splitting relay system are the major design tasks. The relay system images the object (the DMD) onto two image planes to create binocular vision; a 0.65-inch 1080p DMD is adopted. The relay performs well and includes a doublet to reduce chromatic aberration. Space is reserved for placing the mirror and the adjustable mechanism. The mirror splits the rays to the left and right image planes; these planes serve as the objects of the eyepieces and are imaged to the eyes. An adjustable mechanism provides variable interpupillary distance (IPD). The folded optical path keeps the HMD's center of gravity close to the head and prevents an uncomfortable downward force from being applied to the head or orbit. Two RGB LED assemblies illuminate the DMD at different angles. The light is highly collimated, and the divergence angle is small enough that light from each LED enters only the correct eyepiece. This switching is electronically controlled; there are no moving parts to produce vibration, and fast switching is possible. The two LEDs are synchronized with the 3D video sync by a driving board that also controls the DMD. When the left-eye image is displayed on the DMD, the LED for the left optical path turns on, and vice versa for the right image, so the 3D scene is accomplished.

  18. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  19. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format that is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities through the inclusion of image constraints. The resulting high-quality, image-aligned depth maps proved an excellent match with our 3D displays.
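
    As a rough illustration of the multiple-footprint idea described above, the sketch below aggregates absolute differences over several window sizes and keeps, per pixel, the lowest-cost disparity candidate. The minimum-cost selection and the rectified-grayscale assumption are simplifications; the paper's robust surface-filtering selection is not reproduced.

    ```python
    import numpy as np

    def _box_filter(img, win):
        """Box filter via a summed-area table; output has the same shape as img."""
        pad = win // 2
        padded = np.pad(img, pad, mode='edge')
        s = padded.cumsum(0).cumsum(1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return (s[win:, win:] - s[:-win, win:] - s[win:, :-win] + s[:-win, :-win]) / (win * win)

    def multi_footprint_disparity(left, right, max_disp, windows=(3, 7, 15)):
        """Sketch of multiple-footprint stereo matching: for several window sizes
        ("footprints"), aggregate absolute differences over the footprint for each
        candidate disparity and keep the lowest-cost candidate per pixel."""
        left = np.asarray(left, float)
        right = np.asarray(right, float)
        best_cost = np.full(left.shape, np.inf)
        best_disp = np.zeros(left.shape, dtype=np.int32)
        for win in windows:
            for d in range(max_disp + 1):
                shifted = np.roll(right, d, axis=1)            # crude horizontal shift
                cost = _box_filter(np.abs(left - shifted), win)
                better = cost < best_cost
                best_cost[better] = cost[better]
                best_disp[better] = d
        return best_disp
    ```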

  20. Optical characterization of different types of 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    All 3D displays rely on the same intrinsic method to induce depth perception: they provide different images to the left and right eyes of the observer to obtain the stereoscopic effect. The three most common solutions already available on the market are active-glasses, passive-glasses, and auto-stereoscopic 3D displays. The three types of displays are based on different physical principles (polarization, time selection, or spatial emission) and consequently require different measurement instruments and techniques. In this paper, we present some of these solutions and the technical characteristics that can be obtained to compare the displays. We show in particular that local and global measurements can be made in all three cases to access different characteristics. We also discuss the new technologies currently under development and their needs in terms of optical characterization.

  1. Application of a 3D volumetric display for radiation therapy treatment planning I: quality assurance procedures.

    PubMed

    Gong, Xing; Kirk, Michael Collins; Napoli, Josh; Stutsman, Sandy; Zusag, Tom; Khelashvili, Gocha; Chu, James

    2009-07-17

    To design and implement a set of quality assurance tests for an innovative 3D volumetric display for radiation treatment planning applications. A genuine 3D display (Perspecta Spatial 3D, Actuality-Systems Inc., Bedford, MA) has been integrated with the Pinnacle TPS (Philips Medical Systems, Madison, WI) for treatment planning. The Perspecta 3D display renders a 25 cm diameter volume, viewable from any side, floating within a translucent dome. In addition to displaying all 3D data exported from Pinnacle, the system provides a 3D mouse to define beam angles and apertures and to measure distance. The focus of this work is the design and implementation of a quality assurance program for 3D displays and specific 3D planning issues, as guided by AAPM Task Group Report 53. A series of acceptance and quality assurance tests has been designed to evaluate the accuracy of CT images, contours, beams, and dose distributions as displayed on Perspecta. Three-dimensional matrices, rulers, and phantoms with known spatial dimensions were used to check Perspecta's absolute spatial accuracy. In addition, a system of tests was designed to confirm Perspecta's ability to import and display Pinnacle data consistently. CT scans of phantoms were used to confirm beam field size, divergence, and gantry and couch angular accuracy as displayed on Perspecta. Beam angles were verified through Cartesian coordinate measurements and by CT scans of phantoms rotated at known angles. Beams designed on Perspecta were exported to Pinnacle and checked for accuracy. Doses at sampled points were checked for consistency with Pinnacle and agreed within 1% or 1 mm. All data exported from Pinnacle to Perspecta were displayed consistently. The 3D spatial display of images, contours, and dose distributions was consistent with the Pinnacle display. When measured with the 3D ruler, the distances between any two points calculated using Perspecta agreed with Pinnacle within the measurement error.

  2. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses, which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. In the case where two pinholes equipped with color filters are used per eye, the technique can be applied to a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye because of the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that cannot be displayed and comfortably viewed on regular 3DTV and cinema screens.

  3. 3D Display Calibration by Visual Pattern Analysis.

    PubMed

    Hwang, Hyoseok; Chang, Hyun Sung; Nam, Dongkyung; Kweon, In So

    2017-02-06

    Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not perform as intended and the observed images tend to be distorted. In this paper, we propose a novel display calibration method to fix the situation. In our method, a pattern image is displayed on the panel and a camera takes pictures of it twice, at different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slant angle, gap or thickness, and offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is quite accurate, about half an order of magnitude more accurate than prior work; is efficient, requiring less than 2 s of computation; and is robust to noise, working well at SNRs as low as 6 dB.
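
    The frequency-domain analysis mentioned above can be illustrated with a simple Fourier peak search: the dominant peak of the captured pattern's 2D spectrum gives an apparent pitch and slant angle. This is only a schematic sketch; recovering gap/thickness and offset from two camera positions, as the paper does, requires the full quantitative model.

    ```python
    import numpy as np

    def estimate_pitch_and_slant(captured, pixel_size, dc_guard=2):
        """Estimate the apparent pitch and slant of a periodic lens/barrier
        pattern from a captured photo by locating the strongest non-DC peak
        of its 2D Fourier spectrum. Illustrative only."""
        img = np.asarray(captured, float)
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
        h, w = img.shape
        cy, cx = h // 2, w // 2
        # suppress the region around DC so low-frequency content does not win
        spec[cy - dc_guard:cy + dc_guard + 1, cx - dc_guard:cx + dc_guard + 1] = 0.0
        fy = np.fft.fftshift(np.fft.fftfreq(h, d=pixel_size))
        fx = np.fft.fftshift(np.fft.fftfreq(w, d=pixel_size))
        iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
        pitch = 1.0 / np.hypot(fx[ix], fy[iy])      # dominant spatial period
        slant = np.degrees(np.arctan2(fy[iy], fx[ix]))
        return pitch, slant
    ```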

  4. True 3D displays for avionics and mission crewstations

    NASA Astrophysics Data System (ADS)

    Sholler, Elizabeth A.; Meyer, Frederick M.; Lucente, Mark E.; Hopper, Darrel G.

    1997-07-01

    3D threat projection has been shown to decrease the human recognition time for events, especially for a jet fighter pilot or C4I sensor operator for whom early realization that a hostile threat condition exists is the basis of survival. Decreased threat recognition time improves the survival rate and results from more effective presentation techniques, including the visual cue of a true 3D (T3D) display. The concept of a 'font' describes the approach adopted here, but whereas a 2D font comprises pixel bitmaps, a T3D font herein comprises a set of hologram bitmaps. The T3D font bitmaps are pre-computed, stored, and retrieved as needed to build images comprising symbols and/or characters. Human performance improvement, hologram generation for a T3D symbol font, projection requirements, and potential hardware implementation schemes are described. The goal is to employ computer-generated holography to create T3D depictions of dynamic threat environments using fieldable hardware.

  5. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  6. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without slicing the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve this problem: they provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists have tried to develop similar 3D displays; our paper includes an overview from 1912 up to today. During several years of investigation of swept-volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a regular high school in Germany, we achieved considerable results despite the limited funding available to this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare-earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). However, these crystals are limited to a very small size, which is why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to this group, making it possible to increase both the display volume and the brightness of the images significantly. Although our display is currently

  7. A novel time-multiplexed autostereoscopic multiview full resolution 3D display

    NASA Astrophysics Data System (ADS)

    Liou, Jian-Chiun; Chen, Fu-Hao

    2012-03-01

    Many people believe that autostereoscopic 3D displays will become a mainstream display type in the future. Achieving higher-quality 3D images requires both higher panel resolution and more viewing zones; consequently, 3D display systems involve enormous amounts of data transfer. We propose and experimentally demonstrate a novel time-multiplexed autostereoscopic multi-view full-resolution 3D display based on a lenticular lens array in association with the control of an active dynamic LED backlight. The lenticular lenses of the lens array receive the light and deflect it into each viewing zone in a time sequence. The crosstalk under different observation scanning angles is shown, including the case of 4-view field scanning. The crosstalk of each viewing zone is about 5%, which is better than that of other 3D display types.

  8. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    PubMed

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

    The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into its real and imaginary distributions, and double amplitude-only spatial light modulators (A-SLMs) are then employed to reconstruct the 3D images in real time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. The optical experiments further demonstrate that the proposed system has continuous and wide depth cues, which frees the observer from eye fatigue. The dynamic display ability is also tested in the experiments, and the results show the possibility of a true 3D interactive display.
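
    The real/imaginary decomposition mentioned above can be sketched as follows. The bias offset that keeps both amplitude patterns non-negative, and the assumed optical recombination with a pi/2 phase shift between the two A-SLM arms, are illustrative assumptions rather than the paper's exact encoding.

    ```python
    import numpy as np

    def split_complex_field(field):
        """Decompose a complex object wavefront into two amplitude patterns
        (its real and imaginary parts), offset so that both are non-negative
        and can be shown on amplitude-only SLMs. Schematic illustration only;
        bias handling and optical recombination are assumptions here."""
        re, im = np.real(field), np.imag(field)
        bias = max(-re.min(), -im.min(), 0.0)   # shift so both patterns are >= 0
        return re + bias, im + bias, bias

    # assumed reconstruction model: field ~ (A_re - bias) + 1j * (A_im - bias),
    # where the 1j factor comes from a pi/2 phase shift between the two arms
    ```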

  9. New approach on calculating multiview 3D crosstalk for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Jung, Sung-Min; Lee, Kyeong-Jin; Kang, Ji-Na; Lee, Seung-Chul; Lim, Kyoung-Moon

    2012-03-01

    In this study, we suggest a new concept of 3D crosstalk for auto-stereoscopic displays and obtain 3D crosstalk values of several multi-view systems based on the suggested definition. First, we measure the angular dependence of the luminance of auto-stereoscopic displays under various test patterns corresponding to each view of a multi-view system, and then calculate the 3D crosstalk based on our new definition with respect to the measured luminance profiles. Our new approach gives a single 3D crosstalk value for a single device without ambiguity and yields values of a similar order to those of conventional stereoscopic displays. These results are compared with the conventional 3D crosstalk values of selected auto-stereoscopic displays, such as 4-view and 9-view systems. From these results, we believe that this new approach is very useful for controlling 3D crosstalk values during 3D display manufacturing and for benchmarking 3D performance among various auto-stereoscopic displays.
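
    For comparison, a conventional per-view crosstalk computation from measured angular luminance profiles can be written as below, with the per-view values averaged into one number. This is only a baseline sketch assuming one luminance profile per view test pattern; it is not the paper's new single-value definition.

    ```python
    import numpy as np

    def multiview_crosstalk(luminance, view_angles, angles):
        """Conventional per-view crosstalk from angular luminance profiles,
        averaged into a single number.

        `luminance` is an (N_views, N_angles) array: row i is the luminance
        measured across `angles` when only view i shows a white test pattern.
        Crosstalk for view i is unwanted light / wanted light at that view's
        optimal angle."""
        luminance = np.asarray(luminance, float)
        angles = np.asarray(angles, float)
        xtalk = []
        for i, a in enumerate(view_angles):
            k = np.argmin(np.abs(angles - a))          # bin of view i's optimum
            wanted = luminance[i, k]
            unwanted = luminance[:, k].sum() - wanted  # light leaking from other views
            xtalk.append(unwanted / wanted)
        return float(np.mean(xtalk))
    ```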

  10. Monocular accommodation condition in 3D display types through geometrical optics

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in a 3D display environment is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are examined with detailed geometrical optics calculations of how well they satisfy monocular accommodation. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high level of accommodation satisfaction under the MF display condition. Additionally, the possibility of monocular depth perception (a 3D effect) with a monocular MF display is discussed.

  11. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  12. Long-range 3D display using a collimated multi-layer display.

    PubMed

    Park, Soon-Gi; Yamaguchi, Yuta; Nakamura, Junya; Lee, Byoungho; Takaki, Yasuhiro

    2016-10-03

    We propose a long-range three-dimensional (3D) display using collimated optics with a multi-plane configuration. By using a spherical screen and a collimating lens, users observe the collimated image on the spherical screen, which simulates an image plane located at optical infinity. By combining and modulating the overlapped multi-plane images, the observed image is located at the desired depth position within the volume spanned by the multiple planes. The feasibility of the system is demonstrated with an experimental setup composed of a planar and a spherical screen with a collimating lens. In addition, the accommodation properties of the proposed system are demonstrated according to the depth modulation method.
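
    The depth modulation between overlapped planes can be illustrated with the textbook two-plane depth-fused rule, in which each pixel's luminance is divided between the front and back planes in proportion to its target depth. The linear division below is an assumption for illustration; the paper's collimated multi-layer system may use a different modulation.

    ```python
    import numpy as np

    def depth_fused_planes(image, depth, z_front, z_back):
        """Split a 2D image onto two overlapped image planes using the standard
        depth-fused-display luminance ratio, so the fused image appears at the
        per-pixel target depth (textbook rule, stated here as an assumption)."""
        w = np.clip((depth - z_front) / (z_back - z_front), 0.0, 1.0)  # back-plane weight
        front = image * (1.0 - w)
        back = image * w
        return front, back
    ```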

  13. Color Flat Panel Displays: 3D Autostereoscopic Brassboard and Field Sequential Illumination Technology.

    DTIC Science & Technology

    1997-06-01

    DTI has advanced autostereoscopic and field sequential color (FSC) illumination technologies for flat panel displays. Using a patented backlight...technology, DTI has developed a prototype 3D flat panel color display that provides stereoscopic viewing without the need for special glasses or other... autostereoscopic viewing. Discussions of system architecture, critical component specifications, and resultant display characteristics are provided. Also

  14. Controllable liquid crystal gratings for an adaptive 2D/3D auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Zhang, Y. A.; Jin, T.; He, L. C.; Chu, Z. H.; Guo, T. L.; Zhou, X. T.; Lin, Z. X.

    2017-02-01

    2D/3D-switchable, viewpoint-controllable, and 2D/3D-localizable auto-stereoscopic displays based on controllable liquid crystal gratings are proposed in this work. Using the dual-layer staggered structures on the top and bottom substrates as driving electrodes within a liquid crystal cell, the ratio between the transmitting region and the shielding region can be selectively controlled by the corresponding driving circuit, which means that 2D/3D switching and 3D video sources with different disparity images can be realized in the same auto-stereoscopic display system. Furthermore, the controlled region of the liquid crystal gratings presents a 3D mode while the other regions maintain a 2D mode in the same auto-stereoscopic display, via the corresponding driving circuit. This work demonstrates that controllable liquid crystal gratings have potential applications in the field of auto-stereoscopic displays.

  15. Stereoscopic 3D display with color interlacing improves perceived depth.

    PubMed

    Kim, Joohwan; Johnson, Paul V; Banks, Martin S

    2014-12-29

    Temporal interlacing is a method for presenting stereoscopic 3D content whereby the two eyes' views are presented at different times and optical filtering selectively delivers the appropriate view to each eye. This approach is prone to distortions in perceived depth because the visual system can interpret the temporal delay between binocular views as spatial disparity. We propose a novel color-interlacing display protocol that reverses the order of binocular presentation for the green primary but maintains the order for the red and blue primaries: During the first sub-frame, the left eye sees the green component of the left-eye view and the right eye sees the red and blue components of the right-eye view, and vice versa during the second sub-frame. The proposed method distributes the luminance of each eye's view more evenly over time. Because disparity estimation is based primarily on luminance information, a more even distribution of luminance over time should reduce depth distortion. We conducted a psychophysical experiment to test these expectations and indeed found that less depth distortion occurs with color interlacing than temporal interlacing.
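
    A minimal sketch of the sub-frame composition described above is given below, assuming RGB input frames and that the optical filtering routes each color component to the proper eye; the actual display driving is of course more involved.

    ```python
    import numpy as np

    def color_interlaced_subframes(left, right):
        """Build the two sub-frames of the colour-interlacing protocol: green
        is swapped between the eyes' presentation order while red and blue keep
        the usual temporal order. `left`/`right` are HxWx3 RGB frames; returns
        ((sub1_left, sub1_right), (sub2_left, sub2_right))."""
        l, r = np.asarray(left), np.asarray(right)
        zero = np.zeros_like(l[..., 0])
        # sub-frame 1: left eye sees green of the left view,
        #              right eye sees red and blue of the right view
        sub1_left = np.stack([zero, l[..., 1], zero], axis=-1)
        sub1_right = np.stack([r[..., 0], zero, r[..., 2]], axis=-1)
        # sub-frame 2: the complementary components
        sub2_left = np.stack([l[..., 0], zero, l[..., 2]], axis=-1)
        sub2_right = np.stack([zero, r[..., 1], zero], axis=-1)
        return (sub1_left, sub1_right), (sub2_left, sub2_right)
    ```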

  16. New three-dimensional head-mounted display system, TMDU-S-3D system, for minimally invasive surgery application: procedures for gasless single-port radical nephrectomy.

    PubMed

    Kihara, Kazunori; Fujii, Yasuhisa; Masuda, Hitoshi; Saito, Kazutaka; Koga, Fumitaka; Matsuoka, Yoh; Numao, Noboru; Kojima, Kazuyuki

    2012-09-01

    We present an application of a new three-dimensional head-mounted display system that combines a high-definition three-dimensional organic electroluminescent head-mounted display with a high-definition three-dimensional endoscope to minimally invasive surgery, using gasless single-port radical nephrectomy procedures as a model. This system presents the surgeon with a higher quality of magnified three-dimensional imagery in front of the eyes regardless of head position, and simultaneously allows direct vision by moving the angle of sight downward. It is also significantly less expensive than the current robotic surgery system. While carrying out gasless single-port radical nephrectomy, the system provided the surgeon with excellent three-dimensional imagery of the operative field, direct vision of the outside and inside of the patient, and depth perception and tactile feedback through the devices. All four nephrectomies were safely completed within the operative time, blood loss was within usual limits and there were no complications. The display was light enough to comfortably be worn for a long operative time. Our experiences show that the three-dimensional head-mounted display system might facilitate maneuverability and safety in minimally invasive procedures, without prohibitive cost, and thus might mitigate the drawbacks of other three-dimensional vision systems. Because of the potential benefits that this system offers, it deserves further refinements of its role in various minimally invasive surgeries.

  17. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    PubMed

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

    This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor than a rectangular lens array; however, each pixel of an elemental image must be assigned to a single hexagonal lens, so generation of the entire EIA requires very large computations. The proposed method reduces the processing time for the EIAs of a given hexagonal lens array. The proposed image space parallel processing (ISPP) method enhances the processing speed, enabling real-time interactive integral-imaging 3D display for a hexagonal lens array. In our experiment, we generated the EIAs for a hexagonal lens array in real time and obtained good processing times for large volume data over multiple lens-array configurations.
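
    The per-pixel lens assignment that makes hexagonal EIA generation expensive can be illustrated with a simple nearest-center rule on a hexagonal lattice, as sketched below. This is a CPU illustration of the assignment step only, under an assumed lattice parameterization, not the OpenCL ISPP implementation.

    ```python
    import numpy as np

    def nearest_hex_lens(px, py, pitch):
        """Return the centre of the hexagonal lens that pixel (px, py) falls
        under, assuming lens centres on a triangular lattice with horizontal
        pitch `pitch`, row spacing pitch*sqrt(3)/2, and odd rows shifted by
        pitch/2. Illustrative assignment rule only."""
        row_h = pitch * np.sqrt(3) / 2.0
        row = int(round(py / row_h))
        best, best_d2 = None, np.inf
        # check the nearest rows and their neighbouring columns
        for r in (row - 1, row, row + 1):
            offset = (pitch / 2.0) if (r % 2) else 0.0
            c = int(round((px - offset) / pitch))
            for cc in (c - 1, c, c + 1):
                cx, cy = cc * pitch + offset, r * row_h
                d2 = (px - cx) ** 2 + (py - cy) ** 2
                if d2 < best_d2:
                    best, best_d2 = (cx, cy), d2
        return best
    ```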

  18. Integral imaging based 3D display of holographic data.

    PubMed

    Yöntem, Ali Özgür; Onural, Levent

    2012-10-22

    We propose a method, and present applications of it, that converts a diffraction pattern into an elemental image set in order to display it on an integral-imaging-based display setup. We generate elemental images based on diffraction calculations as an alternative to the commonly used ray-tracing methods, which do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object/scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We show three examples, one of which uses digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup, in which we used a digital lenslet array. We also obtained numerical reconstructions, again using diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement.

  19. Fast-response liquid-crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Liu, Yifan; Ren, Hongwen; Xu, Su; Li, Yan; Wu, Shin-Tson

    2014-02-01

    Three-dimensional (3D) display has become an increasingly important technology trend for information display applications. Dozens of different 3D display solutions have been proposed. The autostereoscopic 3D display based on a lenticular microlens array is a promising approach, and a fast-switching microlens array enables such a system to display both 3D and conventional 2D images. Here we report two different fast-response microlens array designs. The first is a blue-phase liquid crystal (BPLC) lens driven by PEDOT:PSS resistive film electrodes. This BPLC lens exhibits several attractive features, such as polarization insensitivity, fast response time, a simple driving scheme, and relatively low driving voltage compared to other BPLC lens designs. The second lens design has a double-layered structure: the first layer is a polarization-dependent polymer microlens array, and the second layer is a thin twisted-nematic (TN) liquid crystal cell. When the TN cell is switched on or off, light traversing the polymeric lens array is either focused or defocused, so that 2D or 3D images are displayed correspondingly. This lens design has low driving voltage, fast response time, and a simple driving scheme. Simulation and experiment demonstrate that the performance of both switchable lenses meets the requirements of 3D display system design.

  20. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

  1. Stereopsis has the edge in 3-D displays

    NASA Astrophysics Data System (ADS)

    Piantanida, T. P.

    The results of studies conducted at SRI International to explore differences in image requirements for depth and form perception with 3-D displays are presented. Monocular and binocular stabilization of retinal images was used to separate form and depth perception and to eliminate the retinal disparity input to stereopsis. Results suggest that depth perception is dependent upon illumination edges in the retinal image that may be invisible to form perception, and that the perception of motion-in-depth may be inhibited by form perception, and may be influenced by subjective factors such as ocular dominance and learning.

  2. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  3. Instrument for 3D characterization of autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Prévoteau, J.; Chalençon-Piotin, S.; Debons, D.; Lucas, L.; Remion, Y.

    2011-03-01

    We now have numerous autostereoscopic displays, and it is essential to characterize them, because characterization allows us to optimize their performance and to compare them efficiently. We therefore need standards, and we must be able to quantify the quality of the viewer's perception. The purpose of the present paper is twofold: we first present a new instrument for characterizing 3D perception on a given autostereoscopic display; we then propose a new experimental protocol that allows a full characterization. This instrument will allow us to compare different autostereoscopic displays efficiently, and it will also validate in practice the adequacy between the shooting and rendering geometries. To this end, we match a perceived scene with the virtual scene. It is hardly possible to determine directly the scene perceived by a viewer placed in front of an autostereoscopic display: while this may be feasible for pop-out content, it is impossible for the depth effect, because the virtual scene is set behind the screen. Therefore, we use an optical illusion based on the deflection of light by a mirror to determine the positions at which the viewer perceives certain points of the virtual scene on an autostereoscopic display.

  4. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may provide a more accurate evaluation; however, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized through image processing techniques that assign different levels of crosstalk to image pairs. It can be seen from the literature that the structure of a scene has a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which directly evaluates the structural changes between two complex-structured signals. The structural changes of the left view and the right view are then computed separately and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
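
    The structural-distortion stage described above can be sketched with an off-the-shelf SSIM map, as below. The averaging of the left and right maps and the linear depth weighting are assumptions (the abstract does not specify the exact combination rule), and 8-bit grayscale inputs are assumed.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def crosstalk_distortion_map(orig_l, orig_r, dist_l, dist_r, depth, alpha=1.0):
        """Combine left/right SSIM maps between original and crosstalk-distorted
        views into one distortion map, then emphasise pop-out regions with a
        normalised depth map (combination rule assumed for illustration)."""
        _, ssim_l = structural_similarity(orig_l, dist_l, data_range=255, full=True)
        _, ssim_r = structural_similarity(orig_r, dist_r, data_range=255, full=True)
        distortion = 1.0 - 0.5 * (ssim_l + ssim_r)        # per-pixel structural change
        depth = np.asarray(depth, float)
        depth_n = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
        return distortion * (1.0 + alpha * depth_n)       # weight pop-out regions more
    ```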

  5. Lamina 3D display: projection-type depth-fused display using polarization-encoded depth information.

    PubMed

    Park, Soon-gi; Yoon, Sangcheol; Yeom, Jiwoon; Baek, Hogil; Min, Sung-Wook; Lee, Byoungho

    2014-10-20

    In order to realize three-dimensional (3D) displays, various multiplexing methods have been proposed to add the depth dimension to two-dimensional scenes. However, most of these methods have faced challenges such as degradation of viewing quality, the need for complicated equipment, and large amounts of data. In this paper, we further develop our previous concept, the polarization-distributed depth map, to propose the Lamina 3D display as a method for encoding and reconstructing depth information using polarization status. By adopting projection optics in the depth-encoding system, the reconstructed 3D images can be scaled like the images of 2D projection displays. The 3D reconstruction characteristics of the polarization-encoded images are analyzed with simulation and experiment. An experimental system is also demonstrated to show the feasibility of the proposed method.

  6. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-06

    Using a light-field reconstruction technique, we can display a floating 3D scene in the air that is viewable from 360 degrees around with correct occlusion effects. A high-frame-rate color projector and a flat light-field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all the surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate display performance. A prototype has been built, and real 3D color animated images have been presented vividly. The experimental results verify the capability of this method.

  7. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. However, with the 3D GS method, serious distortion occurs in reconstructions of binary input images. We have eliminated this distortion and improved the image quality of the reconstructions by a maximum of 486% using a symmetrical 3D GS algorithm developed from the traditional 3D GS algorithm. In addition, the hologram computation speed has been accelerated by a factor of 9.28, which is significant for real-time holographic displays.
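
    For readers unfamiliar with the underlying iteration, a standard (2D) Gerchberg-Saxton loop is sketched below; the symmetrical 3D variant proposed in the paper is not reproduced here.

    ```python
    import numpy as np

    def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
        """Standard GS loop for a phase-only CGH: iterate between the hologram
        plane and the image plane, keeping the phase and replacing the amplitude
        with the known constraint in each plane."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
        field = target_amplitude * np.exp(1j * phase)
        for _ in range(n_iter):
            holo = np.fft.ifft2(field)                       # back to hologram plane
            holo = np.exp(1j * np.angle(holo))               # phase-only constraint
            field = np.fft.fft2(holo)                        # forward to image plane
            field = target_amplitude * np.exp(1j * np.angle(field))  # amplitude constraint
        return np.angle(holo)                                # phase-only hologram
    ```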

  8. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of a 3D surgical plan that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, a physical tooth model manipulated to determine the optimum occlusal position, with the simultaneously displayed real-time 3D-CT skeletal images (the 3D image display portion), it is possible to determine a mandibular position and posture that accounts for the improvement of skeletal morphology and occlusal condition. The realistic operation of the physical model together with the virtual 3D image display enabled the construction of a surgical simulation system that incorporates augmented reality.

  9. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  10. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  11. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with a slightly different perspective, in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges in such optical devices is crosstalk between the two channels. Crosstalk occurs when the optical device does not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye sees a little of the left image; this results in eyestrain and headaches. A pair of interference filters worn as eyewear can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated; the term "conjugated" means that the passband regions of one filter do not overlap with those of the other but are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) whose spectral bands match the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of one filter are seen only by the eye that has the matching multiband filter. The inherent characteristic of the interference filters allows little or no transmission of the wrong-side stereoscopic image.

  12. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberration severely affects the display performance of an auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution, and the display depth is obtained through aberration analysis. Based on the proposed method, the display depth of centrally and marginally reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify this conclusion.

  13. 3D dynamic holographic display by modulating complex amplitude experimentally.

    PubMed

    Li, Xin; Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-09-09

    A complex amplitude modulation method is presented theoretically and demonstrated experimentally for three-dimensional (3D) dynamic holographic display with reduced speckle using a single phase-only spatial light modulator. The determination of essential factors is discussed based on the underlying principle and theory. Numerical simulations and optical experiments are performed, in which static and animated objects without surface refinement and without random initial phases are reconstructed successfully. The results indicate that this method can effectively reduce speckle in the reconstructed images; furthermore, it does not introduce internal structure into the reconstructed pixels. Since the complex amplitude modulation is based on the principle of a phase-only hologram, it does not need stringent pixel alignment. This method can be used for high-resolution imaging or measurement in various optical areas.

  14. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  15. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  16. Future of photorefractive based holographic 3D display

    NASA Astrophysics Data System (ADS)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

    The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008 [1]. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full-color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video rate.

  17. Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Wilhelm, Dirk; Reiser, Silvano; Kohn, Nils; Witte, Michael; Leiner, Ulrich; Mühlbach, Lothar; Ruschin, Detlef; Reiner, Wolfgang; Feussner, Hubertus

    2014-03-01

    Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations such as eye strain, diplopia, and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study on 48 test subjects was conducted to investigate whether surgeons can benefit from using the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display, and a mirror-based, theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert surgeon group with extensive experience in laparoscopic procedures. Each of them had to conduct a comparable laparoscopic suturing task. Multiple performance parameters, such as task completion time and the precision of stitching, were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity, and economy. The NASA task load index was used to assess mental workload. Subjective ratings were added to assess usability, comfort, and image quality of each display. Almost all performance parameters were superior for the 3D glasses-based display compared to the 2D and the autostereoscopic displays, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence for the additional potential of 3D visualization systems with higher resolution and motion parallax presentation.

  18. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The presented system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness at the viewing zone. The eye tracking, which monitors the positions of the viewer's eyes, enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The software combined with eye tracking provides the correct images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (without eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate substantial reduction of the point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can largely resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D displays to replace their eyewear-assisted counterparts.
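
    The pixel-size-to-slit-aperture trade-off above is engineered on top of the textbook parallax-barrier geometry, which can be sketched as below. The formulas are the standard design relations for a barrier in front of the pixel plane, stated here as background rather than as the paper's engineered barrier.

    ```python
    def barrier_design(pixel_pitch_mm, n_views, eye_sep_mm=65.0, view_dist_mm=600.0):
        """Textbook parallax-barrier geometry: return the barrier-to-pixel gap
        and the barrier pitch so adjacent view pixels are steered to points one
        eye separation apart at the design distance."""
        gap = pixel_pitch_mm * view_dist_mm / eye_sep_mm               # from e/p = D/g
        pitch = n_views * pixel_pitch_mm * view_dist_mm / (view_dist_mm + gap)
        return gap, pitch

    # example: 4-view design on a panel with 0.1 mm sub-pixel pitch
    g, b = barrier_design(0.1, 4)
    print(f"barrier gap = {g:.3f} mm, barrier pitch = {b:.4f} mm")
    ```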

  19. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes at 3D locations in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision® for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
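
    A common geometric way to estimate the 3D gaze point is to intersect the two gaze rays at their point of closest approach, as sketched below; this is a generic baseline and not necessarily the optimized method proposed in the paper.

    ```python
    import numpy as np

    def gaze_point_3d(left_eye, left_dir, right_eye, right_dir):
        """Estimate the 3D gaze point as the midpoint of the closest approach
        of the two gaze rays (generic geometric baseline)."""
        p1, d1 = np.asarray(left_eye, float), np.asarray(left_dir, float)
        p2, d2 = np.asarray(right_eye, float), np.asarray(right_dir, float)
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:       # rays nearly parallel: project p1 onto ray 2
            s, t = 0.0, e / c
        else:
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
        return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
    ```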

  20. Dual side transparent OLED 3D display using Gabor super-lens

    NASA Astrophysics Data System (ADS)

    Chestak, Sergey; Kim, Dae-Sik; Cho, Sung-Woo

    2015-03-01

    We devised a dual-side transparent 3D display using a transparent OLED panel and two lenticular arrays. The OLED panel is sandwiched between two parallel confocal lenticular arrays, forming a Gabor super-lens. The display provides dual-side stereoscopic 3D imaging and a floating image of an object placed behind it; the floating image can be superimposed on the displayed 3D image. The displayed autostereoscopic 3D images are composed of 4 views, each with a resolution of 64x90 pixels.

  1. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. The steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rate. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good presentation quality, various stimuli, and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show that there is a significant difference (p-value < 0.05) between large and small disparity angles, and that the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. Based on the results of 3D perception and SSVEP responses (SNR), 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications. Furthermore, we can infer the 3D perception of users from their SSVEP responses and automatically adjust the disparity of 3D images in the future.

  2. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right halves of the display panel are captured from two different 3D scenes, respectively. The light emitted from the two kinds of EIs is modulated by the left and right halves of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  3. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier optics viewing angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration, and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, three viewing angle measurements are performed at three positions (center, right, and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance, and the crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Display aspect simulation using the viewing angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.

  4. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt lens are well separated from other reflection boundaries, such as the seafloor, layer 2A, and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements, or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines, and vectors (e.g., fault scarps, earthquake locations, and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects grows in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., the alpha channel). In this way, the co-variation between different datasets can be investigated
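
    The voxel-transparency idea mentioned above can be illustrated with a minimal opacity transfer function and front-to-back compositing, as sketched below. This is a generic volume-rendering fragment, not the visualization package used by the authors; the cutoff and ramp values are hypothetical.

```python
import numpy as np

# Minimal sketch of voxel transparency: map each voxel's reflectivity
# amplitude to an opacity, reject weak voxels entirely, and composite along a
# ray front to back so the viewer can "peer into" the volume.

def opacity_transfer(amplitude: np.ndarray,
                     cutoff: float = 0.2,
                     ramp: float = 0.4) -> np.ndarray:
    """Per-voxel opacity in [0, 1]; voxels below `cutoff` are fully transparent."""
    return np.clip((np.abs(amplitude) - cutoff) / ramp, 0.0, 1.0)

def composite_ray(amplitudes_along_ray: np.ndarray) -> float:
    """Front-to-back alpha compositing of scalar intensities along one ray."""
    color, transmittance = 0.0, 1.0
    for a in amplitudes_along_ray:
        alpha = float(opacity_transfer(np.asarray(a)))
        color += transmittance * alpha * abs(a)   # use |amplitude| as gray value
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-3:                  # early ray termination
            break
    return color

print(composite_ray(np.array([0.05, 0.1, 0.8, 0.4, 0.9])))
```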

  5. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle
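
    A minimal sketch of how telemetered joint states can drive an articulated 3D model is given below: chained homogeneous transforms (forward kinematics) place each link for rendering. The two-joint arm, the telemetry keys, and the link lengths are hypothetical and are not the API of the NASA software described above.

```python
import numpy as np

# Illustrative forward-kinematics sketch: update an articulated model from
# telemetered joint angles by chaining homogeneous transforms.

def rot_z(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def translate(x: float, y: float, z: float) -> np.ndarray:
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def arm_link_poses(telemetry: dict) -> list:
    """World-frame poses of each link of a simple, hypothetical 2-joint arm."""
    base = translate(*telemetry["base_xyz"])
    shoulder = base @ rot_z(telemetry["shoulder_angle"]) @ translate(0.5, 0, 0)
    elbow = shoulder @ rot_z(telemetry["elbow_angle"]) @ translate(0.3, 0, 0)
    return [base, shoulder, elbow]

poses = arm_link_poses({"base_xyz": (1.0, 2.0, 0.0),
                        "shoulder_angle": 0.4,
                        "elbow_angle": -0.7})
print(poses[-1][:3, 3])   # end-effector position to render or monitor
```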

  6. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such a 3D display screen has been restricted to a limited size due to the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm2/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the finite-difference time-domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for the 9-view 3D images with horizontal parallax. In the other prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for the 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
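
    The design rule implied above, choosing each nano-grating pixel's period and in-plane orientation so that its first diffraction order points at an assigned view zone, can be sketched with the scalar grating equation under normal incidence, as below. The wavelength, pixel position, and viewpoint are illustrative assumptions, not the fabricated device's parameters.

```python
import numpy as np

# Hedged sketch: pick a grating period and in-plane orientation per pixel so
# that the first diffraction order leaves toward an assigned view position.
# Assumes normal incidence and sin(theta) = wavelength / period.

WAVELENGTH = 532e-9   # green illumination [m], an assumed value

def grating_for_view(pixel_xy, view_xyz):
    """Return (period [m], orientation [rad]) steering light from the pixel
    at `pixel_xy` (on the z = 0 plane) toward the 3D point `view_xyz`."""
    px, py = pixel_xy
    vx, vy, vz = view_xyz
    dx, dy = vx - px, vy - py
    r = np.hypot(dx, dy)
    theta = np.arctan2(r, vz)               # deflection angle from the surface normal
    period = WAVELENGTH / np.sin(theta)     # first-order grating equation
    orientation = np.arctan2(dy, dx)        # grating vector points toward the view
    return period, orientation

period, orientation = grating_for_view((0.01, 0.0), (0.10, 0.0, 0.40))
print(f"period = {period*1e9:.0f} nm, orientation = {np.degrees(orientation):.1f} deg")
```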

  7. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directionally continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample placed at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  8. Principle and characteristics of 3D display based on random source constructive interference.

    PubMed

    Li, Zhiyang

    2014-07-14

    The paper discusses the principle and characteristics of a 3D display based on random source constructive interference (RSCI). The voxels of discrete 3D images are formed in the air via constructive interference of spherical light waves emitted by point light sources (PLSs) that are arranged at random positions to suppress high-order diffraction. The PLSs might be created by two liquid crystal panels sandwiched between two micro-lens arrays. The point spread function of the system reveals that it is able to reconstruct voxels with diffraction-limited resolution over a large field width and depth. The high resolution was confirmed by experiments. Theoretical analyses also show that the system could provide 3D image contrast and gray levels no less than those of the liquid crystal panels. Compared with 2D display, it needs only additional depth information, which brings only about a 30% increase in data.

  9. Assessment of eye fatigue caused by 3D displays based on multimodal measurements.

    PubMed

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-09-04

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. There have been previous studies on eye fatigue related to 3D display use; however, most of these have employed a limited number of measurement modalities, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations between the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively.
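
    A hedged illustration of the kind of before/after statistics mentioned above (paired T-test, correlation matrix, and an effect-size estimate) is sketched below for a single modality such as blink rate. The sample values are invented, and the script is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

# Example before/after analysis for one modality, e.g. eye blink rate
# measured before and after 3D viewing. Sample values are made up.

before = np.array([14.2, 11.8, 16.5, 13.0, 12.4, 15.1, 10.9, 13.7])  # blinks/min
after  = np.array([18.9, 15.2, 19.8, 16.4, 14.9, 18.3, 13.5, 17.2])

t_stat, p_value = stats.ttest_rel(before, after)       # paired T-test
diff = after - before
cohens_d = diff.mean() / diff.std(ddof=1)               # effect size for paired data
r_matrix = np.corrcoef(np.vstack([before, after]))      # simple correlation matrix

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
print(r_matrix)
```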

  10. See-through multi-view 3D display with parallax barrier

    NASA Astrophysics Data System (ADS)

    Hong, Jong-Young; Lee, Chang-Kun; Park, Soon-gi; Kim, Jonghyun; Cha, Kyung-Hoon; Kang, Ki Hyung; Lee, Byoungho

    2016-03-01

    In this paper, we propose a see-through parallax-barrier-type multi-view display with a transparent liquid crystal display (LCD). The transparency of the LCD is realized by detaching the backlight unit. The number of views in the proposed system is minimized to enlarge the aperture size of the parallax barrier, which determines the transparency. To compensate for the small number of viewpoints, an eye tracking method is applied to provide a large number of views and vertical parallax. Through experiments, a prototype of the see-through autostereoscopic 3D display with a parallax barrier is implemented, and the system parameters of transmittance, crosstalk, and barrier structure perception are analyzed.

  11. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigue induced by active-shutter stereoscopic 3D (S3D) displays has been reported using event-related brain potentials (ERPs). An important question, namely whether such effects (visual and mental fatigue) can also be found with passive polarized S3D displays, is answered here. Sixty-eight healthy participants are divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos on a passive polarized display or a 2D display. The age and fluid intelligence ability of the participants are controlled between the groups. The ERP results do not show any significant differences between the S3D and 2D groups that would indicate aftereffects of S3D in terms of visual or mental fatigue. Hence, we conclude that passive polarized S3D display technology may not induce the visual and/or mental fatigue that would increase cognitive load and suppress the ERP components.

  12. Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2016-10-20

    A method for continuous optical rotation compensation in a time-division-based holographic three-dimensional (3D) display with a rotating mirror is presented. Since the coordinate system of the wavefronts after the mirror reflection rotates about the optical axis along with the rotation angle, compensation or cancellation is absolutely necessary to keep the reconstructed 3D object fixed. In this study, we address this problem by introducing an optical image rotator based on a right-angle prism that rotates synchronously with the rotating mirror. The optical and continuous compensation reduces the occurrence of duplicate images, which improves the quality of the reconstructed images. The effect of the optical rotation compensation is verified experimentally, and a demonstration of a holographic 3D display with optical rotation compensation is presented.
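
    The paper performs the compensation optically with a synchronized prism; for intuition, the sketch below writes the equivalent operation as a coordinate counter-rotation that cancels the mirror-induced rotation of the wavefront coordinates. The angle and sample points are arbitrary examples.

```python
import numpy as np

# Equivalent-math sketch of rotation compensation: rotating the transverse
# coordinates by the mirror angle and then applying the inverse rotation
# leaves the reconstructed object fixed.

def rotate_about_axis(points_xy: np.ndarray, theta: float) -> np.ndarray:
    """Rotate 2D transverse coordinates about the optical axis by theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T

theta = np.radians(37.0)                       # instantaneous mirror angle (example)
obj = np.array([[1.0, 0.0], [0.0, 2.0]])       # sample transverse points
seen_after_mirror = rotate_about_axis(obj, theta)
compensated = rotate_about_axis(seen_after_mirror, -theta)
assert np.allclose(compensated, obj)           # the compensation cancels the rotation
print(compensated)
```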

  13. A new way to characterize autostereoscopic 3D displays using Fourier optics instrument

    NASA Astrophysics Data System (ADS)

    Boher, P.; Leroux, T.; Bignon, T.; Collomb-Patton, V.

    2009-02-01

    Auto-stereoscopic 3D displays presently offer the most attractive solution for entertainment and media consumption. Despite many studies devoted to this type of technology, efficient characterization methods are still missing. We present here an innovative optical method based on high-angular-resolution viewing angle measurements with a Fourier optics instrument. This type of instrument allows the full viewing angle aperture of the display to be measured very rapidly and accurately. The system used in this study has a very high angular resolution, below 0.04 degree, which is mandatory for this type of characterization. From the luminance or color viewing angle measurements of the different views of the 3D display, we can predict what will be seen by an observer at any position in front of the display. Quality criteria are derived for both 3D and standard properties at any observer position, and the Qualified Stereo Viewing Space (QSVS) is determined. The use of viewing angle measurements at different locations on the display surface during the observer computation gives a more realistic estimation of the QSVS and ensures its validity for the entire display surface. The optimum viewing position, viewing freedom, color shifts, and standard parameters are also quantified. Simulation of moiré issues can be performed, leading to a better understanding of their origin.

  14. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before; on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation through which all the information in a large 3-D image database is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  15. Polymeric-lens-embedded 2D/3D switchable display with dramatically reduced crosstalk.

    PubMed

    Zhu, Ruidong; Xu, Su; Hong, Qi; Wu, Shin-Tson; Lee, Chiayu; Yang, Chih-Ming; Lo, Chang-Cheng; Lien, Alan

    2014-03-01

    A two-dimensional/three-dimensional (2D/3D) switchable display system is presented, based on a twisted-nematic cell integrated with a polymeric microlens array. This device structure has the advantages of fast response time and low operation voltage. The crosstalk of the system is analyzed in detail, and two approaches are proposed to reduce it: a double-lens system and a prism approach. Illuminance distribution analysis shows that these two approaches can dramatically reduce crosstalk, thus improving image quality.

  16. 3-D Extensions for Trustworthy Systems

    DTIC Science & Technology

    2011-01-01

    3-D Extensions for Trustworthy Systems (Invited Paper) Ted Huffmire∗, Timothy Levin∗, Cynthia Irvine∗, Ryan Kastner† and Timothy Sherwood...address these problems, we propose an approach to trustworthy system development based on 3-D integration, an emerging chip fabrication technique in...which two or more integrated circuit dies are fabricated individually and then combined into a single stack using vertical conductive posts. With 3-D

  17. 3D brain MR angiography displayed by a multi-autostereoscopic screen

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Ribeiro, Fádua H.; Lima, Fabrício O.; Serra, Rolando L.; Moreno, Alfredo B.; Li, Li M.

    2012-02-01

    Magnetic resonance angiography (MRA) can be used to examine blood vessels in key areas of the body, including the brain. In MRA, a powerful magnetic field, radio waves, and a computer produce the detailed images. Physicians use the procedure on brain images mainly to detect atherosclerotic disease in the carotid artery of the neck, which may limit blood flow to the brain and cause a stroke, and to identify small aneurysms or arteriovenous malformations inside the brain. Multi-autostereoscopic displays provide multiple views of the same scene, rather than just two as in conventional autostereoscopic systems. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left and right in front of the display and see the correct view from any position. The use of 3D imaging in the medical field has proven to be a benefit to doctors when diagnosing patients. For different medical domains, a stereoscopic display could be advantageous in terms of a better spatial understanding of anatomical structures, better perception of ambiguous anatomical structures, better performance of tasks that require a high level of dexterity, increased learning performance, and improved communication with patients or between doctors. In this work we describe a multi-autostereoscopic system and how to produce 3D MRA images to be displayed with it. We show results from brain MR angiography images, discussing how 3D visualization can help physicians reach a better diagnosis.

  18. A 3D integral imaging optical see-through head-mounted display.

    PubMed

    Hua, Hong; Javidi, Bahram

    2014-06-02

    An optical see-through head-mounted display (OST-HMD), which enables optical superposition of digital information onto the direct view of the physical world and maintains see-through vision to the real world, is a vital component in an augmented reality (AR) system. A key limitation of the state-of-the-art OST-HMD technology is the well-known accommodation-convergence mismatch problem caused by the fact that the image source in most of the existing AR displays is a 2D flat surface located at a fixed distance from the eye. In this paper, we present an innovative approach to OST-HMD designs by combining the recent advancement of freeform optical technology and microscopic integral imaging (micro-InI) method. A micro-InI unit creates a 3D image source for HMD viewing optics, instead of a typical 2D display surface, by reconstructing a miniature 3D scene from a large number of perspective images of the scene. By taking advantage of the emerging freeform optical technology, our approach will result in compact, lightweight, goggle-style AR display that is potentially less vulnerable to the accommodation-convergence discrepancy problem and visual fatigue. A proof-of-concept prototype system is demonstrated, which offers a goggle-like compact form factor, non-obstructive see-through field of view, and true 3D virtual display.

  19. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies prove enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has been under way concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored ground proximity warning systems (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, concerning the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises whether and how the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Furthermore, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  20. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  1. Evaluation of stereoscopic 3D displays for image analysis tasks

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, which results in a higher performance. Changing image acquisition from analog to digital techniques entailed the change of stereoscopic visualisation techniques. Recently different kinds of digital stereoscopic display techniques with affordable prices have appeared on the market. At Fraunhofer IITB usability tests were carried out to find out (1) with which kind of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve a high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with a higher performance using stereoscopic display techniques. Next, observer experiments were carried out whereby image analysts had to solve defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the used display techniques) two of the examined stereoscopic display technologies were found to be very good and appropriate.

  2. Focus-tunable multi-view holographic 3D display using a 4k LCD panel

    NASA Astrophysics Data System (ADS)

    Lin, Qiaojuan; Sang, Xinzhu; Chen, Zhidong; Yan, Binbin; Yu, Chongxiu; Wang, Peng; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    A focus-tunable multi-view holographic three-dimensional (3D) display system with a 10.1-inch 4K liquid crystal display (LCD) panel is presented. In the proposed synthesizing method, the computer-generated hologram (CGH) does not require calculation of light diffraction. When multiple rays passing through one point of a 3D image enter the pupil simultaneously, the eyes can focus on that point according to the depth cue. Benefiting from holograms, dense multiple perspective viewpoints of the 3D object are recorded and combined into the CGH in a dense-super-view manner, which makes two or more rays emitted from the same point of the reconstructed light field enter the pupil simultaneously. In general, a wavefront is converged to a viewpoint with the amplitude distribution of the multi-view images on the hologram plane and the phase distribution of a spherical wave converging to that viewpoint. Here, the wavefronts are calculated for all the multi-view images and then summed up to obtain the object wave on the hologram plane (see the sketch below). Moreover, a reference light (converging light) is adopted to converge the central diffraction wave from the LCD into a common area at a short viewing distance. Experimental results show that the proposed holographic display can reproduce 3D objects with focus cues: accommodation and retinal blur.
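
    A hedged sketch of the wavefront-summation step is given below: each view image supplies the amplitude on the hologram plane, multiplied by the phase of a spherical wave converging to that view's viewpoint, and all views are summed. The pixel pitch, wavelength, patch size, and viewpoint layout are assumptions, and a real CGH would additionally need encoding onto the panel.

```python
import numpy as np

# Hedged sketch of summing amplitude-weighted converging wavefronts over all
# views to form the object wave on the hologram plane. Parameters are
# illustrative, not the prototype's values.

WAVELENGTH = 532e-9
K = 2 * np.pi / WAVELENGTH
PITCH = 3.74e-6                       # assumed sub-pixel pitch
NX, NY = 512, 512                     # small patch for illustration

x = (np.arange(NX) - NX / 2) * PITCH
y = (np.arange(NY) - NY / 2) * PITCH
X, Y = np.meshgrid(x, y)

def converging_wave(viewpoint):
    """Phase of a spherical wave on the hologram plane converging to `viewpoint`."""
    vx, vy, vz = viewpoint
    r = np.sqrt((X - vx) ** 2 + (Y - vy) ** 2 + vz ** 2)
    return np.exp(-1j * K * r)

def synthesize_cgh(view_images, viewpoints):
    """Sum amplitude-weighted converging wavefronts over all views."""
    hologram = np.zeros((NY, NX), dtype=complex)
    for amplitude, vp in zip(view_images, viewpoints):
        hologram += amplitude * converging_wave(vp)
    return hologram

# Two dummy views converging to closely spaced viewpoints 0.3 m away.
views = [np.ones((NY, NX)), 0.5 * np.ones((NY, NX))]
vps = [(-2e-3, 0.0, 0.3), (2e-3, 0.0, 0.3)]
H = synthesize_cgh(views, vps)
print(H.shape, np.abs(H).max())
```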

  3. 3D display for enhanced tele-operation and other applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-04-01

    In this paper, we report on the use of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  4. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    NASA Astrophysics Data System (ADS)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

    This paper describes the design and construction of a low-cost, practical autostereoscopic display that does not require the viewer to wear special glasses and uses eye tracking to give a large degree of freedom to the viewer's (or viewers') movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an autostereoscopic display. The stereo image pair is shown on the liquid crystal display simultaneously but in different columns of pixels. Controlling the display at the red-green-blue subpixel level increases the accuracy of the light projection direction to less than 2 degrees without losing too much of the LCD's resolution; an eye-tracking system determines the correct angle to project the images toward the viewer's eye pupils, and an image processing system puts the 3D image data into the correct R-G-B subpixels. A light-direction control accuracy of 1.6 degrees was achieved in practice. The 3D monitor is made simply by applying some simple optical materials to a standard LCD with normal resolution.

  5. Special subpixel arrangement-based 3D display with high horizontal resolution.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wu, Fei

    2014-11-01

    A 3D display based on a special subpixel arrangement is proposed. This display consists of a 2D display panel and a parallax barrier. On the 2D display panel, the subpixels have a special arrangement, so they can redefine the formation of color pixels. This subpixel arrangement can provide triple the horizontal resolution of a conventional 2D display panel. Therefore, when these pixels are modulated by the parallax barrier, the 3D images formed also have triple the horizontal resolution. A prototype of this display is developed. Experimental results show that this display with triple horizontal resolution can produce a better display effect than the conventional one.

  6. Volumetric 3D display with multi-layered active screens for enhanced the depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various physiological factors such as accommodation, binocular parallax, convergence, and motion parallax are used to perceive a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this method causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays can represent images using voxels, which occupy a physical volume. However, a large amount of data is required to represent the depth information with voxels. In order to encode 3D information simply, a compact depth-fused 3D (DFD) display, which can create a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image, is introduced. In this paper, a new volumetric 3D display system is shown using a PDDM image controlled by a polarization controller. In order to introduce the PDDM image, the polarization states of the light passing through the spatial light modulator (SLM) were analyzed using Stokes parameters as a function of gray level. Based on this analysis, the polarization controller is properly designed to convert the PDDM image into sectioned depth images. After synchronizing the PDDM images with the active screens, we can realize a reconstructed 3D image. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.

  7. Full-color 3D display using binary phase modulation and speckle reduction

    NASA Astrophysics Data System (ADS)

    Matoba, Osamu; Masuda, Kazunobu; Harada, Syo; Nitta, Kouichi

    2016-06-01

    A 3D display system for full-color reconstruction using binary phase modulation is presented. The quality of the reconstructed objects is improved by optimizing the binary phase modulation and accumulating speckle patterns obtained by changing the random phase distributions. The binary phase pattern is optimized by a modified Fresnel ping-pong algorithm. Numerical and experimental demonstrations of full-color reconstruction are presented.

  8. Multiplexing encoding method for full-color dynamic 3D holographic display.

    PubMed

    Xue, Gaolei; Liu, Juan; Li, Xin; Jia, Jia; Zhang, Zhao; Hu, Bin; Wang, Yongtian

    2014-07-28

    A multiplexing encoding method is proposed and demonstrated for accurately reconstructing colorful images using a single phase-only spatial light modulator (SLM). It encodes the light waves at different wavelengths into one pure-phase hologram simultaneously, based on analytic formulas. The three-dimensional (3D) images can be reconstructed clearly when the light waves at the different wavelengths illuminate the encoded hologram. Numerical simulations and optical experiments for 2D and 3D colorful images are performed. The results show that high-quality colorful reconstructed images are achieved successfully. The proposed multiplexing method is a simple and fast encoding approach, and the system is small and compact. It is expected to be used for realizing full-color 3D holographic displays in the future.

  9. Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements

    PubMed Central

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-01-01

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. There have been previous studies on eye fatigue related to 3D display use; however, most of these have employed a limited number of measurement modalities, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations between the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively. PMID:25192315

  10. 3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC).

    PubMed

    Navarro, H; Martínez-Cuenca, R; Saavedra, G; Martínez-Corral, M; Javidi, B

    2010-12-06

    Previously, we reported a digital technique for formation of real, non-distorted, orthoscopic integral images by direct pickup. However the technique was constrained to the case of symmetric image capture and display systems. Here, we report a more general algorithm which allows the pseudoscopic to orthoscopic transformation with full control over the display parameters so that one can generate a set of synthetic elemental images that suits the characteristics of the Integral-Imaging monitor and permits control over the depth and size of the reconstructed 3D scene.

  11. Technical solutions for a full-resolution autostereoscopic 2D/3D display technology

    NASA Astrophysics Data System (ADS)

    Stolle, Hagen; Olaya, Jean-Christophe; Buschbeck, Steffen; Sahm, Hagen; Schwerdtner, Armin

    2008-02-01

    Auto-stereoscopic 3D displays capable of high quality, full-resolution images for multiple users can only be created with time-sequential systems incorporating eye tracking and a dedicated optical design. The availability of high speed displays with 120Hz and faster eliminated one of the major hurdles for commercial solutions. Results of alternative display solutions from SeeReal show the impact of optical design on system performance and product features. Depending on the manufacturer's capabilities, system complexity can be shifted from optics to SLM with an impact on viewing angle, number of users and energy efficiency, but also on manufacturing processes. A proprietary solution for eye tracking from SeeReal demonstrates that the required key features can be achieved and implemented in commercial systems in a reasonably short time.

  12. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  13. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics, pursued through dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through-vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  14. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of the real-time 3D holographic display developed last year, we developed a new concept for a wide-angle (90°), full-color, autostereoscopic multiview (64 views) 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezoelectric transducer that generates a variable standing acoustic wave in the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected to a different viewing angle (see the sketch below). A holographic screen at the proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the right dimensions. VHDL firmware to render 64 views (16-bit 4:2:2) of a CAD model (obj, dxf, or 3ds) and depth-map-encoded video images in real time (16 ms) was developed on the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.
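
    The view steering described above relies on acousto-optic deflection; under a thin-grating approximation the first-order deflection angle is roughly the optical wavelength times the acoustic frequency divided by the acoustic velocity. The sketch below uses this relation with illustrative crystal and frequency values, not the prototype's actual parameters.

```python
import numpy as np

# Hedged sketch of the acousto-optic deflection relation behind the view
# steering: deflection ~ wavelength * f_acoustic / v_acoustic (thin grating,
# first order). Crystal and frequency values are illustrative only.

WAVELENGTH = 532e-9        # green channel [m]
V_ACOUSTIC = 4200.0        # assumed acoustic velocity in the crystal [m/s]

def deflection_angle(f_acoustic_hz: float) -> float:
    """First-order deflection angle (radians) for a given acoustic frequency."""
    return WAVELENGTH * f_acoustic_hz / V_ACOUSTIC

def frequency_for_view(view_index: int, n_views: int = 64,
                       f_min: float = 60e6, f_max: float = 110e6) -> float:
    """Map a view index (0..n_views-1) to an acoustic drive frequency."""
    return f_min + (f_max - f_min) * view_index / (n_views - 1)

for v in (0, 31, 63):
    f = frequency_for_view(v)
    print(f"view {v:2d}: f = {f/1e6:5.1f} MHz, deflection = "
          f"{np.degrees(deflection_angle(f)):.3f} deg")
```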

  15. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general depth-mapping framework to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem (a simplified sketch of the global stage follows). The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
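
    A simplified sketch of the global remapping stage is given below: normalize the depth map, place a chosen zero-disparity plane, and linearly compress the result into a disparity budget regarded as comfortable. The limits, the zero-plane position, and the example depth map are hypothetical, and the paper's local optimization stage is not reproduced.

```python
import numpy as np

# Simplified global depth-to-disparity remapping around a chosen
# zero-disparity plane. All limits and the example map are hypothetical.

def remap_depth_to_disparity(depth: np.ndarray,
                             zero_plane: float = 0.6,
                             d_near: float = -12.0,   # max crossed disparity [px]
                             d_far: float = 8.0       # max uncrossed disparity [px]
                             ) -> np.ndarray:
    """Map a depth map in [0, 1] (1 = nearest) to screen disparities in pixels."""
    z = np.clip(depth, 0.0, 1.0)
    rel = z - zero_plane                    # signed distance from the zero-disparity plane
    disparity = np.where(rel > 0,
                         rel / (1.0 - zero_plane) * d_near,   # in front of the screen
                         rel / zero_plane * -d_far)           # behind the screen
    return disparity

depth_map = np.random.rand(4, 4)            # stand-in for a real depth map
print(remap_depth_to_disparity(depth_map))
```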

  16. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormalities during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, to accelerate rendering, a thin shell is defined based on the detected contours to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  17. Depth-of-Focus Affects 3D Perception in Stereoscopic Displays.

    PubMed

    Vienne, Cyril; Blondé, Laurent; Mamassian, Pascal

    2015-01-01

    Stereoscopic systems present binocular images on a planar surface at a fixed distance. They induce cues to flatness, indicating that images are presented on a single surface and specifying the relative depth of that surface. This study focuses on a second problem, arising when the distance of a 3D object differs from the display distance. As binocular disparity must be scaled using an estimate of viewing distance, object depth can thus be affected through disparity scaling. Two previous experiments revealed that stereoscopic displays can affect depth perception due to conflicting accommodation and vergence cues at near distances. In this study, depth perception is evaluated for farther accommodation and vergence distances using a commercially available 3D TV. In Experiment 1, we evaluated depth perception of 3D stimuli at different vergence distances for a large pool of participants. We observed a strong effect of vergence distance that was larger for younger than for older participants, suggesting that the effect of accommodation was reduced in participants with emerging presbyopia. In Experiment 2, we extended the 3D estimations by varying both the accommodation and vergence distances. We also tested the hypothesis that setting accommodation open-loop by constricting pupil size could decrease the contribution of focus cues to perceived distance. We found that depth constancy was affected by the accommodation and vergence distances and that the accommodation distance effect was reduced with a larger depth of focus. We discuss these results with regard to the effectiveness of focus cues as a distance signal. Overall, these results highlight the importance of appropriate focus cues in stereoscopic displays at intermediate viewing distances.

  18. Comparative Analysis of Virtual 3-D Visual Display Systems Contributions to Cross-Functional Team Collaboration in a Product Design Review Environment

    DTIC Science & Technology

    1998-01-01

    76 subjects interacted with the VE. These researchers also have explored application of neurolinguistic programming to quantify the sense of presence...model that a user forms of how a computer system or program works. It can be conceived as the users' understanding of the relationships between the...revise the program strategy. Another important objective of design reviews is to integrate knowledge from various functional areas into one common

  19. Investigation of a 3D head-mounted projection display using retro-reflective screen.

    PubMed

    Héricz, Dalma; Sarkadi, Tamás; Lucza, Viktor; Kovács, Viktor; Koppa, Pál

    2014-07-28

    We propose a compact head-worn 3D display which provides glasses-free full motion parallax. Two picoprojectors placed on the viewer's head project images on a retro-reflective screen that reflects left and right images to the appropriate eyes of the viewer. The properties of different retro-reflective screen materials have been investigated, and the key parameters of the projection - brightness and cross-talk - have been calculated. A demonstration system comprising two projectors, a screen tracking system and a commercial retro-reflective screen has been developed to test the visual quality of the proposed approach.

  20. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are acquired with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance, and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.

  1. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented-image-based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. To evaluate the uniformity of image brightness, we applied optical ray-tracing simulations. The simulations took the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones. However, this was contradicted by the real experimental results. We offer a quantitative treatment of the illuminance uniformity of the view images to estimate the misalignment of the PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of the PB orientation, due to practical restrictions on adjustment accuracy, can induce substantial non-uniformity in the brightness of the view images. We find that the image brightness non-uniformity critically depends on the misalignment of the PB angular orientation, by as little as ≤ 0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  2. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

    We examined the ability for pilots to estimate traffic location in an Integrated Hazard Display, and how such estimations should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  3. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects, using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were suggested, along with peculiarities of the formation of shadows and images of typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem, we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, along with technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the VVER-1000 and VVER-440 nuclear reactors, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  4. Design of extended viewing zone at autostereoscopic 3D display based on diffusing optical element

    NASA Astrophysics Data System (ADS)

    Kim, Min Chang; Hwang, Yong Seok; Hong, Suk-Pyo; Kim, Eun Soo

    2012-03-01

    In this paper, to realize a glasses-free 3D display as the next step from current glasses-type 3D displays, a viewing zone design for a 3D display using a DOE (diffusing optical element) is suggested. The viewing zone of the proposed method is larger than that of the current parallax barrier or lenticular methods. The proposed method is shown to enable the expansion and adjustment of the viewing zone area according to the viewing distance.

  5. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.

  6. A guide for human factors research with stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Pinkus, Alan R.

    2015-05-01

    In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
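
    As a minimal illustration of the disparity calculations the guide refers to (a sketch under assumptions, not the authors' exact procedure), the small-angle approximation relates the inter-pupillary distance and the depths of two points to their angular binocular disparity:

        import math

        def angular_disparity_arcmin(ipd_m, d1_m, d2_m):
            # Small-angle binocular disparity (in arcminutes) between two points at
            # viewing distances d1 and d2, for inter-pupillary distance ipd.
            eta_rad = ipd_m * abs(d2_m - d1_m) / (d1_m * d2_m)
            return math.degrees(eta_rad) * 60.0

        # Illustrative values: 64 mm IPD, points at 2.0 m and 2.1 m -> about 5.2 arcmin
        print(angular_disparity_arcmin(0.064, 2.0, 2.1))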

  7. Transpost: a novel approach to the display and transmission of 360 degrees-viewable 3D solid images.

    PubMed

    Otsuka, Rieko; Hoshino, Takeshi; Horry, Youichi

    2006-01-01

    Three-dimensional displays are drawing attention as next-generation devices. Some techniques that can reproduce three-dimensional images prepared in advance have already been developed. However, technology for the transmission of 3D moving pictures in real time is yet to be achieved. In this paper, we present a novel method for 360-degree-viewable 3D displays and the Transpost system in which we implement the method. The basic concept of our system is to project multiple images of the object, taken from different angles, onto a spinning screen. The key to the method is projection of the images onto a directionally reflective screen with a limited viewing angle. The images are reconstructed to give the viewer a three-dimensional image of the object displayed on the screen. The display system can present computer-graphics pictures, live pictures, and movies. Furthermore, the reverse of the optical process used in the display system can be used to record images of the subject from multiple directions. The images can then be transmitted to the display in real time. We have developed prototypes of a 3D display and a 3D human-image transmission system. Our preliminary working prototypes demonstrate new possibilities of expression and forms of communication.

  8. Compact multi-projection 3D display using a wedge prism

    NASA Astrophysics Data System (ADS)

    Park, Soon-gi; Lee, Chang-Kun; Lee, Byoungho

    2015-03-01

    We propose a compact multi-projection system based on the integral floating method with waveguide projection. Waveguide projection can reduce the projection distance by multiple folding of the optical path inside the waveguide. The proposed system is composed of a wedge prism, which is used as the waveguide, multiple projection units, and an anisotropic screen made of a floating lens combined with a vertical diffuser. As the projected image propagates through the wedge prism, it is reflected at the prism surfaces by total internal reflection, and the final view image is created by the floating lens at the viewpoints. The position of each viewpoint is determined by the lens equation, and the viewpoint interval is calculated from the magnification of the collimating lens and the interval of the projection units. We believe that the proposed method can be useful for implementing a large-scale autostereoscopic 3D system with high-quality 3D images using projection optics. In addition, the reduced volume of the system will alleviate restrictions on installation conditions and will widen the applications of multi-projection 3D displays.
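
    The viewpoint geometry described above can be sketched with the thin-lens equation; the focal length, object distance and projector interval below are illustrative assumptions, not values from the paper:

        def viewpoint_geometry(f_mm, d_obj_mm, proj_interval_mm):
            # Thin-lens equation 1/f = 1/d_obj + 1/d_img gives the viewpoint-plane distance.
            d_img = 1.0 / (1.0 / f_mm - 1.0 / d_obj_mm)
            m = d_img / d_obj_mm                       # lateral magnification of the lens
            return d_img, m * proj_interval_mm         # viewpoint distance, viewpoint interval

        # Assumed values: f = 200 mm, projected images 250 mm from the lens, 10 mm projector pitch
        print(viewpoint_geometry(200.0, 250.0, 10.0))  # -> (1000.0 mm, 40.0 mm)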

  9. Generation of flat viewing zone in DFVZ autostereoscopic multiview 3D display by weighting factor

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ky-Hyuk

    2013-05-01

    A new method is introduced to reduce three crosstalk problems and the brightness variation of the 3D image by means of the dynamic fusion of viewing zones (DFVZ) using a weighting factor. The new method effectively generates a flat viewing zone at the center of the viewing zone. The new type of autostereoscopic 3D display gives less brightness variation of the 3D image when the observer moves.

  10. 3-D display and transmission technologies for telemedicine applications: a review.

    PubMed

    Liu, Qiang; Sclabassi, Robert J; Favalora, Gregg E; Sun, Mingui

    2008-03-01

    Three-dimensional (3-D) visualization technologies have been widely commercialized. These technologies have great potential in a number of telemedicine applications, such as teleconsultation, telesurgery, and remote patient monitoring. This work presents an overview of the state-of-the-art 3-D display devices and related 3-D image/video transmission technologies with the goal of enhancing their utilization in medical applications.

  11. Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher.

    PubMed

    Wang, Qiong-Hua; Ji, Chao-Chao; Li, Lei; Deng, Huan

    2016-01-11

    In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented by the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view II 3D display is developed, and it shows good performance.

  12. Optimal 3D Viewing with Adaptive Stereo Displays for Advanced Telemanipulation

    NASA Technical Reports Server (NTRS)

    Lee, S.; Lakshmanan, S.; Ro, S.; Park, J.; Lee, C.

    1996-01-01

    A method of optimal 3D viewing based on adaptive displays of stereo images is presented for advanced telemanipulation. The method provides the viewer with the capability of accurately observing a virtual 3D object or local scene of his/her choice with minimum distortion.

  13. Comparison of 2D and 3D Displays and Sensor Fusion for Threat Detection, Surveillance, and Telepresence

    DTIC Science & Technology

    2003-05-19

    Comparison of 2D and 3D displays and sensor fusion for threat detection, surveillance, and telepresence. T. Meitzler, Ph.D., D. Bednarz, Ph.D., K... camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D... technologies that take advantage of 3D and sensor fusion will be discussed. 1. INTRODUCTION: Computer-driven interactive 3D imaging has made

  14. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low-visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. The paper describes an experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  15. Front and rear projection autostereoscopic 3D displays based on lenticular sheets

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Zang, Shang-Fei; Qi, Lin

    2015-03-01

    A front-projection autostereoscopic display is proposed. The display is composed of eight projectors and a 3D-image-guided screen having a lenticular sheet and a retro-reflective diffusion screen. Based on optical multiplexing and de-multiplexing, the optical functions of the 3D-image-guided screen are parallax-image interlacing and view separation, which make it capable of reconstructing 3D images without quality degradation from the front direction. The operating principle, the optical design calculation equations and the correction method for the parallax images are given. A prototype of the front-projection autostereoscopic display is developed, which enhances brightness and 3D perception, and improves space efficiency. The performance of this prototype is evaluated by measuring the luminance and crosstalk distribution along the horizontal direction at the optimum viewing distance. We also propose a rear-projection autostereoscopic display. The display consists of eight projectors, a projection screen, and two lenticular sheets. The operating principle and calculation equations are described in detail, and the parallax images are corrected by means of homography. A prototype of the rear-projection autostereoscopic display is developed. The normalized luminance distributions of the viewing zones obtained from measurement are given. The results agree well with the designed values. The prototype presents high-resolution, high-brightness 3D images. The research has potential applications in commercial entertainment and movies for realistic 3D perception.
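
    The homography-based correction of parallax images mentioned above can be sketched as follows; the point correspondences, file names and warp direction are illustrative assumptions, not the paper's calibration procedure:

        import cv2
        import numpy as np

        # Corners of a projected test pattern and the corresponding corners observed
        # on the screen by a calibration camera (illustrative coordinates only).
        src_pts = np.float32([[0, 0], [1919, 0], [1919, 1079], [0, 1079]])
        dst_pts = np.float32([[12, 8], [1900, 15], [1895, 1070], [20, 1065]])

        H, _ = cv2.findHomography(src_pts, dst_pts)          # 3x3 homography matrix
        parallax = cv2.imread("parallax_view_03.png")        # hypothetical parallax image
        # Pre-warp with the inverse homography so the projected result lands correctly.
        corrected = cv2.warpPerspective(parallax, np.linalg.inv(H), (1920, 1080))
        cv2.imwrite("parallax_view_03_corrected.png", corrected)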

  16. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has now become a widely familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various ways to display 3D images; we focus on a method based on ray reproduction. This method needs many viewpoint images to achieve full parallax, because a different viewpoint image is displayed depending on the viewpoint. We propose to reduce wasted rays by limiting the projector's rays to the vicinity of the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array whose elements have different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the horizontal trajectory of the rays, as well as the switching of viewpoints and the convergence performance of the rays in the vertical direction. We therefore confirmed that full parallax can be realized with the proposed method.

  17. 3D imaging system for biometric applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

    2010-04-01

    There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens-of-microns range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the need for a separated second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper will describe methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical methods considered, variations on these methods, and present experimental data obtained with the approach.

  18. Recent research results in stereo 3-D pictorial displays at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.

    1990-01-01

    Recent results from a NASA-Langley program which addressed stereo 3D pictorial displays from a comprehensive standpoint are reviewed. The program dealt with human factors issues and display technology aspects, as well as flight display applications. The human factors findings include addressing a fundamental issue challenging the application of stereoscopic displays in head-down flight applications, with the determination that stereoacuity is unaffected by the short-term use of stereo 3D displays. While stereoacuity has been a traditional measurement of depth perception abilities, it is a measure of relative depth, rather than actual depth (absolute depth). Therefore, depth perception effects based on size and distance judgments and long-term stereo exposure remain issues to be investigated. The applications of stereo 3D to pictorial flight displays within the program have repeatedly demonstrated increases in pilot situational awareness and task performance improvements. Moreover, these improvements have been obtained within the constraints of the limited viewing volume available with conventional stereo displays. A number of stereo 3D pictorial display applications are described, including recovery from flight-path offset, helicopter hover, and emulated helmet-mounted display.

  19. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light propagation from the display panel pixels through the PB slits to the viewing zone is numerically simulated. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view-image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 offers maximized brightness of the view images, while parameters corresponding to Fresnel numbers of 0.4∼0.5 offer minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
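
    For reference, the Fresnel number used as the design parameter above is N_F = a^2 / (lambda * L), with a the slit half-aperture, lambda the wavelength and L the relevant propagation distance. Which distance enters (e.g. the panel-to-barrier gap) and the numbers below are assumptions for illustration only:

        def fresnel_number(half_aperture_m, wavelength_m, distance_m):
            # N_F = a^2 / (lambda * L)
            return half_aperture_m ** 2 / (wavelength_m * distance_m)

        # Assumed example: 30-um slit half-aperture, 550-nm light, 3-mm panel-to-barrier gap
        print(fresnel_number(30e-6, 550e-9, 3e-3))   # ~0.55, inside the 0.4-0.7 design window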

  20. Multivalent 3D Display of Glycopolymer Chains for Enhanced Lectin Interaction.

    PubMed

    Lin, Kenneth; Kasko, Andrea M

    2015-08-19

    Synthetic glycoprotein conjugates were synthesized through the polymerization of glycomonomers (mannose and/or galactose acrylate) directly from a protein macroinitiator. This design combines the multivalency of polymer structures with 3D display of saccharides randomly arranged around a central protein structure. The conjugates were tested for their interaction with mannose binding lectin (MBL), a key protein of immune complement. Increasing mannose number (controlled through polymer chain length) and density (controlled through comonomer feed ratio of mannose versus galactose) result in greater interaction with MBL. Most significantly, mannose glycopolymers displayed in a multivalent and 3D configuration from the protein exhibit dramatically enhanced interaction with MBL compared to linear glycopolymer chains with similar total valency but lacking 3D display. These findings demonstrate the importance of the 3D presentation of ligand structures for designing biomimetic materials.

  1. 3D Multifunctional Ablative Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  2. The optimizations of CGH generation algorithms based on multiple GPUs for 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Yang, Dan; Liu, Juan; Zhang, Yingxi; Li, Xin; Wang, Yongtian

    2016-10-01

    Holographic display has been considered a promising display technology. Currently, the low-speed generation of holograms containing large amounts of holographic data is one of the crucial bottlenecks for three-dimensional (3D) dynamic holographic display. To solve this problem, an acceleration method on a multi-GPU computation platform is presented, based on the look-up table point-source method. Computer-generated hologram (CGH) acquisition is sped up by offline file loading and inline calculation optimization, where a phase-only CGH with gigabytes of data is encoded to record an object with 10 MB of sampling data. Both numerical simulation and optical experiment demonstrate that CGHs with 1920×1080 resolution generated by the proposed method can be applied successfully to the reconstruction of 3D objects with high quality. It is believed that CGHs with huge amounts of data can be generated by the proposed method at high speed for 3D dynamic holographic display in the near future.
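
    A minimal single-CPU NumPy sketch of the look-up-table point-source idea that the acceleration builds on (hologram size, wavelength, pixel pitch and the object points are assumptions; the paper's GPU off-loading and file-handling optimizations are not shown):

        import numpy as np

        wl, pitch, N = 532e-9, 8e-6, 1024                   # assumed wavelength, pixel pitch, hologram size
        x = (np.arange(N) - N // 2) * pitch
        X, Y = np.meshgrid(x, x)

        def zone_pattern(z):
            # Paraxial point-source fringe (Fresnel zone pattern) for depth z
            return np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wl * z))

        # Object points as (column shift, row shift, depth); one LUT entry per depth plane
        points = [(-50, 30, 0.30), (80, -10, 0.30), (0, 0, 0.35)]
        lut = {z: zone_pattern(z) for z in {p[2] for p in points}}

        H = np.zeros((N, N), dtype=complex)
        for dx, dy, z in points:
            H += np.roll(lut[z], shift=(dy, dx), axis=(0, 1))   # shift the precomputed fringe to the point

        phase_cgh = np.angle(H)                             # phase-only CGH ready to load onto an SLM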

  3. 3D measurement system based on computer-generated gratings

    NASA Astrophysics Data System (ADS)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to obtain the 3D profile of complex objects. The measurement principle is based on triangulation with digital fringe projection, and the fringes are generated entirely by computer. Thus four computer-generated fringe patterns form the data source for phase-shifting 3D profilometry. The hardware of the system includes a computer, a video camera, a projector, an image grabber, and a VGA board with two ports (one port linked to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstruction module and a 3D display module. A software-based method for synchronizing grating projection and image capture is proposed. For the nonlinear error of the captured fringes, a compensation method based on pixel-to-pixel gray-level correction is introduced. At the same time, least-squares phase unwrapping is used to solve the phase reconstruction problem, using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as the weight. The system adopts an algorithm from a Matlab toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time is 3~5 s per measurement.
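
    The phase-reconstruction step can be illustrated with the standard four-step phase-shifting formula (a sketch only; the paper's gray-level compensation and LMAPDV-weighted least-squares unwrapping are not reproduced), assuming the four fringe images are shifted by pi/2 each:

        import numpy as np

        def wrapped_phase(I1, I2, I3, I4):
            # Standard four-step formula for fringes shifted by pi/2: phi = atan2(I4 - I2, I1 - I3)
            return np.arctan2(I4 - I2, I1 - I3)

        # Synthetic self-check: a known phase map is recovered up to 2*pi wrapping.
        phi = np.linspace(0, 6 * np.pi, 480)[None, :] * np.ones((360, 1))
        imgs = [0.5 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
        print(np.allclose(np.angle(np.exp(1j * (wrapped_phase(*imgs) - phi))), 0, atol=1e-6))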

  4. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.

  5. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

    We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, the standards should be based not only on the mechanisms of 3D displays, but also on human factors for stereopsis. However, we think that there is no common understanding of what a 3D display should be, and that this situation makes developing the standards difficult. In this paper, to understand the mechanism and human factors, we focus on the double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, we consider that whether the double image is unwanted or not depends on the situation and that there are some allowable double images. We tried to classify double images into unwanted and allowable in terms of the display mechanism and visual ergonomics for stereopsis. The issues associated with the double image are closely related to the performance characteristics of the autostereoscopic display. We also propose performance characteristics, measurement and analysis methods to represent interocular crosstalk and motion parallax.

  6. Synthesis and display of dynamic holographic 3D scenes with real-world objects.

    PubMed

    Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea; Näsänen, Risto; Naughton, Thomas J; Ferraro, Pietro

    2010-04-26

    A 3D scene is synthesized by combining multiple optically recorded digital holograms of different objects. The novel idea consists of compositing moving 3D objects into a dynamic 3D scene using a process analogous to stop-motion video. However, in this case the movie has the exciting attribute that it can be displayed and observed in 3D. We show that 3D dynamic scenes can be projected as an alternative to the complicated and heavy computations needed to generate realistic-looking computer-generated holograms. The key tool for creating the dynamic action is based on a new concept consisting of a spatial, adaptive transformation of digital holograms of real-world objects, allowing full control in the manipulation of the object's position and size in a 3D volume with very high depth of focus. A pilot experiment to evaluate how viewers perceive depth in a conventional single-view display of the dynamic 3D scene has been performed.

  7. A time-sequential autostereoscopic 3D display using a vertical line dithering for utilizing the side lobes

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jin; Park, Minyoung

    2014-11-01

    In spite of the development of various autostereoscopic three-dimensional (3D) technologies, the inferior resolution of the realized 3D image is a severe problem that must be resolved. For that purpose, time-sequential 3D displays have been developed to provide 3D images with higher resolution and have attracted much attention. Among them, a method using a directional backlight unit (DBLU) is effective when adopted in liquid crystal displays (LCDs) with higher frame rates such as 120 Hz. However, in the conventional time-sequential system, the insufficient frame rate results in a flicker problem, i.e., a noticeable fluctuation of image brightness. A dot dithering method can reduce that problem, but it was impossible to observe the 3D image in the side lobes because the image data and the directivity of the light rays from the DBLU do not match there. In this paper, we propose a new vertical line dithering method to expand the area for 3D image observation by utilizing the side lobes. Since the side lobes are located to the left and right of the center lobe, the image data on the LCD panel and the directivity of the light rays from the DBLU need to be arranged so that they have continuity in the horizontal direction. Although the 3D images observed in the side lobes are flipped, the utilization of the side lobes can increase the number of observers in the horizontal direction.

  8. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  9. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  10. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area, Assateague Island National Seashore (AINS), is located along a 37-mile stretch of Assateague Island National Seashore on the Eastern Shore, VA. DEM data sets from 1996 through 2000 for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and a four-year interval (1996-2000), are created. The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms has been developed using ArcView Avenue. The system was developed as five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate that further study and comparison of the complex morphological changes, natural or human-induced, occurring on barrier islands are required.
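
    The cell-by-cell erosion/deposition computation described above can be sketched as a simple DEM difference; the grids, cell size and random values below are placeholders for the LIDAR-derived DEMs:

        import numpy as np

        def erosion_deposition(dem_t0, dem_t1, cell_size_m=1.0):
            # Elevation change per cell; positive = deposition, negative = erosion.
            dz = dem_t1 - dem_t0
            cell_area = cell_size_m ** 2
            deposition_m3 = dz[dz > 0].sum() * cell_area
            erosion_m3 = -dz[dz < 0].sum() * cell_area
            return erosion_m3, deposition_m3

        # Placeholder grids standing in for the 1996 and 2000 DEMs.
        dem_1996 = np.random.default_rng(0).normal(2.0, 0.3, (100, 100))
        dem_2000 = dem_1996 + np.random.default_rng(1).normal(0.0, 0.1, (100, 100))
        print(erosion_deposition(dem_1996, dem_2000, cell_size_m=2.0))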

  11. Characteristics measurement methodology of the large-size autostereoscopic 3D LED display

    NASA Astrophysics Data System (ADS)

    An, Pengli; Su, Ping; Zhang, Changjie; Cao, Cong; Ma, Jianshe; Cao, Liangcai; Jin, Guofan

    2014-11-01

    Large-size autostereoscopic 3D LED displays are commonly used outdoors or in large indoor spaces, and they have long viewing distances and relatively low light intensity at the viewing distance. The instruments used to measure the characteristics (crosstalk, inconsistency, chromatic dispersion, etc.) of such displays should have a long working distance and high sensitivity. In this paper, we propose a measurement methodology based on a distribution photometer with a working distance of 5.76 m and an illumination sensitivity of 0.001 mlx. A display panel holder is fabricated and attached to the turning stage of the distribution photometer. Specific test images are loaded on the display separately, and the luminance data at a distance of 5.76 m from the panel are measured. The data are then transformed into the light intensity at the optimum viewing distance. According to the definitions of the characteristics of 3D displays, the crosstalk, inconsistency and chromatic dispersion can be calculated. The test results and an analysis of the characteristics of an autostereoscopic 3D LED display are presented.
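
    A sketch of the two calculations mentioned above, under stated assumptions: the paper's exact distance transform is not given in the abstract, so an inverse-square rescaling is used here for illustration, and crosstalk is computed with a common leakage-over-signal definition:

        def rescale_illuminance(E_measured_lx, d_measured_m=5.76, d_target_m=20.0):
            # Inverse-square rescaling from the instrument distance to the viewing distance (assumption).
            return E_measured_lx * (d_measured_m / d_target_m) ** 2

        def crosstalk(L_leak, L_signal, L_black=0.0):
            # Common definition: unintended-view leakage over intended-view signal, black level subtracted.
            return (L_leak - L_black) / (L_signal - L_black)

        print(rescale_illuminance(0.012))              # illustrative 0.012 lx reading at 5.76 m
        print(crosstalk(L_leak=1.5, L_signal=60.0))    # -> 0.025, i.e. 2.5 % crosstalk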

  12. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  13. Tunable nonuniform sampling method for fast calculation and intensity modulation in 3D dynamic holographic display.

    PubMed

    Zhang, Zhao; Liu, Juan; Jia, Jia; Li, Xin; Han, Jian; Hu, Bin; Wang, Yongtian

    2013-08-01

    The heavy computational load of computer-generated holograms (CGHs) and the imprecise intensity modulation of 3D images are crucial problems in dynamic holographic display. A nonuniform sampling method is proposed to speed up CGH generation and to precisely modulate the reconstructed intensities of phase-only CGHs. The proposed method can properly eliminate redundant information, and a 70% reduction in storage can be reached when it is combined with the novel look-up table method. Multi-grayscale modulation of the reconstructed 3D images can be achieved successfully. Numerical simulations and optical experiments are performed, and both are in good agreement. It is believed that the proposed method can be used in 3D dynamic holographic display.

  14. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    Computer-generated holograms (CGHs) should be obtained with high accuracy and high speed for 3D holographic display, and most research focuses on speed. In this paper, a simple and effective computation method for CGHs is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage than the split look-up table method and the compressed look-up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGHs of 3D objects for real-time 3D holographic display, where a huge amount of data is required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.

  15. Optimal projector configuration design for 300-Mpixel multi-projection 3D display.

    PubMed

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Choi, Seo Young; Park, Du-Sik; Kim, Chang Yeong

    2013-11-04

    To achieve an immersive, natural 3D experience on a large screen, a 300-Mpixel multi-projection 3D display with a 100-inch screen and a 40° viewing angle has been developed. To increase the number of rays emanating from each pixel to 300 in the horizontal direction, three hundred projectors were used. Because the projector configuration is an important issue in generating a high-quality 3D image, the luminance characteristics were analyzed and the design was optimized to minimize the variation in brightness of the projected images. The rows of the projector arrays were changed repeatedly according to a predetermined row interval, and the projectors were arranged at an equi-angular pitch toward a constant central point. As a result, we acquired very smooth motion-parallax images without discontinuity. There is no limit on the viewing distance, so natural 3D images can be viewed from 2 m to over 20 m.
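
    The equi-angular arrangement can be sketched as follows; only the 40° fan and the 300-projector count come from the abstract, while the arc radius and the aiming point at the origin are assumptions:

        import math

        n_proj, fan_deg, radius_m = 300, 40.0, 6.0      # projector count and fan from the abstract; radius assumed
        pitch_deg = fan_deg / n_proj                    # ~0.133 degrees between adjacent projection directions

        positions = []
        for i in range(n_proj):
            a = math.radians(-fan_deg / 2 + (i + 0.5) * pitch_deg)
            positions.append((radius_m * math.sin(a), radius_m * math.cos(a)))  # (x, z) on the arc, aimed at the origin

        print(pitch_deg, positions[0], positions[-1])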

  16. Three-dimensional hologram display system

    NASA Technical Reports Server (NTRS)

    Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)

    2009-01-01

    The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.

  17. Field lens multiplexing in holographic 3D displays by using Bragg diffraction based volume gratings

    NASA Astrophysics Data System (ADS)

    Fütterer, G.

    2016-11-01

    Applications that can profit from holographic 3D displays include the visualization of 3D data, computer-integrated manufacturing, 3D teleconferencing and mobile infotainment. However, one problem of holographic 3D displays, which are e.g. based on space-bandwidth-limited reconstruction of wave segments, is realizing a small form factor. Another problem is providing a reasonably large volume for user placement, that is, an acceptable freedom of movement. Both problems should be solved without decreasing the image quality of the virtual and real object points generated within the 3D display volume. A diffractive optical design using thick hologram gratings, which can be referred to as Bragg diffraction based volume gratings, can provide a small form factor and a high-definition, natural viewing experience of 3D objects. A large collimated wave can be provided by an anamorphic backlight unit. The complex-valued spatial light modulator adds local curvatures to the wave field with which it is illuminated. The modulated wave field is focused onto the user plane by a volume-grating-based field lens. Active liquid crystal gratings provide 1D fine tracking of approximately +/- 8°. Diffractive multiplexing has to be implemented for each color and for a set of focus functions providing coarse tracking. The boundary conditions of the diffractive multiplexing are explained with regard to the display layout and by using coupled wave theory (CWT). Aspects of diffractive crosstalk and its suppression are discussed, including longitudinally apodized volume gratings.

  18. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing computer-generated holograms of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in double-circle and four-circle shapes to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform, using conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference beam from a laser source, the amplitude and phase information included in the CGH is reconstructed through the diffraction of the light modulated by the LCD.

  19. Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display.

    PubMed

    Zhong, Qing; Peng, Yifan; Li, Haifeng; Su, Chen; Shen, Weidong; Liu, Xu

    2013-07-01

    Both multiview and light-field reconstruction are proposed for a multiple-projector 3D display system. To compare the performance of the reconstruction algorithms in the same system, an optimized multiview reconstruction algorithm with sub-view-zones (SVZs) is proposed. The algorithm divides the conventional view zones of a multiview display into several SVZs and allocates more view images. The optimized reconstruction algorithm unifies the conventional multiview reconstruction and light-field reconstruction algorithms, which makes it possible to show how performance changes as multiview reconstruction is shifted toward light-field reconstruction. A prototype consisting of 60 projectors with an arc diffuser as its screen is constructed to verify the algorithms. Comparison of different SVZ configurations shows that light-field reconstruction provides large-scale 3D images with the smoothest motion parallax; thus it may provide better overall performance for large-scale 360° display than multiview reconstruction.

  20. Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.

    PubMed

    Saveljev, Vladimir; Kim, Sung-Kyu

    2015-10-05

    The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. The two functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used to minimize the moiré effect in visual displays, especially in autostereoscopic 3D displays.
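
    For reference, Thomae's function mentioned above is conventionally defined as follows (the paper's exact angular-probability function is not reproduced here); intuitively, its larger values at simple rationals correspond to orientations where the two periodic structures are nearly commensurate:

        f(x) =
        \begin{cases}
          1/q, & \text{if } x = p/q \text{ with } p \in \mathbb{Z},\ q \in \mathbb{N},\ \gcd(p, q) = 1, \\
          0,   & \text{if } x \text{ is irrational.}
        \end{cases}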

  1. Depth cues in human visual perception and their realization in 3D displays

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Häussler, Ralf; Fütterer, Gerald; Leister, Norbert

    2010-04-01

    Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been technologically demonstrated and refined, among them stereoscopic, multi-view, integral imaging, volumetric, and holographic types. Most current approaches utilize the conventional stereoscopic principle. But they all suffer from an inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eye. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception relevant to both the image quality and the visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare the visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.

  2. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  3. NGT-3D: a simple nematode cultivation system to study Caenorhabditis elegans biology in 3D

    PubMed Central

    Lee, Tong Young; Yoon, Kyoung-hye; Lee, Jin Il

    2016-01-01

    ABSTRACT The nematode Caenorhabditis elegans is one of the premier experimental model organisms today. In the laboratory, they display characteristic development, fertility, and behaviors in a two dimensional habitat. In nature, however, C. elegans is found in three dimensional environments such as rotting fruit. To investigate the biology of C. elegans in a 3D controlled environment we designed a nematode cultivation habitat which we term the nematode growth tube or NGT-3D. NGT-3D allows for the growth of both nematodes and the bacteria they consume. Worms show comparable rates of growth, reproduction and lifespan when bacterial colonies in the 3D matrix are abundant. However, when bacteria are sparse, growth and brood size fail to reach levels observed in standard 2D plates. Using NGT-3D we observe drastic deficits in fertility in a sensory mutant in 3D compared to 2D, and this defect was likely due to an inability to locate bacteria. Overall, NGT-3D will sharpen our understanding of nematode biology and allow scientists to investigate questions of nematode ecology and evolutionary fitness in the laboratory. PMID:26962047

  4. NGT-3D: a simple nematode cultivation system to study Caenorhabditis elegans biology in 3D.

    PubMed

    Lee, Tong Young; Yoon, Kyoung-Hye; Lee, Jin Il

    2016-04-15

    The nematode Caenorhabditis elegans is one of the premier experimental model organisms today. In the laboratory, they display characteristic development, fertility, and behaviors in a two dimensional habitat. In nature, however, C. elegans is found in three dimensional environments such as rotting fruit. To investigate the biology of C. elegans in a 3D controlled environment we designed a nematode cultivation habitat which we term the nematode growth tube or NGT-3D. NGT-3D allows for the growth of both nematodes and the bacteria they consume. Worms show comparable rates of growth, reproduction and lifespan when bacterial colonies in the 3D matrix are abundant. However, when bacteria are sparse, growth and brood size fail to reach levels observed in standard 2D plates. Using NGT-3D we observe drastic deficits in fertility in a sensory mutant in 3D compared to 2D, and this defect was likely due to an inability to locate bacteria. Overall, NGT-3D will sharpen our understanding of nematode biology and allow scientists to investigate questions of nematode ecology and evolutionary fitness in the laboratory.

  5. Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Park, Inkyu; Kim, Sung-Kyu

    2014-09-22

    We present methodologies for determining the optimum viewing distance (OVD) for a multi-view auto-stereoscopic 3D display system with a parallax barrier. The OVD can be efficiently determined as the viewing distance where statistical deviation of centers of quasi-linear distributions of illuminance at central viewing zones is minimized using local areas of a display panel. This method can offer reduced computation time because it does not use the entire area of the display panel during a simulation, but still secures considerable accuracy. The method is verified in experiments, showing its applicability for efficient optical characterization.
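
    The selection rule described above can be sketched as follows; the candidate distances and center-offset arrays are placeholders for the simulated illuminance data, which is not reproduced here:

        import numpy as np

        def optimum_viewing_distance(center_offsets_by_distance):
            # Pick the candidate distance whose viewing-zone center offsets have the smallest spread.
            return min(center_offsets_by_distance,
                       key=lambda d: np.std(center_offsets_by_distance[d]))

        # Hypothetical offsets (mm) of zone centers from their nominal positions,
        # obtained from a few local areas of the panel at three candidate distances.
        candidates = {
            2.8: np.array([1.9, -0.7, 2.4, -1.1]),
            3.0: np.array([0.2, -0.1, 0.3, -0.2]),
            3.2: np.array([-1.5, 2.1, -2.0, 1.7]),
        }
        print(optimum_viewing_distance(candidates))   # -> 3.0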

  6. Does visual fatigue from 3D displays affect autonomic regulation and heart rhythm?

    PubMed

    Park, S; Won, M J; Mun, S; Lee, E C; Whang, M

    2014-02-15

    Most investigations into the negative effects of viewing stereoscopic 3D content on human health have addressed 3D visual fatigue and visually induced motion sickness (VIMS). Very few, however, have looked into changes in autonomic balance and heart rhythm, which are homeostatic factors that ought to be taken into consideration when assessing the overall impact of 3D video viewing on human health. In this study, 30 participants were randomly assigned to two groups: one group watching a 2D video (2D-group) and the other watching a 3D video (3D-group). The subjects in the 3D-group showed significantly increased heart rates (HR), indicating arousal, and an increased VLF/HF (Very Low Frequency/High Frequency) ratio (a measure of autonomic balance) compared to those in the 2D-group, indicating that autonomic balance was not stable in the 3D-group. Additionally, a more disordered heart rhythm pattern and an increasing heart rate (as determined by the R-peak to R-peak (RR) interval) were observed among subjects in the 3D-group compared to subjects in the 2D-group, further indicating that 3D viewing induces lasting activation of the sympathetic nervous system and interrupts autonomic balance.
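
    As a rough illustration of the HR and VLF/HF measures mentioned above (the paper's exact processing pipeline is not specified here; the band limits follow common HRV conventions and the RR series is synthetic):

        import numpy as np
        from scipy.signal import welch

        def hr_and_vlf_hf(rr_s, fs=4.0):
            t = np.cumsum(rr_s)                                  # beat times (s)
            grid = np.arange(t[0], t[-1], 1.0 / fs)
            rr_even = np.interp(grid, t, rr_s)                   # evenly resampled RR tachogram
            f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
            df = f[1] - f[0]
            band = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum() * df
            return 60.0 / rr_s.mean(), band(0.0033, 0.04) / band(0.15, 0.40)   # HR (bpm), VLF/HF

        rr = 0.8 + 0.05 * np.random.default_rng(2).standard_normal(600)        # synthetic RR intervals (s)
        print(hr_and_vlf_hf(rr))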

  7. Stereoscopic contents authoring system for 3D DMB data service

    NASA Astrophysics Data System (ADS)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of the 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are displayed stereoscopically on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system which provides convenient and straightforward content creation and editing functionalities. For the creation of stereoscopic contents, we focus mainly on two methods: CG (Computer Graphics) based creation and real-image based creation. In the CG based creation scenario, where CG data generated by a conventional MAYA or 3DS MAX tool is rendered to produce stereoscopic images by applying suitable disparity and camera parameters, we use the X-file for direct conversion to stereoscopic objects, so-called 3D DMB objects. In the case of real-image based creation, the chroma-key method is applied to real video sequences to acquire alpha-mapped images, which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes a timeline editor for both stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented a content verification module to verify and modify the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic content creation for mobile 3D data services, especially targeted at T-DMB, with capabilities for CG and real-image based content creation, timeline editing and content verification.

  8. Practical resolution requirements of measurement instruments for precise characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Collomb-Patton, Véronique; Bignon, Thibault

    2014-03-01

    Different ways to evaluate the optical performance of autostereoscopic 3D displays are reviewed. Special attention is paid to crosstalk measurements, which can be performed by measuring either the precise angular emission at one or a few locations on the display surface, or the full display-surface emission from very specific locations in front of the display. Using measurements made in both ways with different instruments on different autostereoscopic displays, we show that measurement instruments need to match the resolution of the human eye to obtain reliable results in either case. Practical requirements in terms of angular resolution for viewing-angle measurement instruments and in terms of spatial resolution for imaging instruments are derived and verified on practical examples.

  9. Subjective evaluation of a 3D videoconferencing system

    NASA Astrophysics Data System (ADS)

    Rizek, Hadi; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Johanson, Mathias

    2014-03-01

    A shortcoming of traditional videoconferencing systems is that they present the user with a flat, two-dimensional image of the remote participants. Recent advances in autostereoscopic display technology now make it possible to develop videoconferencing systems supporting true binocular depth perception. In this paper, we present a subjective evaluation of a prototype multiview autostereoscopic videoconferencing system and suggest a number of possible improvements based on the results. Whereas methods for subjective evaluation of traditional 2D videoconferencing systems are well established, the introduction of 3D requires an extension of the test procedures to assess the quality of depth perception. For this purpose, two depth-based test tasks have been designed and experiments have been conducted with test subjects comparing the 3D system to a conventional 2D videoconferencing system. The outcome of the experiments shows that the perception of depth is significantly improved in the 3D system, but the overall quality of experience is higher in the 2D system.

  10. 3D detection of obstacle distribution in walking guide system for the blind

    NASA Astrophysics Data System (ADS)

    Yoon, Myoung-Jong; Yu, Kee-Ho

    2007-12-01

    In this paper, the concept of a walking guide system with a tactile display is introduced, and experiments on 3-D obstacle detection and tactile perception are carried out and analyzed. An algorithm for 3-D obstacle detection and a method of mapping the generated obstacle map onto the tactile display device of the walking guide system are proposed. An experiment on the 3-D detection of obstacle positions using ultrasonic sensors is performed and evaluated. Some design guidelines for a tactile display device that can display the obstacle distribution are discussed.

  11. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For several years now, a renaissance of three-dimensional cinema has been observable. Even though stereoscopy has been quite popular during the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success, and the same happened again in the 1950s and 1980s. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "hype cycle". The everyday experience of technological evolution has conditioned consumers. The expectation that "any technical improvement will preserve all previous properties" cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: the presentation of an additional dimension causes concessions in relevant characteristics (i.e. resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (i.e. subjective discomfort, eye strain, spatial disorientation, feeling of nausea). It will be verified that the 3D apparatus (3D glasses or 3D display) is also the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  12. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  13. System status display information

    NASA Technical Reports Server (NTRS)

    Summers, L. G.; Erickson, J. B.

    1984-01-01

    The System Status Display is an electronic display system which provides the flight crew with enhanced capabilities for monitoring and managing aircraft systems. Guidelines for the design of the electronic system displays were established. The technical approach involved the application of a system engineering approach to the design of candidate displays and the evaluation of alternative concepts by part-task simulation. The system engineering and selection of candidate displays are covered.

  14. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human error, and the quality of the documentation depends on the qualification of the archaeologist on site. The use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a computer-aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  15. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B.

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  16. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization-based stereoscopic 3D displays using multispectral Fourier optics viewing angle measurements. Full polarization analysis of the light emitted by the display over the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase-shift filter within the display structure. In addition, strong spectral dependence of the ellipticity and degree of polarization is observed. These features come from the strong spectral dependence of the phase-shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two filters of the glasses, the resulting luminance across each filter is computed for the left- and right-eye views. Monocular contrast for each eye and binocular contrast are computed in the observer space, and Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as for auto-stereoscopic 3D displays, allowing direct comparison of performance.
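
    For reference, one widely used convention (an assumption here, not necessarily the exact metric of this paper) defines the monocular 3D contrast for the left eye at viewing direction (θ, φ) as the ratio of the luminance reaching that eye from its intended view to the luminance leaking through from the opposite view, with crosstalk as its inverse:

```latex
% Hedged, commonly used definitions; W/K denote full-white/full-black test views.
\[
C^{3D}_{\mathrm{L}}(\theta,\phi)
  = \frac{L_{\mathrm{L}}\!\left(W_{\mathrm{L}},K_{\mathrm{R}};\,\theta,\phi\right)}
         {L_{\mathrm{L}}\!\left(K_{\mathrm{L}},W_{\mathrm{R}};\,\theta,\phi\right)},
\qquad
\mathrm{XT}_{\mathrm{L}}(\theta,\phi) = \frac{1}{C^{3D}_{\mathrm{L}}(\theta,\phi)}
\]
```

    The right-eye quantities are defined symmetrically, and evaluating them over the measured viewing cone yields qualified viewing spaces of the kind mentioned above.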

  17. Characterizing the effects of droplines on target acquisition performance on a 3-D perspective display

    NASA Technical Reports Server (NTRS)

    Liao, Min-Ju; Johnson, Walter W.

    2004-01-01

    The present study investigated the effects of droplines on target acquisition performance on a 3-D perspective display in which participants were required to move a cursor into a target cube as quickly as possible. Participants' performance and coordination strategies were characterized using both Fitts' law and acquisition patterns of the 3 viewer-centered target display dimensions (azimuth, elevation, and range). Participants' movement trajectories were recorded and used to determine movement times for acquisitions of the entire target and of each of its display dimensions. The goodness of fit of the data to a modified Fitts function varied widely among participants, and the presence of droplines did not have observable impacts on the goodness of fit. However, droplines helped participants navigate via straighter paths and particularly benefited range dimension acquisition. A general preference for visually overlapping the target with the cursor prior to capturing the target was found. Potential applications of this research include the design of interactive 3-D perspective displays in which fast and accurate selection and manipulation of content residing at multiple ranges may be a challenge.

  18. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in the control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily motivated by affective mechanisms can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among the representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it offers high information transfer rates, users need only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  19. The azimuth projection for the display of 3-D EEG data.

    PubMed

    Wu, Dan; Yao, Dezhong

    2007-12-01

    The electroencephalogram (EEG) is a scalp record of the neural electric activity of the brain. There are many ways to display EEG data, such as on a projective plane or on a realistic head surface. In this work, one of the atlas projection methods, the azimuthal conformal projection, was tested and recommended as a new way of producing a planar EEG display. The method details are given and numerically compared with the normal projective-plane display. The results indicate that the azimuthal projection has many advantages: the transform is simple and convenient, and it retains all the information. It shows all the information in the 3-D data within a projective plane without distinct shape change. Therefore, it can help to analyze the data effectively.
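
    The sketch below illustrates the general idea of an azimuthal projection of 3-D electrode positions onto a plane, implementing both the conformal (stereographic) variant and the equidistant one for comparison. The unit-sphere head model, the choice of the vertex as the projection pole, and the scaling convention are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: azimuthal projections of 3-D scalp positions onto a plane.
# Assumes electrode coordinates lie on a unit sphere with the vertex at +z.
import numpy as np

def azimuthal_project(xyz, kind="conformal"):
    """Project unit-sphere points to 2-D.

    kind="conformal"  : stereographic projection, r = 2*tan(theta/2) (angle-preserving)
    kind="equidistant": r = theta (keeps every point at a finite radius)
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle from the vertex
    phi = np.arctan2(y, x)                     # azimuth
    r = 2.0 * np.tan(theta / 2.0) if kind == "conformal" else theta
    return np.column_stack((r * np.cos(phi), r * np.sin(phi)))

# Vertex (Cz), a temporal site and a low occipital site (toy coordinates)
pts = np.array([[0.0, 0.0, 1.0],
                [1.0, 0.0, 0.0],
                [0.0, -0.95, -0.31]])
print(azimuthal_project(pts, "conformal").round(3))
print(azimuthal_project(pts, "equidistant").round(3))
```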

  20. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    PubMed

    Hongen Liao; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

    We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without using special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate the IP/IV elemental images. The images can be viewed from any viewpoint within a referential viewing area, and the elemental images are reconstructed from the rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen that is placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images with an image depth of several meters in front of and behind the display that appear three-dimensional even when viewed from a distance.

  1. Multi-user 3D film on a time-multiplexed side-emission backlight system.

    PubMed

    Ting, Chih-Hung; Chang, Yu-Cheng; Chen, Chun-Ho; Huang, Yi-Pai; Tsai, Han-Wen

    2016-10-01

    The desirable features for a portable 3D display include displaying 2D and 3D images without resolution degradation for multiple users, a 2D/3D switchable functionality, and, in particular, a compact volume. To produce a portable 3D display with these desirable features, we propose here a multi-user 3D film combined with a side-emission backlight system that has a directional-sequential light distribution. According to the simulation and experimental results, the multi-user 3D film successfully uses an inverted trapezoid structure to separate the rays of each light source and increases the number of observers from one to three. Additionally, the specification of the inverted trapezoid structure can be determined via equations for different designated viewing positions of the side observer and for the ratio of light intensities for the central and side observers.

  2. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around the full 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray corresponds to one that passes through a point on a virtual object's surface and is oriented toward the viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  3. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties.

    PubMed

    Johnson, Paul V; Kim, Joohwan; Banks, Martin S

    2015-04-06

    Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks.
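
    A minimal sketch of the interlacing idea follows: on even frames the left view occupies even pixel rows and the right view odd rows, and on odd frames the assignment flips. The frame and row conventions here are illustrative assumptions rather than the authors' exact implementation.

```python
# Hedged sketch of spatiotemporal interlacing: row assignment alternates per frame.
import numpy as np

def spatiotemporal_interlace(left, right, frame_index):
    """Compose one displayed frame from a left/right image pair."""
    out = np.empty_like(left)
    if frame_index % 2 == 0:
        out[0::2] = left[0::2]    # even rows -> left eye
        out[1::2] = right[1::2]   # odd rows  -> right eye
    else:
        out[0::2] = right[0::2]   # assignment swaps on the next frame
        out[1::2] = left[1::2]
    return out

left = np.full((4, 6), 1.0)   # toy left image (all ones)
right = np.zeros((4, 6))      # toy right image (all zeros)
print(spatiotemporal_interlace(left, right, 0))
print(spatiotemporal_interlace(left, right, 1))
```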

  4. High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display.

    PubMed

    Chang, Yu-Cheng; Jen, Tai-Hsiang; Ting, Chih-Hung; Huang, Yi-Pai

    2014-02-10

    A 2D/3D switchable and rotatable autostereoscopic display using a high-resistance liquid-crystal (Hi-R LC) lens array is investigated in this paper. Using high-resistance layers in an LC cell, a gradient electric-field distribution can be formed, which can provide a better lens-like shape of the refractive-index distribution. The advantages of the Hi-R LC lens array are its 2D/3D switchability, rotatability (in the horizontal and vertical directions), low driving voltage (~2 volts) and fast response (~0.6 second). In addition, the Hi-R LC lens array requires only a very simple fabrication process.

  5. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties

    PubMed Central

    Johnson, Paul V.; Kim, Joohwan; Banks, Martin S.

    2015-01-01

    Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks. PMID:25968758

  6. Sound localization with head movement: implications for 3-d audio displays

    PubMed Central

    McAnally, Ken I.; Martin, Russell L.

    2014-01-01

    Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows with widths of 2, 4, 8, 16, 32, or 64° of azimuth. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy. PMID:25161605

  7. Multi-user 3D display using a head tracker and RGB laser illumination source

    NASA Astrophysics Data System (ADS)

    Surman, Phil; Sexton, Ian; Hopf, Klaus; Bates, Richard; Lee, Wing Kai; Buckley, Edward

    2007-05-01

    A glasses-free (auto-stereoscopic) 3D display that will serve several viewers who have freedom of movement over a large viewing region is described. It operates on the principle of employing head-position tracking to provide regions, referred to as exit pupils, that follow the positions of the viewers' eyes so that the appropriate left and right images are seen. A non-intrusive multi-user head tracker controls the light sources of a specially designed backlight that illuminates a direct-view LCD.

  8. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  9. Development of high-frame-rate LED panel and its applications for stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Tsutsumi, M.; Yamamoto, R.; Kajimoto, K.; Suyama, S.

    2011-03-01

    In this paper, we report the development of a high-frame-rate (HFR) LED display. Full-color images are refreshed at 480 frames per second. In order to transmit such a high-frame-rate signal via conventional 120-Hz DVI, we have introduced a spatiotemporal mapping of the image signal. The LED image-signal processor and the FPGAs in the LED modules have been reprogrammed so that four adjacent pixels in the input image are converted into four successive fields. The pitch of the LED panel is 20 mm. The developed 480-fps LED display is utilized for stereoscopic 3D display by use of a parallax barrier. The horizontal resolution of the viewed image decreases to one-half because of the parallax barrier. This degradation is critical for LED displays because their pitch is tens of times larger than that of other flat-panel displays. We have conducted experiments to improve the quality of the image viewed through the parallax barrier. The improvement is based on interpolation by afterimages. It is shown that the HFR LED display provides detailed afterimages. Furthermore, the HFR LED display has been utilized for unconscious imaging, which provides a sensation of discovering conscious visual information from unconscious images.
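
    The sketch below illustrates the spatiotemporal mapping described above: four horizontally adjacent pixels of one 120-Hz input frame are redistributed into four successive 480-fps fields. The exact ordering used in the FPGA firmware is not given in the abstract, so the sequential ordering here is an assumption.

```python
# Hedged sketch: redistribute groups of four adjacent pixels into four fields.
import numpy as np

def to_four_fields(frame):
    """Split an (H, W) frame with W divisible by 4 into four (H, W//4) fields."""
    h, w = frame.shape
    assert w % 4 == 0
    grouped = frame.reshape(h, w // 4, 4)        # groups of 4 adjacent pixels
    return [grouped[:, :, k] for k in range(4)]  # field k carries pixel k of each group

frame = np.arange(2 * 8).reshape(2, 8)           # toy 2x8 input frame
for k, field in enumerate(to_four_fields(frame)):
    print("field", k, "\n", field)
```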

  10. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken at various views. The study also focuses on the effectiveness of using different refractive-index-matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refraction-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using ART-rc algorithm with water as RI matched
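
    For orientation, the sketch below shows a plain algebraic reconstruction technique (ART/Kaczmarz) update for the linear projection model Ax = b. The refraction correction that distinguishes ART-rc, namely re-tracing the refracted raylines when building the system matrix, is omitted here, and the relaxation factor is an illustrative choice.

```python
# Hedged sketch: basic ART (Kaczmarz) iterations for A x = b; no refraction model.
import numpy as np

def art(A, b, n_iter=20, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):              # sweep over raylines
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += relax * (b[i] - ai @ x) / denom * ai
    return x

# Toy 2x2 "dose volume" probed by four rays (row sums and column sums)
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([0.0, 1.0, 2.0, 3.0])
print(art(A, A @ x_true).round(2))
```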

  11. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken at various views. The study also focuses on the effectiveness of using different refractive-index-matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refraction-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using ART-rc algorithm with water as RI matched

  12. Digital video display system

    NASA Technical Reports Server (NTRS)

    Zygielbaum, A. I.; Martin, W. L.; Engle, A.

    1973-01-01

    The system displays image data in real time on a 120,000-element raster scan with 2, 4, or 8 shades of grey. Designed for displaying planetary range-Doppler data, the system can be used for X-Y plotting, displaying alphanumerics, and providing image animation.

  13. Spatial orientation in 3-D desktop displays: using rooms for organizing information.

    PubMed

    Colle, Herbert A; Reid, Gary B

    2003-01-01

    Understanding how spatial knowledge is acquired is important for spatial navigation and for improving the design of 3-D perspective interfaces. Configural spatial knowledge of object locations inside rooms is learned rapidly and easily (Colle & Reid, 1998), possibly because rooms afford local viewing in which objects are directly viewed or, alternatively, because of their structural features. The local viewing hypothesis predicts that the layout of objects outside of rooms also should be rapidly acquired when walls are removed and rooms are sufficiently close that participants can directly view and identify objects. It was evaluated using pointing and sketch map measures of configural knowledge with and without walls by varying distance, lighting levels, and observation instructions. Although within-room spatial knowledge was uniformly good, local viewing was not sufficient for improving spatial knowledge of objects in different rooms. Implications for navigation and 3-D interface design are discussed. Actual or potential applications of this research include the design of user interfaces, especially interfaces with 3-D displays.

  14. Mitral valve analysis using a novel 3D holographic display: a feasibility study of 3D ultrasound data converted to a holographic screen.

    PubMed

    Beitnes, Jan Otto; Klæboe, Lars Gunnar; Karlsen, Jørn Skaarud; Urheim, Stig

    2015-02-01

    The aim of the present study was to test the feasibility of analyzing 3D ultrasound data on a novel holographic display. An increasing number of minimally invasive procedures for mitral valve repair require more effective visualization to improve patient safety and procedure speed. A novel 3D holographic display has been developed and may have the potential to guide interventional cardiac procedures in the near future. Forty patients with degenerative mitral valve disease were analyzed. All had complete 2D transthoracic (TTE) and transoesophageal (TEE) echocardiographic examinations. In addition, 3D TTE of the mitral valve was obtained and the recordings were converted from the echo machine to the holographic screen. Visual inspection of the mitral valve during surgery or TEE served as the gold standard. 240 segments were analyzed by 2 independent observers. A total of 53 segments were prolapsing. The majority involved P2 (31); the remainder were located at A2 (8), A3 (6), P3 (5), P1 (2) and A1 (1). The sensitivity and specificity of the 3D display were 87% and 99%, respectively, for observer I, and 85% and 97% for observer II. Accuracy and precision were 96.7% and 97.9%, respectively, for observer I and 94.3% and 88.2% for observer II; inter-observer agreement was 0.954, with Cohen's kappa of 0.86. We were able to convert 3D ultrasound data to the holographic display. Very high accuracy and precision were shown, demonstrating the feasibility of analyzing 3D echocardiograms of the mitral valve on the holographic screen.

  15. Subsampling models and anti-alias filters for 3-D automultiscopic displays.

    PubMed

    Konrad, Janusz; Agniel, Philippe

    2006-01-01

    A new type of three-dimensional (3-D) display recently introduced on the market holds great promise for the future of 3-D visualization, communication, and entertainment. This so-called automultiscopic display can deliver multiple views without glasses, thus allowing a limited "look-around" (correct motion parallax). Central to this technology is the process of multiplexing several views into a single viewable image. This multiplexing is a complex process involving irregular subsampling of the original views. If not preceded by low-pass filtering, it results in aliasing that leads to texture as well as depth distortions. In order to eliminate this aliasing, we propose to model the multiplexing process with lattices, find their parameters, and then design optimal anti-alias filters. To this effect, we use multidimensional sampling theory and basic optimization tools. We derive optimal anti-alias filters for a specific automultiscopic monitor using three models: the orthogonal lattice, the nonorthogonal lattice, and the union of shifted lattices. In the first case, the resulting separable low-pass filter offers significant aliasing reduction that is further improved by a hexagonal-passband low-pass filter for the nonorthogonal lattice model. A more accurate model is obtained using a union of shifted lattices, but due to the complex nature of the repeated spectra, practical filters designed in this case offer no additional improvement. We also describe a practical method to design finite-precision, low-complexity filters that can be implemented using modern graphics cards.
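
    As a simplified illustration of the orthogonal-lattice case above, the sketch below applies a separable low-pass filter to a view before regular subsampling. The cutoff, filter length, and the regular (rather than irregular, display-specific) subsampling pattern are assumptions; the paper derives the passbands from the actual multiplexing lattice of the monitor.

```python
# Hedged sketch: separable anti-alias filtering before subsampling a view.
import numpy as np
from scipy.signal import firwin, convolve2d

def antialias_then_subsample(view, step=3, ntaps=15):
    """Low-pass at 1/step of Nyquist (separable FIR), then keep every step-th sample."""
    h = firwin(ntaps, 1.0 / step)                    # 1-D low-pass prototype
    filtered = convolve2d(convolve2d(view, h[None, :], mode="same"),
                          h[:, None], mode="same")   # filter rows, then columns
    return filtered[::step, ::step]

view = np.random.default_rng(0).random((30, 30))     # toy single view
print(antialias_then_subsample(view).shape)
```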

  16. Residual lens effects in 2D mode of auto-stereoscopic lenticular-based switchable 2D/3D displays

    NASA Astrophysics Data System (ADS)

    Sluijter, M.; IJzerman, W. L.; de Boer, D. K. G.; de Zwart, S. T.

    2006-04-01

    We discuss residual lens effects in multi-view switchable auto-stereoscopic lenticular-based 2D/3D displays. With the introduction of a switchable lenticular, it is possible to switch between a 2D mode and a 3D mode. The 2D mode displays conventional content, whereas the 3D mode provides the sensation of depth to the viewer. The uniformity of a display in the 2D mode is quantified by the quality parameter modulation depth. In order to reduce the modulation depth in the 2D mode, birefringent lens plates are investigated analytically and numerically, by ray tracing. We can conclude that the modulation depth in the 2D mode can be substantially decreased by using birefringent lens plates with a perfect index match between lens material and lens plate. Birefringent lens plates do not disturb the 3D performance of a switchable 2D/3D display.

  17. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  18. Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang-Yao; Zhou, Xiong-Tu; Guo, Tai-Liang; Yang, Lan; Chen, En-Guo; Zhang, Yong-Ai

    2017-04-01

    Autostereoscopic 3D-LED displays using parallax barriers have several advantages. However, conventional designs do not consider the black stripes of regular LED panels. These cause immeasurable crosstalk owing to excess light from adjacent sub-pixels separated by the panels. To reduce the crosstalk in large-scale displays, we design a barrier in which the black-stripe occupation ratio is defined to quantify the crosstalk level in the LED system. A prototype is assembled and analyzed based on a three-in-one pixel LED-chip panel for a dual-viewpoint display. The improved parallax barrier meets the design requirements and achieves a low crosstalk level. Simulation and experiment results verify the effectiveness of the crosstalk-reduced design.

  19. Three-dimensional display modes for CT colonography: conventional 3D virtual colonoscopy versus unfolded cube projection.

    PubMed

    Vos, Frans M; van Gelder, Rogier E; Serlie, Iwo W O; Florie, Jasper; Nio, C Yung; Glas, Afina S; Post, Frits H; Truyen, Roel; Gerritsen, Frans A; Stoker, Jaap

    2003-09-01

    The authors compared a conventional two-directional three-dimensional (3D) display for computed tomography (CT) colonography with an alternative method they developed on the basis of time efficiency and surface visibility. With the conventional technique, 3D ante- and retrograde cine loops were obtained (hereafter, conventional 3D). With the alternative method, six projections were obtained at 90 degrees viewing angles (unfolded cube display). Mean evaluation time per patient with the conventional 3D display was significantly longer than that with the unfolded cube display. With the conventional 3D method, 93.8% of the colon surface came into view; with the unfolded cube method, 99.5% of the colon surface came into view. Sensitivity and specificity were not significantly different between the two methods. Agreements between observers were kappa = 0.605 for conventional 3D display and kappa = 0.692 for unfolded cube display. Consequently, the latter method enhances the 3D endoluminal display with improved time efficiency and higher surface visibility.

  20. Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens.

    PubMed

    Shen, Xin; Wang, Yu-Jen; Chen, Hung-Shan; Xiao, Xiao; Lin, Yi-Hsin; Javidi, Bahram

    2015-02-15

    We present a three dimensional (3D) micro integral imaging display system with extended depth of focus by using a polarized bifocal liquid crystal lens. This lens and other optical components are combined as the relay optical element. The focal length of the relay optical element can be controlled to project an elemental image array in multiple positions with various lenslet image planes, by applying different voltages to the liquid crystal lens. The depth of focus of the proposed system can therefore be extended. The feasibility of our proposed system is experimentally demonstrated. In our experiments, the depth of focus of the display system is extended from 3.82 to 109.43 mm.

  1. EEG-based cognitive load of processing events in 3D virtual worlds is lower than processing events in 2D displays.

    PubMed

    Dan, Alex; Reiner, Miriam

    2016-08-31

    Interacting with 2D displays, such as computer screens, smartphones, and TVs, is currently a part of our daily routine; however, our visual system is built for processing 3D worlds. We examined the cognitive load associated with a simple and a complex task of learning paper-folding (origami) by observing 2D or stereoscopic 3D displays. While connected to an electroencephalogram (EEG) system, participants watched a 2D video of an instructor demonstrating the paper-folding tasks, followed by a stereoscopic 3D projection of the same instructor (a digital avatar) illustrating identical tasks. We recorded the power of alpha and theta oscillations and calculated the cognitive load index (CLI) as the ratio of the average power of frontal theta (Fz) to parietal alpha (Pz). The results showed a significantly higher cognitive load index associated with processing the 2D projection as compared to the 3D projection. In addition, changes in the average theta Fz power were larger for the 2D conditions than for the 3D conditions, while average alpha Pz power values were similar for 2D and 3D in the less complex task and higher in 3D for the more complex task. The cognitive load index was lower for the easier task and higher for the more complex task in both 2D and 3D. Furthermore, participants with lower spatial abilities benefited more from the 3D than from the 2D display. These findings have implications for understanding the cognitive processing associated with 2D and 3D worlds and for employing stereoscopic 3D technology over 2D displays in designing emerging virtual and augmented reality applications.
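
    A hedged sketch of the cognitive load index described above follows: the ratio of frontal theta power at Fz to parietal alpha power at Pz. The band limits, sampling rate, and the use of Welch's method are common choices assumed for this example, not necessarily those of the study.

```python
# Hedged sketch: cognitive load index as frontal theta power / parietal alpha power.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    band = (f >= lo) & (f <= hi)
    return np.trapz(pxx[band], f[band])

def cognitive_load_index(fz, pz, fs=256):
    theta_fz = band_power(fz, fs, 4.0, 8.0)    # frontal theta (assumed 4-8 Hz)
    alpha_pz = band_power(pz, fs, 8.0, 13.0)   # parietal alpha (assumed 8-13 Hz)
    return theta_fz / alpha_pz

rng = np.random.default_rng(0)
fz = rng.standard_normal(256 * 10)   # 10 s of simulated Fz EEG
pz = rng.standard_normal(256 * 10)   # 10 s of simulated Pz EEG
print(cognitive_load_index(fz, pz))
```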

  2. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance.

    PubMed

    Qiu, Jimmy; Hope, Andrew J; Cho, B C John; Sharpe, Michael B; Dickie, Colleen I; DaCosta, Ralph S; Jaffray, David A; Weersink, Robert A

    2012-10-21

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ∼2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue
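
    As a toy illustration of the final display step, the sketch below alpha-blends a dose colorwash onto an endoscopic frame. It assumes the registration and projection stages (the substance of the paper) have already produced a scalar dose map, here called dose_2d, in the camera image plane; the colormap and the dose-dependent transparency rule are likewise assumptions.

```python
# Hedged sketch: alpha-blend a projected dose map onto an RGB endoscopic frame.
import numpy as np

def overlay_dose(endo_rgb, dose_2d, dose_min, dose_max, alpha_max=0.6):
    """Blend a scalar dose map (e.g. Gy) onto an RGB image with values in [0, 1]."""
    t = np.clip((dose_2d - dose_min) / (dose_max - dose_min), 0.0, 1.0)
    colorwash = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)  # blue -> red
    alpha = (alpha_max * t)[..., None]           # low dose stays mostly transparent
    return (1.0 - alpha) * endo_rgb + alpha * colorwash

frame = np.full((4, 4, 3), 0.5)                  # toy grey endoscopic frame
dose = np.linspace(0.0, 70.0, 16).reshape(4, 4)  # toy projected dose values
blended = overlay_dose(frame, dose, 20.0, 70.0)
print(blended[0, 0], blended[-1, -1])            # low-dose vs high-dose pixel
```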

  3. Membrane-mirror-based display for viewing 2D and 3D images

    NASA Astrophysics Data System (ADS)

    McKay, Stuart; Mason, Steven; Mair, Leslie S.; Waddell, Peter; Fraser, Simon M.

    1999-05-01

    Stretchable Membrane Mirrors (SMMs) have been developed at the University of Strathclyde as a cheap, lightweight and variable-focal-length alternative to conventional fixed-curvature glass-based optics. An SMM uses a thin sheet of aluminized polyester film which is stretched over a specially shaped frame, forming an airtight cavity behind the membrane. Removal of air from that cavity causes the resulting air-pressure difference to force the membrane back into a concave shape. Controlling the pressure difference acting over the membrane therefore controls the curvature, or f/No., of the mirror. Mirrors from 0.15 m to 1.2 m in diameter have been constructed at the University of Strathclyde. The use of lenses and mirrors to project real images in space is perhaps one of the simplest forms of 3D display. When using conventional optics, however, there are severe financial restrictions on the size of image-forming element that may be used, hence the appeal of an SMM. The mirrors have been used both as image-forming elements and as directional screens in volumetric, stereoscopic and large-format simulator displays. It was found that the use of these specular reflecting surfaces greatly enhances the perceived image quality of the resulting magnified display.

  4. Reproducibility of crosstalk measurements on active glasses 3D LCD displays based on temporal characterization

    NASA Astrophysics Data System (ADS)

    Tourancheau, Sylvain; Wang, Kun; Bułat, Jarosław; Cousseau, Romain; Janowski, Lucjan; Brunnström, Kjell; Barkowsky, Marcus

    2012-03-01

    Crosstalk is one of the main display-related perceptual factors degrading image quality and causing visual discomfort on 3D displays. It causes visual artifacts such as ghosting, blurring, and lack of color fidelity, which are considerably annoying and can make it difficult to fuse stereoscopic images. On stereoscopic LCDs with shutter glasses, crosstalk is mainly due to dynamic temporal effects: imprecise target luminance (highly dependent on the combination of left-view and right-view pixel color values in disparity regions) and synchronization issues between the shutter glasses and the LCD. These factors largely influence the reproducibility of crosstalk measurements across laboratories and need to be evaluated in several different locations involving similar and differing conditions. In this paper we propose a fast and reproducible measurement procedure for crosstalk based on high-frequency temporal measurements of both the display and shutter responses. It permits full characterization of crosstalk for any right/left color combination and at any spatial position on the screen. Such a reliable objective crosstalk measurement method at several spatial positions is considered a mandatory prerequisite for evaluating the perceptual influence of crosstalk in further subjective studies.
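
    For context, a commonly used definition of crosstalk (an assumption here; the paper derives the quantity from temporally resolved display and shutter responses rather than from static test patterns) expresses the leakage into, for example, the left eye as

```latex
% Hedged, commonly used crosstalk definition; W/K denote full-white/full-black frames.
\[
\mathrm{XT}_{\mathrm{left}}
  = \frac{L_{\mathrm{left}}(K_{\mathrm{L}},\,W_{\mathrm{R}}) - L_{\mathrm{left}}(K_{\mathrm{L}},\,K_{\mathrm{R}})}
         {L_{\mathrm{left}}(W_{\mathrm{L}},\,K_{\mathrm{R}}) - L_{\mathrm{left}}(K_{\mathrm{L}},\,K_{\mathrm{R}})}
\]
```

    where L_left(·,·) is the luminance measured through the left shutter for the given left/right test patterns; the right-eye term is defined symmetrically.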

  5. System status display evaluation

    NASA Technical Reports Server (NTRS)

    Summers, Leland G.

    1988-01-01

    The System Status Display is an electronic display system which provides the crew with an enhanced capability for monitoring and managing the aircraft systems. A flight simulation in a fixed base cockpit simulator was used to evaluate alternative design concepts for this display system. The alternative concepts included pictorial versus alphanumeric text formats, multifunction versus dedicated controls, and integration of the procedures with the system status information versus paper checklists. Twelve pilots manually flew approach patterns with the different concepts. System malfunctions occurred which required the pilots to respond to the alert by reconfiguring the system. The pictorial display, the multifunction control interfaces collocated with the system display, and the procedures integrated with the status information all had shorter event processing times and lower subjective workloads.

  6. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  7. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  8. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type of neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
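
    A minimal fuzzy c-means sketch follows, showing the membership and centroid update equations that AFLC reuses when re-classifying cluster centers. The ART-like vigilance/competitive stage of AFLC is not reproduced here, and the fuzzifier m = 2 and random initialisation are illustrative choices.

```python
# Hedged sketch: plain fuzzy c-means updates (memberships, then centroids).
import numpy as np

def fcm(X, c, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        um = u ** m                                    # fuzzified memberships
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

# Two toy Gaussian clusters in 2-D
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3.0, 0.3, (50, 2))])
centers, u = fcm(X, c=2)
print(centers.round(2))
```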

  9. 3D deterministic lateral displacement separation systems

    NASA Astrophysics Data System (ADS)

    Du, Siqi; Drazer, German

    2016-11-01

    We present a simple modification to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move parallel to the basal plane of the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems and the observed trajectories can be predicted based on a model developed in the 2D case. More importantly, we analyze the particles out-of-plane motion and observe significant differences in the net displacement depending on particle size. Therefore, taking advantage of both the in-plane and out-of-plane motion of the particles, it is possible to achieve the simultaneous fractionation of a polydisperse suspension into multiple streams. We also discuss other modifications to the obstacle array and driving forces that could enhance separation in microfluidic devices.

  10. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  11. Crosstalk minimization in autostereoscopic multiveiw 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eyeglasses, but suffers from a reduced 3D effect and dizziness due to crosstalk. Crosstalk-related problems degrade the 3D effect, clearness, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion (overlapping) of viewing zones and real-time eye-position tracking. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  12. Open-GL-based stereo system for 3D measurements

    NASA Astrophysics Data System (ADS)

    Boochs, Frank; Gehrhoff, Anja; Neifer, Markus

    2000-05-01

    A stereo system designed and used for the measurement of 3D coordinates within metric stereo image pairs is presented. First, the motivation for the development, namely the need to evaluate stereo images, is given. As the use and availability of digital metric images rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either highly specialized ones, built on high-end graphics workstations with correspondingly high prices, or simple ones with restricted measuring functionality. A new concept is shown that avoids special high-end graphics hardware while providing the measuring functionality required. The presented stereo system is based on PC hardware equipped with a graphics board and uses an object-oriented programming technique. The specific needs of a measuring system and the corresponding requirements that have to be met by the system are described. The key role of OpenGL is explained: it supplies elementary graphics functions that are directly supported by graphics boards and thus provides the performance needed. Further important aspects, such as modularity and hardware independence, and their value for the solution are shown. Finally, some sample functions concerned with image display and handling are presented in more detail.

  13. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium. However, it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational content and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  14. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operating-room coordinate system and the image coordinate system; and the image display system showed the 3D CG image in the field of view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. The accuracy of our system was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. The target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.
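
    The sketch below shows the standard closed-form (SVD-based, Kabsch/Horn) solution for point-pair rigid registration between two coordinate systems, the kind of step used to align the operating-room and image coordinate systems. It is not necessarily the exact routine used by the navigation system described above.

```python
# Hedged sketch: paired-point rigid registration via the Kabsch/Horn SVD solution.
import numpy as np

def point_pair_registration(src, dst):
    """Return R, t minimising ||R @ src_i + t - dst_i|| over all point pairs."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)               # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.random((6, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([10.0, -2.0, 5.0])
R, t = point_pair_registration(src, dst)
print(np.allclose(R, Rz), t.round(3))
```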

  15. Arctic Research Mapping Application 3D Geobrowser: Accessing and Displaying Arctic Information From the Desktop to the Web

    NASA Astrophysics Data System (ADS)

    Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.

    2009-12-01

    ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database, providing a valuable visual tool for science management and logistical planning: who is doing what type of research, and where. Users can “fly to” study sites, view receding glaciers in 3D and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place name and year, and to display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) build 900, providing an updated application from build 500. AGX applications provide users the ability to integrate their own spatial data with various data layers provided by ArcOnline (http://resources.esri.com/arcgisonlineservices). Users can add many types of data, including OGC web services, without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that support Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.

  16. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with PhotoScan©.

  17. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D display, the time difference between the left and right images makes a viewer perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images then no longer match, which makes viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by relying on precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion-compensation method to eliminate this parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed eliminating method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
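
    The abstract does not spell out the compensation formula, so the following Python fragment is only a minimal sketch under simple assumptions: the right-eye field is assumed to lag the left-eye field by half the frame period, so an object moving vertically on screen at v pixels per second acquires a spurious vertical parallax of roughly v times that lag, which the compensation removes by pre-shifting the lagging field. Function names and numbers are illustrative, not taken from the paper.

      # Minimal sketch (not the authors' implementation) of motion-compensated
      # vertical-parallax removal for a time-division (field-sequential) 3D display.
      # Assumption: the right field is shown dt = 1 / (2 * frame_rate) seconds after
      # the left field, so a vertically moving object acquires a spurious vertical
      # parallax of v_screen * dt pixels between the two perceived images.

      import numpy as np

      def spurious_vertical_parallax(v_screen_px_per_s, frame_rate_hz):
          """Vertical parallax (pixels) induced by the L/R time offset."""
          dt = 1.0 / (2.0 * frame_rate_hz)          # time lag of the right field
          return v_screen_px_per_s * dt

      def compensate_right_field(right_field, v_screen_px_per_s, frame_rate_hz):
          """Pre-shift the lagging right field vertically so that, at display time,
          the moving object lines up with the left field."""
          shift_px = spurious_vertical_parallax(v_screen_px_per_s, frame_rate_hz)
          shift_rows = int(round(shift_px))
          # np.roll is used for brevity; a real implementation would resample
          # sub-pixel shifts and pad the exposed border.
          return np.roll(right_field, -shift_rows, axis=0)

      # Example: 120 Hz time-division display, object moving down at 480 px/s
      # -> spurious parallax of 2 px, removed by shifting the right field up 2 px.
      frame = np.zeros((1080, 1920), dtype=np.uint8)
      print(spurious_vertical_parallax(480.0, 120.0))   # 2.0
      compensated = compensate_right_field(frame, 480.0, 120.0)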

  18. Fabrication of Large-Scale Microlens Arrays Based on Screen Printing for Integral Imaging 3D Display.

    PubMed

    Zhou, Xiongtu; Peng, Yuyan; Peng, Rong; Zeng, Xiangyao; Zhang, Yong-Ai; Guo, Tailiang

    2016-09-14

    The low-cost, large-scale fabrication of microlens arrays (MLAs) with precise alignment, uniform focusing, and good converging performance is of great importance for integral imaging 3D display. In this work, a simple and effective method for fabricating large-scale polymer microlens arrays by screen printing is presented. The results show that the MLAs possess high-quality surface morphology and excellent optical performance. Furthermore, the microlens shape and size, i.e., the diameter, the height, and the distance between two adjacent microlenses, can be easily controlled by modifying the reflow time and the size of the open apertures of the screen. MLAs in which neighboring microlenses are almost tangent can be achieved with a suitable aperture size and reflow time; this remarkably reduces the color moiré patterns caused by stray light passing between the blank areas of the MLAs in an integral imaging 3D display system, yielding much better reconstruction performance.

  19. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system provided 3D viewing of the breast mass with head-position tracking, stereoscopic depth perception, focal-point convergence, and a 3D cursor, and the joystick enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  20. On the Uncertain Future of the Volumetric 3D Display Paradigm

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2017-06-01

    Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.

  1. Synthetic 3D multicellular systems for drug development.

    PubMed

    Rimann, Markus; Graf-Hausner, Ursula

    2012-10-01

    Since the 1970s, the limitations of two-dimensional (2D) cell culture and the relevance of appropriate three-dimensional (3D) cell systems have become increasingly evident. Extensive effort has thus been made to move cells from a flat world to a 3D environment. While 3D cell culture technologies are now widely used in academia, 2D culture technologies are still entrenched in the (pharmaceutical) industry for most kinds of cell-based efficacy and toxicology tests. However, 3D cell culture technologies will certainly become more applicable if biological relevance, reproducibility and high throughput can be assured at acceptable costs. Most recent innovations and developments clearly indicate that the transition from 2D to 3D cell culture for industrial purposes, for example drug development, is simply a question of time.

  2. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: it draws a 3-D representation of a molecule from Cartesian coordinates using a stick, ball-and-stick, or space-filled model; draws different perspective views of the molecule; rotates the molecule about the X, Y or Z axis or about an arbitrary line in space; zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.
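
    The original MOLDS code is not reproduced in the record, so the Python fragment below is only an illustrative sketch of one listed capability, rotating a molecule's Cartesian coordinates about an arbitrary line in space, using Rodrigues' rotation formula; all names are hypothetical.

      # Minimal sketch (not the original MOLDS code) of rotating a molecule's
      # Cartesian coordinates about an arbitrary line in space, via Rodrigues'
      # rotation formula.  axis_point, axis_dir and atoms are illustrative names.

      import numpy as np

      def rotate_about_line(coords, axis_point, axis_dir, angle_rad):
          """Rotate an (N, 3) array of atom coordinates about the line that passes
          through axis_point with direction axis_dir."""
          k = np.asarray(axis_dir, dtype=float)
          k /= np.linalg.norm(k)                       # unit axis
          p = np.asarray(coords, dtype=float) - axis_point
          cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
          # Rodrigues: v' = v cos + (k x v) sin + k (k.v)(1 - cos)
          rotated = (p * cos_a
                     + np.cross(k, p) * sin_a
                     + np.outer(p @ k, k) * (1.0 - cos_a))
          return rotated + axis_point

      # Example: spin a 3-atom fragment 90 degrees about the z-axis through the origin.
      atoms = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
      print(rotate_about_line(atoms, [0, 0, 0], [0, 0, 1], np.pi / 2))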

  3. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization within the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation 3D city structure, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. Also, the proposed framework is easily extensible and supports geo-indexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
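
    As an illustration of the processing split described above (geometry via Map-Reduce, semantics via database queries), the following toy Python sketch maps each building's 3D geometry to a surface area and reduces the results per district; the records and the metric are invented for the example and are not the authors' implementation.

      # Toy sketch of the split described in the abstract (not the authors' code):
      # bulky 3D geometry is handled by a map/reduce pass, while semantic
      # attributes (name, height, price, ...) stay in plain database queries.
      # The building records and the triangle-area metric are illustrative.

      from functools import reduce

      buildings = [
          {"id": "b1", "district": "east", "triangles": [((0, 0, 0), (1, 0, 0), (0, 1, 0))]},
          {"id": "b2", "district": "east", "triangles": [((0, 0, 0), (2, 0, 0), (0, 2, 0))]},
          {"id": "b3", "district": "west", "triangles": [((0, 0, 0), (1, 0, 0), (0, 3, 0))]},
      ]

      def tri_area(a, b, c):
          """Area of one 3D triangle via the cross-product magnitude."""
          ux, uy, uz = (b[i] - a[i] for i in range(3))
          vx, vy, vz = (c[i] - a[i] for i in range(3))
          cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
          return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

      # map: emit (district, surface area) per building
      mapped = [(b["district"], sum(tri_area(*t) for t in b["triangles"])) for b in buildings]

      # reduce: accumulate area per district
      def accumulate(acc, pair):
          district, area = pair
          acc[district] = acc.get(district, 0.0) + area
          return acc

      print(reduce(accumulate, mapped, {}))   # {'east': 2.5, 'west': 1.5}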

  4. A 3D digital medical photography system in paediatric medicine.

    PubMed

    Williams, Susanne K; Ellis, Lloyd A; Williams, Gigi

    2008-01-01

    In 2004, traditional clinical photography services at the Educational Resource Centre were extended using new technology. This paper describes the establishment of a 3D digital imaging system in a paediatric setting at the Royal Children's Hospital, Melbourne.

  5. Gastric Contraction Imaging System Using a 3-D Endoscope.

    PubMed

    Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Takeda, Maki; Nishimura, Takahiro; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2014-01-01

    This paper presents a gastric contraction imaging system for assessment of gastric motility using a 3-D endoscope. Gastrointestinal diseases are mainly based on morphological abnormalities. However, gastrointestinal symptoms are sometimes apparent without visible abnormalities. One of the major factors for these diseases is abnormal gastrointestinal motility. For assessment of gastric motility, a gastric motility imaging system is needed. To assess the dynamic motility of the stomach, the proposed system measures 3-D gastric contractions derived from a 3-D profile of the stomach wall obtained with a developed 3-D endoscope. After obtaining contraction waves, their frequency, amplitude, and speed of propagation can be calculated using a Gaussian function. The proposed system was evaluated for 3-D measurements of several objects with known geometries. The results showed that the surface profiles could be obtained with an error of [Formula: see text] of the distance between two different points on images. Subsequently, we evaluated the validity of a prototype system using a wave simulated model. In the experiment, the amplitude and position of waves could be measured with 1-mm accuracy. The present results suggest that the proposed system can measure the speed and amplitude of contractions. This system has low invasiveness and can assess the motility of the stomach wall directly in a 3-D manner. Our method can be used for examination of gastric morphological and functional abnormalities.
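
    A minimal Python sketch of the Gaussian-based wave analysis mentioned above is given below: a Gaussian is fitted to a one-dimensional depth profile of the stomach wall to recover contraction amplitude and position, and tracking the fitted position from frame to frame would give the propagation speed. The synthetic profile and parameter names are illustrative, not the authors' code.

      # Sketch (not the authors' code) of Gaussian-based contraction-wave analysis:
      # fit a Gaussian bump to a 1-D depth profile of the stomach wall to recover
      # contraction amplitude and position.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, amplitude, center, width, offset):
          return amplitude * np.exp(-((x - center) ** 2) / (2.0 * width ** 2)) + offset

      # Synthetic profile standing in for one scan line of the 3-D endoscope data.
      x_mm = np.linspace(0.0, 50.0, 200)
      profile = gaussian(x_mm, 3.0, 20.0, 4.0, 10.0) + np.random.normal(0.0, 0.1, x_mm.size)

      params, _ = curve_fit(gaussian, x_mm, profile, p0=[1.0, 25.0, 5.0, 10.0])
      amplitude, center, width, offset = params
      print(f"contraction amplitude ~ {amplitude:.2f} mm at x ~ {center:.2f} mm")

      # Propagation speed: fit successive frames and divide the change in `center`
      # by the frame interval, e.g. (center_t2 - center_t1) / dt.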

  6. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to reduce concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure). X-ray illumination

  7. US-CT 3D dual imaging by mutual display of the same sections for depicting minor changes in hepatocellular carcinoma.

    PubMed

    Fukuda, Hiroyuki; Ito, Ryu; Ohto, Masao; Sakamoto, Akio; Otsuka, Masayuki; Togawa, Akira; Miyazaki, Masaru; Yamagata, Hitoshi

    2012-09-01

    The purpose of this study was to evaluate the usefulness of ultrasound-computed tomography (US-CT) 3D dual imaging for the detection of small extranodular growths of hepatocellular carcinoma (HCC). The clinical and pathological profiles of 10 patients with single-nodular-type HCC with extranodular growth who underwent a hepatectomy were evaluated using two-dimensional (2D) ultrasonography (US), three-dimensional (3D) US, 3D computed tomography (CT) and 3D US-CT dual images. Raw 3D data were converted to DICOM (Digital Imaging and Communication in Medicine) data using Echo to CT (Toshiba Medical Systems Corp., Tokyo, Japan), and the 3D DICOM data were directly transferred to the image analysis system (ZioM900, ZIOSOFT Inc., Tokyo, Japan). By inputting the (x, y, z) angle values of the 3D CT volume data into the ZioM900, multiplanar reconstruction (MPR) images of the 3D CT data were displayed in a manner such that they resembled the conventional US images. Eleven extranodular growths were detected pathologically in the 10 cases. 2D US was capable of depicting only 2 of the 11 extranodular growths, and 3D CT was capable of depicting 4 of the 11. On the other hand, 3D US was capable of depicting 10 of the 11 extranodular growths, and the 3D US-CT dual images, which enable the dual analysis of the CT and US planes, revealed all 11 extranodular growths. In conclusion, US-CT 3D dual imaging may be useful for the detection of small extranodular growths.

  8. The impact of computer display height and desk design on 3D posture during information technology work by young adults.

    PubMed

    Straker, L; Burgess-Limerick, R; Pollock, C; Murray, K; Netto, K; Coleman, J; Skoss, R

    2008-04-01

    Computer display height and desk design to allow forearm support are two critical design features of workstations for information technology tasks. However there is currently no 3D description of head and neck posture with different computer display heights and no direct comparison to paper based information technology tasks. There is also inconsistent evidence on the effect of forearm support on posture and no evidence on whether these features interact. This study compared the 3D head, neck and upper limb postures of 18 male and 18 female young adults whilst working with different display and desk design conditions. There was no substantial interaction between display height and desk design. Lower display heights increased head and neck flexion with more spinal asymmetry when working with paper. The curved desk, designed to provide forearm support, increased scapula elevation/protraction and shoulder flexion/abduction.

  9. 3D objects enlargement technique using an optical system and multiple SLMs for electronic holography.

    PubMed

    Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori; Oi, Ryutaro; Kurita, Taiichiro

    2012-09-10

    One problem in electronic holography, caused by the display performance of spatial light modulators (SLMs), is that the reconstructed 3D objects are small. Although methods for increasing the size using multiple SLMs have been considered, they typically suffered either from parts of the 3D objects missing as a result of the gaps between adjacent SLMs or from loss of the vertical parallax. This paper proposes a method of resolving this problem by placing an optical system containing a lens array and other components in front of multiple SLMs. We used such an optical system and 9 SLMs to construct a device equivalent to an SLM with approximately 74,600,000 pixels, and used it to reconstruct 3D objects with both horizontal and vertical parallax at an image size of 63 mm without losing any part of the 3D objects.

  10. Quantitative measurement of eyestrain on 3D stereoscopic display considering the eye foveation model and edge information.

    PubMed

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-05-15

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and the gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and an experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.
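
    The first step described above can be illustrated with a short Python sketch (not the authors' implementation): inside a circle whose radius equals the gaze-estimation error, the pixel with the largest gradient magnitude, standing in for edge strength, is taken as the refined gaze position.

      # Sketch of the gaze-refinement step: inside a circle of radius gaze_error_px
      # around the estimated gaze point, take the pixel of maximal edge strength as
      # the more probable gaze position.  Gradient magnitude stands in for the
      # paper's edge-strength measure.

      import numpy as np

      def refine_gaze(gray_frame, gaze_xy, gaze_error_px):
          gy, gx = np.gradient(gray_frame.astype(float))
          edge_strength = np.hypot(gx, gy)

          h, w = gray_frame.shape
          ys, xs = np.mgrid[0:h, 0:w]
          inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= gaze_error_px ** 2

          masked = np.where(inside, edge_strength, -np.inf)
          best = np.unravel_index(np.argmax(masked), masked.shape)
          return best[1], best[0]          # (x, y) of the refined gaze point

      # Example with a synthetic frame containing one strong vertical edge.
      frame = np.zeros((120, 160))
      frame[:, 80:] = 255.0
      print(refine_gaze(frame, gaze_xy=(75, 60), gaze_error_px=10))  # lands near x = 80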

  11. Expanding the degree of freedom of observation on depth-direction by the triple-separated slanted parallax barrier in autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Kwang-Hoon; Choe, Yeong-Seon; Lee, Dong-Kil; Kim, Yang-Gyu; Park, Youngsik; Park, Min-Chul

    2013-05-01

    Autostereoscopic multiview 3D display systems offer narrower freedom of observation, both horizontally and in the direction perpendicular to the display plane, than glasses-type systems. In this paper, we propose a method that expands the width of the viewing zone in the depth direction while keeping the number of views in the horizontal direction, by using a triple segmented-slanted parallax barrier (TS-SPB) in a glasses-free 3D display. The validity of the proposal is verified by optical simulation in an environment similar to an actual case. In terms of benefits, the maximum number of views displayed in the horizontal direction is 2n, and the width of the viewing zone in the depth direction is increased by up to 3.36 times compared to the existing single-layer parallax barrier system.

  12. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters, in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system are used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or satellite links) using a 3D computer model of the area that is rendered from actual sensor data.

  13. Proposed traceable structural resolution protocols for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François

    2009-08-01

    A protocol for determining structural resolution using a potentially traceable reference material is proposed. Where possible, terminology was selected to conform to that published in the ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, which obtain spatial data from the total field of view at once, and 3D range scanners, which accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.

  14. Integral imaging-based large-scale full-color 3-D display of holographic data by using a commercial LCD panel.

    PubMed

    Dong, Xiao-Bin; Ai, Ling-Yu; Kim, Eun-Soo

    2016-02-22

    We propose a new type of integral imaging-based large-scale full-color three-dimensional (3-D) display of holographic data based on direct ray-optical conversion of holographic data into elemental images (EIs). In the proposed system, a 3-D scene is modeled as a collection of depth-sliced object images (DOIs), and three-color hologram patterns for that scene are generated by interfering each color DOI with a reference beam, and summing them all based on Fresnel convolution integrals. From these hologram patterns, full-color DOIs are reconstructed, and converted into EIs using a ray mapping-based direct pickup process. These EIs are then optically reconstructed to be a full-color 3-D scene with perspectives on the depth-priority integral imaging (DPII)-based 3-D display system employing a large-scale LCD panel. Experiments with a test video confirm the feasibility of the proposed system in the practical application fields of large-scale holographic 3-D displays.
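
    The hologram-generation step described above can be illustrated, for a single colour and an on-axis plane reference, by the following numpy sketch: each depth-sliced object image is Fresnel-propagated to the hologram plane with a transfer-function method and the summed field is interfered with the reference. Wavelength, pixel pitch and slice depths are illustrative values, not those of the paper.

      # Toy numerical sketch (single colour, on-axis reference) of generating a
      # hologram from depth-sliced object images (DOIs): Fresnel-propagate each
      # slice to the hologram plane, sum the fields, interfere with the reference.

      import numpy as np

      def fresnel_propagate(field, wavelength, z, pixel_pitch):
          """Propagate a complex field by distance z with the Fresnel transfer function."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=pixel_pitch)
          fy = np.fft.fftfreq(ny, d=pixel_pitch)
          FX, FY = np.meshgrid(fx, fy)
          H = np.exp(1j * 2 * np.pi * z / wavelength) * \
              np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
          return np.fft.ifft2(np.fft.fft2(field) * H)

      wavelength = 532e-9          # m (illustrative)
      pitch = 8e-6                 # panel pixel pitch, m (illustrative)
      depth_slices = {0.10: np.random.rand(256, 256),    # DOI at 10 cm
                      0.12: np.random.rand(256, 256)}    # DOI at 12 cm

      object_field = np.zeros((256, 256), dtype=complex)
      for z, doi in depth_slices.items():
          object_field += fresnel_propagate(doi.astype(complex), wavelength, z, pitch)

      reference = np.ones_like(object_field)             # on-axis plane reference
      hologram = np.abs(object_field + reference) ** 2   # recorded interference pattern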

  15. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  16. Multiview holographic 3D dynamic display by combining a nano-grating patterned phase plate and LCD.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Ye, Yan; Chen, Xiangyu; Chen, Linsen

    2017-01-23

    Limited by the refreshable data volume of commercial spatial light modulators (SLMs), electronic holography can hardly provide satisfactory 3D live video. Here we propose a holography-based multiview 3D display that separates the phase information of a lightfield from the amplitude information. In this paper, the phase information was recorded by a 5.5-inch 4-view phase plate with full coverage of pixelated nano-grating arrays. Because only the amplitude information needs to be updated, the refreshing data volume in a 3D video display was significantly reduced. A 5.5-inch TFT-LCD with a pixel size of 95 μm was used to modulate the amplitude information of a lightfield at a rate of 20 frames per second. To avoid crosstalk between viewing points, the spatial frequency and orientation of each nano-grating in the phase plate were fine-tuned. As a result, the transmitted light converged to the viewing points. The angular divergence was measured to be 1.02 degrees (FWHM) on average, slightly larger than the diffraction limit of 0.94 degrees. By refreshing the LCD, a series of animated sequential 3D images was dynamically presented at 4 viewing points. The resolution of each view was 640 × 360. Images for each viewing point were well separated and no ghost images were observed. The resolution of the image and the refreshing rate in the 3D dynamic display can be easily improved by employing another SLM. The recorded 3D videos show the great potential of the proposed holographic 3D display for use in mobile electronics.
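
    As a rough illustration of how a nano-grating's period and orientation might be chosen so that first-order diffracted light from a given pixel converges on a viewing point, the sketch below applies the scalar grating equation at normal incidence; the geometry and wavelength are invented, and the real design procedure in the paper may differ.

      # Back-of-the-envelope sketch (not the authors' design code): period and
      # in-plane orientation of a grating that sends first-order diffracted light
      # from a pixel towards a viewing point, assuming normal incidence and the
      # scalar grating equation sin(theta) = wavelength / period.

      import math

      def grating_for_viewpoint(pixel_xy, viewpoint_xyz, wavelength):
          dx = viewpoint_xyz[0] - pixel_xy[0]
          dy = viewpoint_xyz[1] - pixel_xy[1]
          dz = viewpoint_xyz[2]
          sin_theta = math.hypot(dx, dy) / math.sqrt(dx * dx + dy * dy + dz * dz)
          period = wavelength / sin_theta                   # grating pitch (m)
          orientation = math.degrees(math.atan2(dy, dx))    # grating-vector direction
          return period, orientation

      # Pixel 30 mm right of the display centre, viewer 300 mm away and 20 mm left:
      period, orient = grating_for_viewpoint((0.03, 0.0), (-0.02, 0.0, 0.30), 532e-9)
      print(f"period ~ {period * 1e9:.0f} nm, grating vector at {orient:.1f} deg")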

  17. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
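
    Such stereo-vision scanners ultimately rely on the standard rectified-stereo relation between disparity and depth; the short sketch below shows that relation (not the authors' calibration or matching pipeline), with illustrative focal length and baseline values.

      # Minimal sketch of the stereo relation such systems rely on: after stereo
      # matching yields a disparity d (pixels) for a body point, its depth follows
      # from Z = f * B / d for a rectified camera pair.

      import numpy as np

      def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
          """Depth map (metres) from a disparity map (pixels); NaN where d <= 0."""
          disparity = np.asarray(disparity_px, dtype=float)
          depth = np.full_like(disparity, np.nan)
          valid = disparity > 0
          depth[valid] = focal_length_px * baseline_m / disparity[valid]
          return depth

      # Example: 1000 px focal length, 15 cm baseline, 60 px disparity -> 2.5 m.
      print(depth_from_disparity(np.array([[60.0]]), 1000.0, 0.15))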

  18. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second, with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. It can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  19. Full-parallax 3D display from single-shot Kinect capture

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Dorado, Adrián.; Saavedra, Genaro; Martínez-Corral, Manuel; Shin, Donghak; Lee, Byung-Gook

    2015-05-01

    We propose the fusion of two concepts that are very successful in the area of 3D imaging and sensing. Kinect technology permits the registration, in real time but with low resolution, of accurate depth maps of big, opaque, diffusing 3D scenes. Our proposal consists of transforming the sampled depth map provided by the Kinect technology into an array of microimages whose position, pitch and resolution are in good accordance with the characteristics of an integral-imaging monitor. By projecting this information onto such a monitor we are able to produce 3D images with continuous perspective and full parallax.

  20. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper presents a new framework for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis, such as object matching, classification and retrieval, and do not deal exclusively with video retrieval. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine a bag-of-visual-words (BOVW) representation with MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user as feedback. To handle the prodigious amount of data and ensure efficient retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that the proposed work gives accurate results while also reducing time complexity. PMID:28003793

  1. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new framework for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis, such as object matching, classification and retrieval, and do not deal exclusively with video retrieval. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine a bag-of-visual-words (BOVW) representation with MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user as feedback. To handle the prodigious amount of data and ensure efficient retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that the proposed work gives accurate results while also reducing time complexity.

  2. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  3. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Increasing resource efficiency through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  4. Systems biology in 3D space--enter the morphome.

    PubMed

    Lucocq, John M; Mayhew, Terry M; Schwab, Yannick; Steyer, Anna M; Hacker, Christian

    2015-02-01

    Systems-based understanding of living organisms depends on acquiring huge datasets from arrays of genes, transcripts, proteins, and lipids. These data, referred to as 'omes', are assembled using 'omics' methodologies. Currently a comprehensive, quantitative view of cellular and organellar systems in 3D space at nanoscale/molecular resolution is missing. We introduce here the term 'morphome' for the distribution of living matter within a 3D biological system, and 'morphomics' for methods of collecting 3D data systematically and quantitatively. A sampling-based approach termed stereology currently provides rapid, precise, and minimally biased morphomics. We propose that stereology solves the 'big data' problem posed by emerging wide-scale electron microscopy (EM) and can establish quantitative links between the newer nanoimaging platforms such as electron tomography, cryo-EM, and correlative microscopy.

  5. Influence of limited random-phase of objects on the image quality of 3D holographic display

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and the appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images are performed, showing that objects with a limited phase range can suppress the speckle noise in reconstructed images effectively. Because of its effectiveness and simplicity, the method is expected to yield high-quality reconstructed images in 2D or 3D display.
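
    The basic idea can be illustrated with a small numpy sketch, which is not the authors' simulation: each frame assigns the object a random phase confined to a limited range, the field is blurred by a coherent point-spread function standing in for the display optics, and the speckled intensities are averaged over frames so that the speckle contrast drops.

      # Conceptual sketch of limited-random-phase time averaging: per frame, apply
      # a random phase in [-phase_range/2, +phase_range/2], blur by a coherent PSF
      # (stand-in for the finite-aperture display), and average the intensities.

      import numpy as np

      def coherent_image(amplitude, phase_range, psf, rng):
          phase = rng.uniform(-phase_range / 2, phase_range / 2, amplitude.shape)
          field = amplitude * np.exp(1j * phase)
          blurred = np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(psf))  # circular conv.
          return np.abs(blurred) ** 2

      def averaged_image(amplitude, phase_range, psf, n_frames, seed=0):
          rng = np.random.default_rng(seed)
          frames = [coherent_image(amplitude, phase_range, psf, rng) for _ in range(n_frames)]
          return np.mean(frames, axis=0)

      # A small Gaussian stands in for the coherent point-spread function.
      y, x = np.mgrid[-64:64, -64:64]
      psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
      psf /= psf.sum()

      flat_object = np.ones((128, 128))
      one_frame = averaged_image(flat_object, np.pi, psf, n_frames=1)
      averaged = averaged_image(flat_object, np.pi, psf, n_frames=16)
      contrast = lambda img: img.std() / img.mean()
      print(contrast(one_frame), contrast(averaged))   # speckle contrast drops with averaging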

  6. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  7. Extensible 3D (X3D) Graphics Clouds for Geographic Information Systems

    DTIC Science & Technology

    2008-03-01

    browser such as Microsoft Internet Explorer or Netscape using an X3D or VRML supporting plug-in. The benefits of diverse support can cause...typing model output with a particular method of 3D cloud production. Data-driven adaptation and production of cloud models for web-based delivery...and production of cloud models for web-based delivery is an achievable capability given continued research and development.

  8. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  9. A 3-D measurement system using object-oriented FORTH

    SciTech Connect

    Butterfield, K.B.

    1989-01-01

    Discussed is a system for storing 3-D measurements of points that relates the coordinate system of the measurement device to the global coordinate system. The program described here used object-oriented FORTH to store the measured points as sons of the measuring device location. Conversion of local coordinates to absolute coordinates is performed by passing messages to the point objects. Modifications to the object-oriented FORTH system are also described. 1 ref.
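
    Since the original object-oriented FORTH listing is not included in the record, the following Python sketch only illustrates the described idea: each measured point is stored as a child of the measuring-device location, and asking the point for its absolute position makes it apply its parent's pose. Class and attribute names are hypothetical.

      # Python sketch (the original is object-oriented FORTH, not shown here) of
      # storing measured points as children of the measuring-device location and
      # converting local to absolute coordinates by "passing a message" to a point.

      import numpy as np

      class DeviceLocation:
          """Pose of the measuring device in the global frame: rotation + origin."""
          def __init__(self, rotation, origin):
              self.rotation = np.asarray(rotation, dtype=float)   # 3x3
              self.origin = np.asarray(origin, dtype=float)       # 3

          def to_global(self, local_xyz):
              return self.rotation @ np.asarray(local_xyz, dtype=float) + self.origin

      class MeasuredPoint:
          """A point measured in the device frame; it is a 'son' of that device."""
          def __init__(self, device, local_xyz):
              self.device = device
              self.local_xyz = local_xyz

          def absolute(self):
              # the point asks its parent device to convert its coordinates
              return self.device.to_global(self.local_xyz)

      # Device rotated 90 degrees about z and shifted 1 m along x:
      pose = DeviceLocation([[0, -1, 0], [1, 0, 0], [0, 0, 1]], [1.0, 0.0, 0.0])
      p = MeasuredPoint(pose, [0.5, 0.0, 0.0])
      print(p.absolute())    # [1.0, 0.5, 0.0]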

  10. Adipose tissue-derived stem cells display a proangiogenic phenotype on 3D scaffolds.

    PubMed

    Neofytou, Evgenios A; Chang, Edwin; Patlola, Bhagat; Joubert, Lydia-Marie; Rajadas, Jayakumar; Gambhir, Sanjiv S; Cheng, Zhen; Robbins, Robert C; Beygui, Ramin E

    2011-09-01

    Ischemic heart disease is the leading cause of death worldwide. Recent studies suggest that adipose tissue-derived stem cells (ASCs) can be used as a potential source for cardiovascular tissue engineering due to their ability to differentiate along the cardiovascular lineage and to adopt a proangiogenic phenotype. To better understand the biology of ASCs, we used a novel 3D culture device. ASC and b.END-3 endothelial cell proliferation, migration, and vessel morphogenesis were significantly enhanced compared to 2D culturing techniques. ASCs were isolated from inguinal fat pads of 6-week-old GFP+/BLI+ mice. Early passage ASCs (P3-P4), PKH26-labeled murine b.END-3 cells, or a co-culture of ASCs and b.END-3 cells were seeded at a density of 1 × 10(5) on three different surface configurations: (a) a 2D surface of tissue culture plastic, (b) Matrigel, and (c) a highly porous 3D scaffold fabricated from inert polystyrene. VEGF expression, cell proliferation, and tubulization were assessed using optical microscopy, fluorescence microscopy, 3D confocal microscopy, and SEM imaging (n = 6). Increased VEGF levels were seen in conditioned media harvested from co-cultures of ASCs and b.END-3 on either Matrigel or the 3D matrix. Fluorescence, confocal, SEM, and bioluminescence imaging revealed improved cell proliferation and tubule formation for cells seeded on the 3D polystyrene matrix. Collectively, these data demonstrate that co-culturing ASCs with endothelial cells in a 3D matrix environment enables us to generate prevascularized tissue-engineered constructs. This can potentially help us to surpass the tissue thickness limitations faced by the tissue engineering community today.

  11. 3-dimensional (3D) fabricated polymer based drug delivery systems.

    PubMed

    Moulton, Simon E; Wallace, Gordon G

    2014-11-10

    Drug delivery from 3-dimensional (3D) structures is a rapidly growing area of research. It is essential to achieve structures wherein drug stability is ensured, the drug loading capacity is appropriate and the desired controlled release profile can be attained. Attention must also be paid to the development of appropriate fabrication machinery that allows 3D drug delivery systems (DDS) to be produced in a simple, reliable and reproducible manner. The range of fabrication methods currently being used to form 3D DDSs includes electrospinning (solution and melt), wet-spinning and printing (3-dimensional). The use of these techniques enables production of DDSs from the macro-scale down to the nano-scale. This article reviews progress in these fabrication techniques to form DDSs that possess desirable drug delivery kinetics for a wide range of applications.

  12. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  13. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  14. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  15. ProteinVista: a fast molecular visualization system using Microsoft Direct3D.

    PubMed

    Park, Chan-Yong; Park, Sung-Hee; Park, Soo-Jun; Park, Sun-Hee; Hwang, Chi-Jung

    2008-09-01

    Many tools have been developed to visualize protein and molecular structures. Most high-quality protein visualization tools use the OpenGL graphics library as the 3D graphics system. The performance of 3D graphics hardware has improved rapidly in recent years, and recent high-performance 3D graphics hardware supports the Microsoft Direct3D graphics library more than OpenGL and has become very popular in personal computers (PCs). In this paper, a molecular visualization system termed ProteinVista is proposed. ProteinVista is a well-designed visualization system using the Microsoft Direct3D graphics library. It provides various visualization styles such as the wireframe, stick, ball-and-stick, space-fill, ribbon, and surface model styles, in addition to display options for 3D visualization. As ProteinVista is optimized for recent 3D graphics hardware platforms and uses a geometry instancing technique, its rendering speed is 2.7 times faster than that of other visualization tools.

  16. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken for quantitatively evaluating visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of the previous research has analyzed each modality separately to measure user eye fatigue. This cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of EEG signals, eye BR, FT and SE; third, for combining the values of the four modalities we obtain the optimal weights of the EEG signals BR, FT and SE using a fuzzy system based on quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than other conventional multimodal measurements. Moreover, the credibility of the variations of the eye fatigue using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382
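
    The final fusion step can be pictured with the toy Python sketch below, which is not the paper's fuzzy inference: each modality's before/after change is combined by a weighted sum, with weights here simply proportional to assumed quality scores as a stand-in for the fuzzy-system output; all numbers are illustrative.

      # Minimal sketch of the final fusion step: the per-modality changes in eye
      # fatigue are combined by a weighted sum.  Weights are normalised quality
      # scores standing in for the paper's fuzzy-system output; values are made up.

      def fuse_eye_fatigue(changes, quality_scores):
          """changes / quality_scores: dicts keyed by modality (EEG, BR, FT, SE)."""
          total_quality = sum(quality_scores.values())
          weights = {m: q / total_quality for m, q in quality_scores.items()}
          return sum(weights[m] * changes[m] for m in changes)

      changes = {"EEG": 0.42, "BR": 0.30, "FT": 0.15, "SE": 0.55}   # normalised deltas
      quality = {"EEG": 0.9, "BR": 0.7, "FT": 0.5, "SE": 0.8}
      print(fuse_eye_fatigue(changes, quality))   # single fatigue-variation score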

  17. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays.

    PubMed

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-05-07

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken for quantitatively evaluating visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of the previous research has analyzed each modality separately to measure user eye fatigue. This cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of EEG signals, eye BR, FT and SE; third, for combining the values of the four modalities we obtain the optimal weights of the EEG signals BR, FT and SE using a fuzzy system based on quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than other conventional multimodal measurements. Moreover, the credibility of the variations of the eye fatigue using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size.

  18. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly due to the fact that the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity of the left and the right view on a flat screen instead of having a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  19. 3D optical measuring technologies and systems for industrial applications

    NASA Astrophysics Data System (ADS)

    Chugui, Yu. V.

    2005-06-01

    The results of the R & D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method, and development of a hole-inspection method based on diffractive optical elements. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100 % noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL and RADAR, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic complex COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performances of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  20. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.

  1. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit the computation of the 3D teat position. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  2. 3D Geological Model for "LUSI" - a Deep Geothermal System

    NASA Astrophysics Data System (ADS)

    Sohrabi, Reza; Jansen, Gunnar; Mazzini, Adriano; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Geothermal applications require the correct simulation of flow and heat transport processes in porous media, and many of these media, like deep volcanic hydrothermal systems, host a certain degree of fracturing. This work aims to understand the heat and fluid transport within a newborn sedimentary-hosted geothermal system, termed Lusi, that began erupting in 2006 in East Java, Indonesia. Our goal is to develop conceptual and numerical models capable of simulating multiphase flow within large-scale fractured reservoirs such as the Lusi region, with fractures of arbitrary size, orientation and shape. Additionally, these models can also address a number of other applications, including Enhanced Geothermal Systems (EGS), CO2 sequestration (Carbon Capture and Storage, CCS), and nuclear waste isolation. Fractured systems are ubiquitous, with a wide range of lengths and scales, which makes the development of a general model that can easily handle this complexity difficult. We are developing a flexible continuum approach with an efficient, accurate numerical simulator based on an appropriate 3D geological model representing the structure of the deep geothermal reservoir. Using previous studies, borehole information and seismic data obtained in the framework of the Lusi Lab project (ERC grant n°308126), we present here the first 3D geological model of Lusi. This model is calculated using implicit 3D potential fields or multi-potential fields, depending on the geological context and complexity. This method is based on a geological pile containing the geological history of the area and the relationships between geological bodies, allowing automatic computation of intersections and volume reconstruction. Based on the 3D geological model, we developed a new mesh algorithm to create hexahedral octree meshes to transfer the structural geological information to 3D numerical simulations that quantify Thermal-Hydraulic-Mechanical-Chemical (THMC) physical processes.

  3. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  4. Real-time 3D human capture system for mixed-reality art and entertainment.

    PubMed

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve the image quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaborating system, we also describe an application of the system in art and entertainment, named Magic Land, which is a mixed reality environment where captured avatars of humans and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.
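
    The record names a shape-from-silhouette algorithm as the core of the capture pipeline. The following is a minimal voxel-carving sketch of the general visual-hull idea, assuming calibrated 3x4 projection matrices and binary silhouette masks per camera; it is not the authors' optimized real-time implementation.

    ```python
    # Textbook visual-hull construction by voxel carving: keep a voxel only if it
    # projects inside the silhouette of every camera. All inputs are assumptions:
    # voxel_centres is an (N, 3) array, projections are 3x4 matrices, silhouettes
    # are binary (H, W) masks.
    import numpy as np

    def carve_visual_hull(voxel_centres, projections, silhouettes):
        keep = np.ones(len(voxel_centres), dtype=bool)
        homog = np.hstack([voxel_centres, np.ones((len(voxel_centres), 1))])
        for P, mask in zip(projections, silhouettes):
            uvw = homog @ P.T                        # project all voxels at once
            u = (uvw[:, 0] / uvw[:, 2]).astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).astype(int)
            inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
            in_sil = np.zeros(len(voxel_centres), dtype=bool)
            in_sil[inside] = mask[v[inside], u[inside]] > 0
            keep &= in_sil
        return voxel_centres[keep]
    ```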

  5. Advanced system for 3D dental anatomy reconstruction and 3D tooth movement simulation during orthodontic treatment

    NASA Astrophysics Data System (ADS)

    Monserrat, Carlos; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Albalat, Salvador E.

    1997-05-01

    This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontics planning system (MAGALLANES). We have developed an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatments. Two original techniques are presented: a direct one, in which data are acquired directly from the patient's mouth by means of low-cost 3D digitizers, and a mixed one, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms make it possible to quantify the effect of the orthodontic appliance on tooth movement. The developed techniques have been integrated in a system named MAGALLANES. This original system presents several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.

  6. 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation.

    PubMed

    Yeom, Han-Ju; Kim, Hee-Jae; Kim, Seong-Bok; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Park, Jae-Hyeung

    2015-12-14

    We propose a bar-type three-dimensional holographic head mounted display using two holographic optical elements. Conventional stereoscopic head mounted displays may suffer from eye fatigue because the images presented to each eye are two-dimensional ones, which causes a mismatch between the accommodation and vergence responses of the eye. The proposed holographic head mounted display delivers three-dimensional holographic images to each eye, removing the eye fatigue problem. In this paper, we discuss the configuration of the bar-type waveguide head mounted display and analyze the aberration caused by the non-symmetric diffraction angle of the holographic optical elements which are used as input and output couplers. Pre-distortion of the hologram is also proposed in the paper to compensate for the aberration. The experimental results show that the proposed head mounted display can present three-dimensional see-through holographic images to each eye with correct focus cues.

  7. 3D gel printing for soft-matter systems innovation

    NASA Astrophysics Data System (ADS)

    Furukawa, Hidemitsu; Kawakami, Masaru; Gong, Jin; Makino, Masato; Kabir, M. Hasnat; Saito, Azusa

    2015-04-01

    In the past decade, several high-strength gels have been developed, especially in Japan. These gels are expected to be used as a new kind of engineering material in industrial and medical fields, for example as substitutes for the polyester fibers used in artificial blood vessels. We consider that if various gel materials, including such high-strength gels, become 3D-printable, many new soft and wet systems will be developed, since gels of the most intricate shapes can be printed regardless of their softness and brittleness. Recently we have tried to develop an optical 3D gel printer to realize the free-form formation of gel materials. We named this apparatus the Easy Realizer of Soft and Wet Industrial Materials (SWIM-ER). The SWIM-ER will be applied to print bespoke artificial organs, including artificial blood vessels, which may be used both for surgical training and for actual surgery. The SWIM-ER can print one of the world's strongest gels, called Double-Network (DN) gels, by using UV irradiation through an optical fiber. We are now also developing another type of 3D gel printer for foods, named E-Chef. We believe these new 3D gel printers will broaden the applications of soft-matter gels.

  8. 3D printed nervous system on a chip.

    PubMed

    Johnson, Blake N; Lancaster, Karen Z; Hogue, Ian B; Meng, Fanben; Kong, Yong Lin; Enquist, Lynn W; McAlpine, Michael C

    2016-04-21

    Bioinspired organ-level in vitro platforms are emerging as effective technologies for fundamental research, drug discovery, and personalized healthcare. In particular, models for nervous system research are especially important, due to the complexity of neurological phenomena and challenges associated with developing targeted treatment of neurological disorders. Here we introduce an additive manufacturing-based approach in the form of a bioinspired, customizable 3D printed nervous system on a chip (3DNSC) for the study of viral infection in the nervous system. Micro-extrusion 3D printing strategies enabled the assembly of biomimetic scaffold components (microchannels and compartmented chambers) for the alignment of axonal networks and spatial organization of cellular components. Physiologically relevant studies of nervous system infection using the multiscale biomimetic device demonstrated the functionality of the in vitro platform. We found that Schwann cells participate in axon-to-cell viral spread but appear refractory to infection, exhibiting a multiplicity of infection (MOI) of 1.4 genomes per cell. These results suggest that 3D printing is a valuable approach for the prototyping of a customized model nervous system on a chip technology.

  9. 3D Printed Nervous System on a Chip

    PubMed Central

    Johnson, Blake N.; Lancaster, Karen Z.; Hogue, Ian B.; Meng, Fanben; Kong, Yong Lin; Enquist, Lynn W.; McAlpine, Michael C.

    2015-01-01

    Bioinspired organ-level in vitro platforms are emerging as effective technologies for fundamental research, drug discovery, and personalized healthcare. In particular, models for nervous system research are especially important, due to the complexity of neurological phenomena and challenges associated with developing targeted treatment of neurological disorders. Here we introduce an additive manufacturing-based approach in the form of a bioinspired, customizable 3D printed nervous system on a chip (3DNSC) for the study of viral infection in the nervous system. Micro-extrusion 3D printing strategies enabled the assembly of biomimetic scaffold components (microchannels and compartmented chambers) for the alignment of axonal networks and spatial organization of cellular components. Physiologically relevant studies of nervous system infection using the multiscale biomimetic device demonstrated the functionality of the in vitro platform. We found that Schwann cells participate in axon-to-cell viral spread but appear refractory to infection, exhibiting a multiplicity of infection (MOI) of 1.4 genomes per cell. These results suggest that 3D printing is a valuable approach for the prototyping of a customized model nervous system on a chip technology. PMID:26669842

  10. Advancements in 3D Structural Analysis of Geothermal Systems

    SciTech Connect

    Siler, Drew L; Faulds, James E; Mayhew, Brett; McNamara, David

    2013-06-23

    Robust geothermal activity in the Great Basin, USA is a product of both anomalously high regional heat flow and active fault-controlled extension. Elevated permeability associated with some fault systems provides pathways for circulation of geothermal fluids. Constraining the local-scale 3D geometry of these structures and their roles as fluid flow conduits is crucial in order to mitigate both the costs and risks of geothermal exploration and to identify blind (no surface expression) geothermal resources. Ongoing studies have indicated that much of the robust geothermal activity in the Great Basin is associated with high density faulting at structurally complex fault intersection/interaction areas, such as accommodation/transfer zones between discrete fault systems, step-overs or relay ramps in fault systems, intersection zones between faults with different strikes or different senses of slip, and horse-tailing fault terminations. These conceptualized models are crucial for locating and characterizing geothermal systems in a regional context. At the local scale, however, pinpointing drilling targets and characterizing resource potential within known or probable geothermal areas requires precise 3D characterization of the system. Employing a variety of surface and subsurface data sets, we have conducted detailed 3D geologic analyses of two Great Basin geothermal systems. Using EarthVision (Dynamic Graphics Inc., Alameda, CA) we constructed 3D geologic models of both the actively producing Brady’s geothermal system and a ‘greenfield’ geothermal prospect at Astor Pass, NV. These 3D models allow spatial comparison of disparate data sets in 3D and are the basis for quantitative structural analyses that can aid geothermal resource assessment and be used to pinpoint discrete drilling targets. The relatively abundant data set at Brady’s, ~80 km NE of Reno, NV, includes 24 wells with lithologies interpreted from careful analysis of cuttings and core, a 1

  11. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from existing 3D displays with little effort. IGUANA has collaborated with the open-source gl2ps project to create high-quality vector PostScript output that can produce true vector graphics from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, and even dynamically as a function of object properties, with instant visual feedback to the user.

  12. Effective declutter of complex flight displays using stereoptic 3-D cueing

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Nold, Dean E.

    1994-01-01

    The application of stereo technology to new, integrated pictorial display formats has been effective in situational awareness enhancements, and stereo has been postulated to be effective for the declutter of complex informational displays. This paper reports a full-factorial workstation experiment performed to verify the potential benefits of stereo cueing for the declutter function in a simulated tracking task. The experimental symbology was designed similar to that of a conventional flight director, although the format was an intentionally confused presentation that resulted in a very cluttered dynamic display. The subject's task was to use a hand controller to keep a tracking symbol, an 'X', on top of a target symbol, another X, which was being randomly driven. In the basic tracking task, both the target symbol and the tracking symbol were presented as red X's. The presence of color coding was used to provide some declutter, thus making the task more reasonable to perform. For this condition, the target symbol was coded red, and the tracking symbol was coded blue. Noise conditions, or additional clutter, were provided by the inclusion of randomly moving, differently colored X symbols. Stereo depth, which was hypothesized to declutter the display, was utilized by placing any noise in a plane in front of the display monitor, the tracking symbol at screen depth, and the target symbol behind the screen. The results from analyzing the performances of eight subjects revealed that the stereo presentation effectively offsets the cluttering effects of both the noise and the absence of color coding. The potential of stereo cueing to declutter complex informational displays has therefore been verified; this ability to declutter is an additional benefit from the application of stereoptic cueing to pictorial flight displays.

  13. Seamless tiled display system

    NASA Technical Reports Server (NTRS)

    Dubin, Matthew B. (Inventor); Larson, Brent D. (Inventor); Kolosowsky, Aleksandra (Inventor)

    2006-01-01

    A modular and scalable seamless tiled display apparatus includes multiple display devices, a screen, and multiple lens assemblies. Each display device is subdivided into multiple sections, and each section is configured to display a sectional image. One of the lens assemblies is optically coupled to each of the sections of each of the display devices to project the sectional image displayed on that section onto the screen. The multiple lens assemblies are configured to merge the projected sectional images to form a single tiled image. The projected sectional images may be merged on the screen by magnifying and shifting the images in an appropriate manner. The magnification and shifting of these images eliminates any visual effect on the tiled display that may result from dead-band regions defined between each pair of adjacent sections on each display device, and due to gaps between multiple display devices.

  14. A 3D Split Manufacturing Approach to Trustworthy System Development

    DTIC Science & Technology

    2012-12-01

  15. 3D temperature field reconstruction using ultrasound sensing system

    NASA Astrophysics Data System (ADS)

    Liu, Yuqian; Ma, Tong; Cao, Chengyu; Wang, Xingwei

    2016-04-01

    3D temperature field reconstruction is of practical interest to the power, transportation and aviation industries, and it also opens up opportunities for real-time control or optimization of high-temperature fluid or combustion processes. In our paper, a new distributed optical fiber sensing system consisting of a series of elements will be used to generate and receive acoustic signals. This system is the first active temperature field sensing system that features the advantages of optical fiber sensors (distributed sensing capability) and acoustic sensors (non-contact measurement). Signals along multiple paths will be measured simultaneously, enabled by a code division multiple access (CDMA) technique. A proposed Gaussian Radial Basis Functions (GRBF)-based approach can then approximate the temperature field as a finite summation of space-dependent basis functions and time-dependent coefficients. The travel time of the acoustic signals depends on the temperature of the medium. On this basis, the Gaussian functions are integrated along a number of paths which are determined by the number and distribution of sensors. The inverse problem of estimating the unknown parameters of the Gaussian functions can be solved from the measured times-of-flight (ToF) of the acoustic waves and the lengths of the propagation paths using the recursive least squares method (RLS). The simulation results show an approximation error of less than 2% in 2D and 5% in 3D, respectively. This demonstrates the feasibility and efficiency of our proposed 3D temperature field reconstruction mechanism.
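
    As a rough illustration of the reconstruction idea, the sketch below expands the field in fixed Gaussian radial basis functions, numerically integrates each basis function along straight propagation paths to build a design matrix, and solves for the coefficients with a batch least-squares fit standing in for the recursive least-squares of the paper. The centres, widths and path geometry are assumptions, not values from the record.

    ```python
    # Minimal sketch: field(x) ~= sum_i a_i * phi_i(x), with the measured path
    # integrals y_k ~= integral of field along path k (derived from each ToF).
    import numpy as np

    def gaussian_rbf(x, centre, width):
        """Isotropic Gaussian basis evaluated at points x with shape (N, 3)."""
        return np.exp(-np.sum((x - centre) ** 2, axis=-1) / (2.0 * width ** 2))

    def path_integral_matrix(paths, centres, width, n_samples=200):
        """Row k, column i holds the numerical line integral of basis i along path k.
        Each path is a pair (p0, p1) of 3D endpoints (numpy arrays)."""
        A = np.zeros((len(paths), len(centres)))
        for k, (p0, p1) in enumerate(paths):
            t = np.linspace(0.0, 1.0, n_samples)[:, None]
            pts = (1 - t) * p0 + t * p1                  # sample points on the segment
            seg = np.linalg.norm(p1 - p0) / (n_samples - 1)
            for i, c in enumerate(centres):
                A[k, i] = np.sum(gaussian_rbf(pts, c, width)) * seg
        return A

    def reconstruct(paths, y, centres, width):
        """Solve A a ~= y for the basis coefficients (batch least squares)."""
        A = path_integral_matrix(paths, centres, width)
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs   # field(x) ~= sum_i coeffs[i] * gaussian_rbf(x, centres[i], width)
    ```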

  16. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with the spatial data and query processing capabilities of geographic information systems, multimedia database functionality and a graphical modeling infrastructure. A new data element, called Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling, and a smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independently of the amount of spatial data and the image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  17. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  18. A semi-automatic 3D laser scan system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2009-11-01

    Digital 3D models are now used everywhere, from the traditional fields of industrial design and artistic design to heritage conservation. Although laser scanning is very useful for obtaining dense samples of objects, such an instrument is currently expensive and always needs to be connected to a computer with a stable power supply, which prevents its use for fieldwork. In this paper, a new semi-automatic 3D laser scan method is proposed using two line laser sources. The planes projected from the laser sources are orthogonal; one is fixed relative to the camera, and the other can be rotated about a settled axis. Before scanning, the system must be calibrated, from which the parameters of the camera, the position of the fixed laser plane and the settled axis are obtained. In the scanning process, the fixed laser plane and the camera form a conventional structured-light system, and the 3D positions of the intersection curves of the fixed laser plane with the object can be computed. The other laser plane is rotated manually or mechanically, and its position can be determined from the cross point where it intersects the fixed laser plane on the object, so the coordinates of the sweeping points can be obtained. The new system can be used without a computer (the data can be processed later), which makes it suitable for fieldwork. A scanning case is given at the end.
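
    The fixed-plane part of the scanner is conventional structured-light triangulation: a pixel on the detected laser stripe is back-projected through the calibrated camera and intersected with the known laser plane. A minimal sketch of that intersection step follows, assuming a pinhole camera with intrinsic matrix K, no lens distortion, the camera frame as the world frame, and the plane given as n·X = d; the calibration numbers are placeholders, not values from the paper.

    ```python
    # Ray-plane triangulation for a calibrated camera and a known laser plane.
    import numpy as np

    def triangulate_on_laser_plane(pixel, K, plane_n, plane_d):
        """Return the 3D point where the viewing ray of `pixel` hits the laser plane."""
        uv1 = np.array([pixel[0], pixel[1], 1.0])
        ray = np.linalg.inv(K) @ uv1          # ray direction through the camera centre
        denom = plane_n @ ray
        if abs(denom) < 1e-9:
            raise ValueError("viewing ray is (nearly) parallel to the laser plane")
        s = plane_d / denom                   # scale so that n . (s * ray) = d
        return s * ray                        # 3D point in camera coordinates

    # Example with assumed calibration values (not from the paper):
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    point = triangulate_on_laser_plane((350.0, 260.0), K, np.array([0.0, 0.7, 0.7]), 0.5)
    print(point)
    ```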

  19. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autosteroscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye to chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.

  20. 3-D Displays Perceptual Research and Applications to Military Systems

    DTIC Science & Technology

    1982-09-30

  1. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
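
    For orientation, the range measurement of an FMCW laser radar follows the textbook chirp relation below; this expression is assumed as general background and is not quoted from the record, and the numbers in the example are placeholders.

    ```python
    # FMCW range from beat frequency: for a linear chirp of bandwidth B over
    # duration T, a beat frequency f_b corresponds to range R = c * f_b * T / (2 * B).
    C = 299_792_458.0  # speed of light, m/s

    def fmcw_range(beat_hz, bandwidth_hz, chirp_duration_s):
        return C * beat_hz * chirp_duration_s / (2.0 * bandwidth_hz)

    print(fmcw_range(beat_hz=2.0e6, bandwidth_hz=1.0e12, chirp_duration_s=1.0e-3))  # metres
    ```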

  2. Development of a color 3D display visible to plural viewers at the same time without special glasses by using a ray-regenerating method

    NASA Astrophysics Data System (ADS)

    Hamagishi, Goro; Ando, Takahisa; Higashino, Masahiro; Yamashita, Atsuhiro; Mashitani, Ken; Inoue, Masutaka; Kishimoto, Shun-Ichi; Kobayashi, Tetsuro

    2002-05-01

    We have newly developed several new autostereoscopic 3D displays adopting a ray-regenerating method. The method was originally invented at Osaka University in 1997. We adopted this method with an LCD. The display has a very simple construction: it consists of an LC panel with a very large number of pixels and many small light sources positioned behind the LC panel. We have examined the following new technologies: 1) optimum design of the optical system; 2) a suitable construction to realize a very large number of pixels; 3) a highly bright backlight system with an optical fiber array to compensate for the low lighting efficiency. 3D displays having a wide viewing area and visible to plural viewers were realized, but the cross-talk images appeared more than we expected. By changing the construction of the system to reduce the diffusing factors of the generated rays, the cross-talk images were reduced dramatically. Within the limitation of the pixel count of the LCD, it is desirable to increase the number of pinholes to realize a realistic 3D image. This research formed a link in the chain of the national project by NEDO (New Energy and Industrial Technology Development Organization) in Japan.

  3. Cytoplasmic bacteriophage display system

    DOEpatents

    Studier, F.W.; Rosenberg, A.H.

    1998-06-16

    Disclosed are display vectors comprising DNA encoding a portion of a structural protein from a cytoplasmic bacteriophage, joined covalently to a protein or peptide of interest. Exemplified are display vectors wherein the structural protein is the T7 bacteriophage capsid protein. More specifically, in the exemplified display vectors the C-terminal amino acid residue of the portion of the capsid protein is joined to the N-terminal residue of the protein or peptide of interest. The portion of the T7 capsid protein exemplified comprises an N-terminal portion corresponding to form 10B of the T7 capsid protein. The display vectors are useful for high copy number display or lower copy number display (with larger fusion). Compositions of the type described herein are useful in connection with methods for producing a virus displaying a protein or peptide of interest. 1 fig.

  4. Cytoplasmic bacteriophage display system

    DOEpatents

    Studier, F. William; Rosenberg, Alan H.

    1998-06-16

    Disclosed are display vectors comprising DNA encoding a portion of a structural protein from a cytoplasmic bacteriophage, joined covalently to a protein or peptide of interest. Exemplified are display vectors wherein the structural protein is the T7 bacteriophage capsid protein. More specifically, in the exemplified display vectors the C-terminal amino acid residue of the portion of the capsid protein is joined to the N-terminal residue of the protein or peptide of interest. The portion of the T7 capsid protein exemplified comprises an N-terminal portion corresponding to form 10B of the T7 capsid protein. The display vectors are useful for high copy number display or lower copy number display (with larger fusion). Compositions of the type described herein are useful in connection with methods for producing a virus displaying a protein or peptide of interest.

  5. Digital acquisition system for high-speed 3-D imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji

    1997-11-01

    High-speed digital three-dimensional (3-D) imagery is possible using multiple independent charge-coupled device (CCD) cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras, providing versatility in configuration and image acquisition. By aligning the cameras in nearly coincident lines-of-sight, a sixteen-frame two-dimensional (2-D) sequence can be captured. The delays can be individually adjusted to yield a greater number of acquired frames during the more rapid segments of the event. Additionally, individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. An alternative alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images may be captured. In the first alignment scheme the camera lines-of-sight cannot be made precisely coincident. Thus representation of the data as a monocular sequence introduces the issue of independent camera coordinate registration with the real scene. This issue arises more significantly using the stereo pair method to reconstruct quantitative 3-D spatial information of the event as a function of time. The principal development here will be the derivation and evaluation of a solution transform and its inverse for the digital data which will yield a 3-D spatial mapping as a function of time.

  6. Intersecting D3-D3′-brane system at finite temperature

    NASA Astrophysics Data System (ADS)

    Cottrell, William; Hanson, James; Hashimoto, Akikazu; Loveridge, Andrew; Pettengill, Duncan

    2017-02-01

    We analyze the dynamics of the intersecting D3-D3′-brane system overlapping in 1+1 dimensions, in a holographic treatment where the N D3-branes are manifested as an anti-de Sitter-Schwarzschild geometry and the D3′-brane is treated as a probe. We extract the thermodynamic equation of state from the set of embedding solutions, and analyze the stability at the perturbative and the nonperturbative level. We review a systematic procedure to resolve local instabilities and multivaluedness in the equations of state based on classic ideas of convexity in the microcanonical ensemble. We then identify a runaway behavior which was not noticed previously for this system.

  7. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process for facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment of facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  8. An approach to 3D model fusion in GIS systems and its application in a future ECDIS

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Zhao, Depeng; Pan, Mingyang

    2016-04-01

    Three-dimensional (3D) computer graphics technology is widely used in various areas and causes profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve the cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). It is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information Systems (ECDIS). The fusion approach we propose is a common method and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can be from any source. They may be stored in the local cache or retrieved from Internet, or may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
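
    Step (1) of the pipeline listed above, pose adjustment by PCA, can be illustrated with the minimal sketch below: centre the model's vertices and rotate them onto their principal axes. The (N, 3) vertex-array input, descending-variance axis ordering and sign convention are assumptions for illustration, not the paper's exact procedure.

    ```python
    # PCA-based canonical pose: express the vertex cloud in its principal axes.
    import numpy as np

    def pca_pose_adjust(vertices: np.ndarray) -> np.ndarray:
        """Centre the vertices and rotate them so the principal axes align with x, y, z."""
        centred = vertices - vertices.mean(axis=0)
        cov = np.cov(centred, rowvar=False)            # 3x3 covariance of the vertex cloud
        eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
        order = np.argsort(eigvals)[::-1]              # largest variance first
        R = eigvecs[:, order]
        if np.linalg.det(R) < 0:                       # keep a right-handed rotation
            R[:, -1] *= -1
        return centred @ R                             # vertices expressed in principal axes
    ```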

  9. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright

    PubMed Central

    Kane, David; Held, Robert T.; Banks, Martin S.

    2012-01-01

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer’s inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer’s head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer’s head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays. PMID:24058723
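
    The geometry implied by the abstract can be made explicit with a simple decomposition (an assumption of pure rotation geometry, not a formula quoted from the paper): a purely horizontal on-screen disparity d, viewed with the head, or equivalently the display, rolled by an angle theta, splits into a horizontal component d·cos(theta) and an induced vertical component d·sin(theta) at the eyes.

    ```python
    # Decompose an on-screen horizontal disparity under head (or display) roll.
    import math

    def disparity_at_eyes(d_onscreen, roll_deg):
        theta = math.radians(roll_deg)
        return d_onscreen * math.cos(theta), d_onscreen * math.sin(theta)

    print(disparity_at_eyes(1.0, 20.0))  # (horizontal, induced vertical) in the units of d
    ```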

  10. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright.

    PubMed

    Kane, David; Held, Robert T; Banks, Martin S

    2012-02-09

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer's inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer's head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer's head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays.

  11. The crystal structure of the dimeric colicin M immunity protein displays a 3D domain swap.

    PubMed

    Usón, Isabel; Patzer, Silke I; Rodríguez, Dayté Dayana; Braun, Volkmar; Zeth, Kornelius

    2012-04-01

    Bacteriocins are proteins secreted by many bacterial cells to kill related bacteria of the same niche. To avoid their own suicide through reuptake of secreted bacteriocins, these bacteria protect themselves by co-expression of immunity proteins in the compartment of colicin destination. In Escherichia coli the colicin M (Cma) is inactivated by the interaction with the Cma immunity protein (Cmi). We have crystallized and solved the structure of Cmi at a resolution of 1.95 Å by the recently developed ab initio phasing program ARCIMBOLDO. The monomeric structure of the mature 10 kDa protein comprises a long N-terminal α-helix and a four-stranded C-terminal β-sheet. Dimerization of this fold is mediated by an extended interface of hydrogen bond interactions between the α-helix and the four-stranded β-sheet of the symmetry-related molecule. Two intermolecular disulfide bridges covalently connect this dimer to further lock the complex. The Cmi protein represents an example of 3D domain swapping stalled through physical linkage. The dimer is a highly charged complex with a significant surplus of negative charges presumably responsible for interactions with Cma. Dimerization of Cmi was also demonstrated to occur in vivo. Although the Cmi-Cma complex is unique among bacteria, the general fold of Cmi is representative of a class of YebF-like proteins which are known to be secreted into the external medium by some Gram-negative bacteria.

  12. 3-d Periodic Packaging: Sodalite, a Model System

    DTIC Science & Technology

    1992-05-15

    ... assembly of confined atomic and molecular arrays. Sodalite, one of the simplest zeolite analogue structures with a 60-atom cage, can be synthesized with ... The structure of both the frameworks and the clusters within the cages of sodalite structural analogues can be precisely determined. In addition to new

  13. 3-D Periodic Packaging: Sodalite, a Model System

    DTIC Science & Technology

    1992-05-15

    ... considerable latitude in the assembly of confined atomic and molecular arrays. Sodalite, one of the simplest zeolite analogue structures with a 60-atom cage ... framework electric field. The structure of both the frameworks and the clusters within the cages of sodalite structural analogues can be precisely

  14. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to observe the dose intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction is developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with a Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion of MRI and ultrasound images. Applying the least squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system is validated on eight patients with prostate cancer. The navigation has passed precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method is superior in preserving the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure the dose distribution encompasses the nidus and avoids injury to healthy tissues. During navigation, surgeons can observe instrument coordinates in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and
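
    The least-squares coordinate registration mentioned in the Methods can be illustrated by a standard Kabsch-style rigid fit between corresponding fiducial points in the model frame and the tracker (patient) frame; the sketch below assumes known point correspondences and is not the system's actual implementation.

    ```python
    # Least-squares rigid registration (Kabsch): find R, t minimizing
    # sum ||R @ m_i + t - p_i||^2 over corresponding model/patient points.
    import numpy as np

    def rigid_registration(model_pts, patient_pts):
        """model_pts, patient_pts: (N, 3) arrays of corresponding points."""
        m_mean = model_pts.mean(axis=0)
        p_mean = patient_pts.mean(axis=0)
        H = (model_pts - m_mean).T @ (patient_pts - p_mean)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        t = p_mean - R @ m_mean
        return R, t
    ```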

  15. DVE flight test results of a sensor enhanced 3D conformal pilot support system

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick

    2015-06-01

    The paper presents results and findings of flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Grounds. During the flight tests ladar information was fused with a priori DB knowledge in real-time and 3D conformal symbology was generated for display on an HMD. The test flights included low level flights as well as numerous brownout landings.

  16. Large LED screen 3D television system without eyewear

    NASA Astrophysics Data System (ADS)

    Nishida, Nobuo; Yamamoto, Hirotsugu; Hayasaki, Yoshio

    2004-10-01

    Since the development of high-brightness blue and green LEDs, the use of outdoor commercial LED displays has been increasing. Because of their high brightness, good visibility, and long-term durability to the weather, LED displays are a preferred technology for outdoor installations such as stadiums, street advertising, and billboards. This paper deals with a large stereoscopic full-color LED display by use of a parallax barrier. We discuss optimization of the viewing area, which depends on LED arrangements. An enlarged viewing area has been demonstrated by using a 3-in-1 chip LED panel that has wider black regions than ordinary LED lamp cluster panels. We have developed a real-time measurement system of a viewer's position and utilized the measurement system for evaluation of performance of the different designs of stereoscopic LED displays, including conventional designs to provide multiple perspective images and designs to eliminate pseudoscopic viewing areas. In order to show real-world images, it is necessary to capture stereo-images, to process them, and to show in real-time. We have developed an active binocular camera and demonstrated the real-time display of stereoscopic movies and real-time control of convergence.
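
    For context, the basic two-view parallax-barrier geometry follows from similar triangles; the relations below are textbook background assumed here, not the specific optimized design of the record, and the pitch, eye-separation and viewing-distance values are placeholders.

    ```python
    # Two-view parallax barrier: for per-view column pitch p, interocular distance e,
    # and design viewing distance D from the panel, similar triangles give the
    # barrier-to-panel gap g = p*D/(e+p) and the barrier pitch b = 2*p*e/(e+p).
    def parallax_barrier_design(p_mm, e_mm=65.0, D_mm=3000.0):
        g = p_mm * D_mm / (e_mm + p_mm)        # gap between barrier and pixel plane
        b = 2.0 * p_mm * e_mm / (e_mm + p_mm)  # pitch of the barrier slits
        return g, b

    print(parallax_barrier_design(p_mm=3.0))   # e.g. a coarse LED panel viewed at 3 m
    ```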

  17. 3D in vitro modeling of the central nervous system

    PubMed Central

    Hopkins, Amy M.; DeSimone, Elise; Chwalek, Karolina; Kaplan, David L.

    2015-01-01

    There are currently more than 600 diseases characterized as affecting the central nervous system (CNS) which inflict neural damage. Unfortunately, few of these conditions have effective treatments available. Although significant efforts have been put into developing new therapeutics, drugs which were promising in the developmental phase have high attrition rates in late stage clinical trials. These failures could be circumvented if current 2D in vitro and in vivo models were improved. 3D, tissue-engineered in vitro systems can address this need and enhance clinical translation through two approaches: (1) bottom-up, and (2) top-down (developmental/regenerative) strategies to reproduce the structure and function of human tissues. Critical challenges remain including biomaterials capable of matching the mechanical properties and extracellular matrix (ECM) composition of neural tissues, compartmentalized scaffolds that support heterogeneous tissue architectures reflective of brain organization and structure, and robust functional assays for in vitro tissue validation. The unique design parameters defined by the complex physiology of the CNS for construction and validation of 3D in vitro neural systems are reviewed here. PMID:25461688

  18. 3D in vitro modeling of the central nervous system.

    PubMed

    Hopkins, Amy M; DeSimone, Elise; Chwalek, Karolina; Kaplan, David L

    2015-02-01

    There are currently more than 600 diseases characterized as affecting the central nervous system (CNS) which inflict neural damage. Unfortunately, few of these conditions have effective treatments available. Although significant efforts have been put into developing new therapeutics, drugs which were promising in the developmental phase have high attrition rates in late stage clinical trials. These failures could be circumvented if current 2D in vitro and in vivo models were improved. 3D, tissue-engineered in vitro systems can address this need and enhance clinical translation through two approaches: (1) bottom-up, and (2) top-down (developmental/regenerative) strategies to reproduce the structure and function of human tissues. Critical challenges remain including biomaterials capable of matching the mechanical properties and extracellular matrix (ECM) composition of neural tissues, compartmentalized scaffolds that support heterogeneous tissue architectures reflective of brain organization and structure, and robust functional assays for in vitro tissue validation. The unique design parameters defined by the complex physiology of the CNS for construction and validation of 3D in vitro neural systems are reviewed here.

  19. Handheld camera 3D modeling system using multiple reference panels

    NASA Astrophysics Data System (ADS)

    Fujimura, Kouta; Oue, Yasuhiro; Terauchi, Tomoya; Emi, Tetsuichi

    2002-03-01

    A novel 3D modeling system in which a target object is easily captured and modeled by using a hand-held camera with several reference panels is presented in this paper. The reference panels are designed so that the camera position can be obtained and the panels can be discriminated from each other. A conventional 3D modeling system using a single reference panel has several restrictions regarding the target object, specifically its size and location. Our system uses multiple reference panels, which are set around the target object, to remove these restrictions. The main features of this system are as follows: 1) The whole shape and photo-realistic textures of the target object can be digitized from several still images or a movie captured using a hand-held camera, with each camera location calculated using the reference panels. 2) Our system can be provided as a software-only product; there are no special hardware requirements, not even for the reference panels, because they can be printed from image files or software. 3) The system can be applied to digitize larger objects. In the experiments, we developed and used an interactive region selection tool to detect the silhouette in each image instead of using the chroma-keying method. We have tested our system with a toy object. The calculation time is about 10 minutes (excluding capturing the images and extracting the silhouettes using our tool) on a personal computer with a Pentium-III processor (600 MHz) and 320 MB memory. However, it depends on how complex the images are and how many images are used. Our future plan is to evaluate the system with various kinds of objects, specifically large ones in outdoor environments.
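
    Recovering the hand-held camera pose from a detected reference panel is, in essence, a planar PnP problem. The sketch below uses OpenCV's standard solvePnP for that step; the panel corner coordinates, detected image points and calibration matrix are placeholder values, and this is not the authors' own implementation.

    ```python
    # Camera pose from one coded reference panel via PnP (placeholder data).
    import numpy as np
    import cv2

    panel_corners_3d = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]],
                                dtype=np.float64)        # panel corners in metres
    image_points = np.array([[310, 250], [420, 255], [415, 360], [305, 352]],
                            dtype=np.float64)            # detected corners in pixels
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

    ok, rvec, tvec = cv2.solvePnP(panel_corners_3d, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix: panel frame -> camera frame
    camera_position = -R.T @ tvec     # camera centre expressed in the panel frame
    ```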

  20. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  1. Inertial Pocket Navigation System: Unaided 3D Positioning

    PubMed Central

    Munoz Diaz, Estefania

    2015-01-01

    Inertial navigation systems use dead-reckoning to estimate the pedestrian's position. There are two types of pedestrian dead-reckoning: the strapdown algorithm and the step-and-heading approach. Unlike the strapdown algorithm, which consists of the double integration of the three orthogonal accelerometer readings, the step-and-heading approach lacks a vertical displacement estimate. We propose the first step-and-heading approach based on unaided inertial data that solves 3D positioning. We present a step detector for steps up and down and a novel vertical displacement estimator. Our navigation system uses a sensor placed in the front pocket of the trousers, a likely location for a smartphone. The proposed algorithms are based on the opening angle of the leg, or pitch angle. We analyzed our step detector and compared it with the state of the art, together with our previously proposed step-length estimator. Lastly, we assessed our vertical displacement estimator in a real-world scenario. We found that our algorithms outperform the step-and-heading algorithms in the literature and solve 3D positioning using unaided inertial data. Additionally, we found that with the pitch angle, five activities are distinguishable: standing, sitting, walking, walking up stairs and walking down stairs. This information complements the pedestrian location and is of interest for applications such as elderly care. PMID:25897501
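
    The record does not spell out the step-detection algorithm; purely as a hedged sketch of the general idea of detecting steps from the leg pitch (opening) angle measured by a pocket-worn sensor, one could look for sufficiently large, well-separated peaks in the pitch signal. All thresholds below are illustrative assumptions, not the authors' values:

      # Minimal sketch (not the authors' algorithm): detect steps as peaks in the
      # leg pitch angle measured by a pocket-worn inertial sensor. Thresholds are
      # illustrative assumptions.
      import numpy as np

      def detect_steps(pitch_deg, fs_hz=100.0, min_swing_deg=10.0, min_gap_s=0.4):
          """Return sample indices of detected steps from a pitch-angle series."""
          steps = []
          last_step = -np.inf
          for i in range(1, len(pitch_deg) - 1):
              is_peak = pitch_deg[i] >= pitch_deg[i - 1] and pitch_deg[i] > pitch_deg[i + 1]
              big_enough = pitch_deg[i] > min_swing_deg        # leg swung far enough forward
              spaced = (i - last_step) / fs_hz > min_gap_s     # refractory period between steps
              if is_peak and big_enough and spaced:
                  steps.append(i)
                  last_step = i
          return steps

      # Example with a synthetic 1 Hz walking-like pitch oscillation.
      t = np.arange(0, 5, 0.01)
      pitch = 15.0 * np.sin(2 * np.pi * 1.0 * t)
      print(len(detect_steps(pitch)))  # roughly one detection per gait cycle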

  2. Developmental neurotoxic effects of Malathion on 3D neurosphere system

    PubMed Central

    Salama, Mohamed; Lotfy, Ahmed; Fathy, Khaled; Makar, Maria; El-emam, Mona; El-gamal, Aya; El-gamal, Mohamed; Badawy, Ahmad; Mohamed, Wael M.Y.; Sobh, Mohamed

    2015-01-01

    Developmental neurotoxicity (DNT) refers to the toxic effects induced by various chemicals on the brain during early childhood, a period in which the human brain is particularly vulnerable. Some toxicants have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty, because available animal models do not cover the whole spectrum of CNS developmental periods. A novel alternative method that can overcome most of the limitations of conventional techniques is the 3D neurosphere system. This in-vitro system recapitulates many of the changes that occur during brain development, making it an ideal model for predicting developmental neurotoxic effects. In the present study we verified the possible DNT of Malathion, an organophosphate pesticide with suggested neurotoxic effects on nursing children. Three doses of Malathion (0.25 μM, 1 μM and 10 μM) were applied to cultured neurospheres for a period of 14 days. Malathion was found to affect the proliferation, differentiation and viability of neurospheres; these effects were positively correlated with dose and exposure time. This study confirms the DNT effects of Malathion in the 3D neurosphere model. Further epidemiological studies will be needed to link these results to human exposure and effects data. PMID:27054080

  3. Magnetism in a graphene-4f-3d hybrid system

    NASA Astrophysics Data System (ADS)

    Huttmann, Felix; Klar, David; Atodiresei, Nicolae; Schmitz-Antoniak, Carolin; Smekhova, Alevtina; Martínez-Galera, Antonio J.; Caciuc, Vasile; Bihlmayer, Gustav; Blügel, Stefan; Michely, Thomas; Wende, Heiko

    2017-02-01

    We create an interface of graphene with a metallic and magnetic support that leaves its electronic structure largely intact. This is achieved by exposing epitaxial graphene on ferromagnetic thin films of Co and Ni to vapor of the rare earth metal Eu at elevated temperatures, resulting in the intercalation of an Eu monolayer in between graphene and its substrate. The system is atomically well defined, with the Eu monolayer forming a (√3 × √3)R30° superstructure with respect to the graphene lattice. Thereby, we avoid the strong hybridization with the (Ni,Co) substrate 3d states that otherwise drastically modifies the electronic structure of graphene. This picture is supported by our x-ray absorption spectroscopy measurements, which show that after Eu intercalation the empty 2p states of the C atoms resemble more closely those measured for graphite, in contrast to graphene directly bound to 3d ferromagnetic substrates. We use x-ray magnetic circular dichroism at the Co and Ni L2,3 and Eu M4,5 edges as an element-specific probe to investigate magnetism in these systems. An antiferromagnetic coupling between Eu and Co/Ni moments is found, which is so strong that a magnetic moment of the Eu layer can be detected at room temperature. Density functional theory calculations confirm the antiferromagnetic coupling and provide an atomic insight into the magnetic coupling mechanism.

  4. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  5. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  6. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    NASA Astrophysics Data System (ADS)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving an enhanced user experience (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment seen through the display can be effectively enhanced with the adoption of interactive AR technology, and such enhancement can be beneficial for digital learning systems. There are existing research works on AR-based e-learning systems; however, none of them focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented-reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A practical implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.

  7. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), an SVS serves to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore, all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annexes 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annexes 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  8. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner, eye-center, and iso-center cues to improve pupil detection. Optical flow information is used for eye tracking: we develop a robust eye tracking system that integrates eye detection and optical-flow-based image tracking. In addition, we incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.
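
    The paper's exact pipeline is not reproduced here; the following is only a generic sketch of the kind of detection-plus-optical-flow tracking the abstract describes, using a Haar-cascade eye detector and pyramidal Lucas-Kanade tracking from OpenCV. The video file name and all parameters are assumptions made for illustration:

      # Illustrative sketch only (not the paper's pipeline): detect an eye region with a
      # Haar cascade, pick corner features inside it, then track them frame to frame with
      # pyramidal Lucas-Kanade optical flow.
      import cv2
      import numpy as np

      cap = cv2.VideoCapture("face_video.mp4")          # hypothetical input clip
      eye_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_eye.xml")

      ok, frame = cap.read()
      prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      eyes = eye_cascade.detectMultiScale(prev_gray, 1.1, 5)
      x, y, w, h = eyes[0]                              # take the first detected eye region
      mask = np.zeros_like(prev_gray)
      mask[y:y + h, x:x + w] = 255
      pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20,
                                    qualityLevel=0.01, minDistance=5, mask=mask)

      while pts is not None and len(pts) > 0:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # Track the eye feature points into the new frame.
          pts_new, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                        winSize=(21, 21), maxLevel=3)
          pts = pts_new[status.ravel() == 1].reshape(-1, 1, 2)
          prev_gray = gray
          if len(pts):
              print("eye centre estimate:", pts.reshape(-1, 2).mean(axis=0))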

  9. The 3-D vision system integrated dexterous hand

    NASA Technical Reports Server (NTRS)

    Luo, Ren C.; Han, Youn-Sik

    1989-01-01

    Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons, resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips and a two-jointed eye finger with a cross-shaped laser-beam-emitting diode in its distal part. The two non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.

  10. Hybrid additive manufacturing of 3D electronic systems

    NASA Astrophysics Data System (ADS)

    Li, J.; Wasley, T.; Nguyen, T. T.; Ta, V. D.; Shephard, J. D.; Stringer, J.; Smith, P.; Esenturk, E.; Connaughton, C.; Kay, R.

    2016-10-01

    A novel hybrid additive manufacturing (AM) technology combining digital light projection (DLP) stereolithography (SL) with 3D micro-dispensing alongside conventional surface-mount packaging is presented in this work. This technology overcomes the inherent limitations of the individual AM processes and integrates seamlessly with conventional packaging processes to enable the deposition of multiple materials. This facilitates the creation of bespoke end-use products with complex 3D geometry and multi-layer embedded electronic systems. Through a combination of four-point probe measurement and non-contact focus-variation microscopy, it was identified that there was no obvious adverse effect of the DLP SL embedding process on the electrical conductivity of the printed conductors. The resistivity remained below 4 × 10⁻⁴ Ω·cm before and after DLP SL embedding when cured at 100 °C for 1 h. The mechanical strength of SL specimens with thick polymerized layers was also characterized through tensile testing. It was found that the polymerization thickness should be minimised (less than 2 mm) to maximise the bonding strength. As a demonstrator, a polymer pyramid with embedded triple-layer 555 LED blinking circuitry was successfully fabricated to prove the technical viability.

  11. Modeling moving systems with RELAP5-3D

    SciTech Connect

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; Kyle, Matt R.

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occurs in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating craft during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft; however, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
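
    The modified RELAP5-3D field equations themselves are not given in this record; as a reminder of the standard kinematics involved, the apparent body force per unit mass that must supplement gravity in a rotating, translating frame can be written in textbook form (generic notation, not the code's) as

      \mathbf{f}_{\mathrm{frame}} = -\mathbf{a}_0 \;-\; \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r}) \;-\; 2\,\boldsymbol{\Omega}\times\mathbf{v}_{\mathrm{rel}} \;-\; \dot{\boldsymbol{\Omega}}\times\mathbf{r},

    where a_0 is the translational acceleration of the frame origin, Ω its angular velocity, r the position and v_rel the fluid velocity in the moving frame; the four terms are the translational, centrifugal, Coriolis and Euler contributions, respectively.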

  12. Modeling moving systems with RELAP5-3D

    DOE PAGES

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occurs in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating craft during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft; however, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.

  13. 3D Additive Construction with Regolith for Surface Systems

    NASA Technical Reports Server (NTRS)

    Mueller, Robert P.

    2014-01-01

    Planetary surface exploration on asteroids, the Moon, Mars and the Martian moons will require the stabilization of loose, fine, dusty regolith to avoid the effects of vertical lander rocket plume impingement, to keep abrasive and harmful dust from being lofted, and to permit dust-free operations. In addition, the same regolith stabilization process can be used for three-dimensional (3D) printing and additive construction techniques by repeating the 2D stabilization in many vertical layers. This will allow in-situ construction with regolith so that materials will not have to be transported from Earth. Recent work in the NASA Kennedy Space Center (KSC) Surface Systems Office (NE-S) Swamp Works and at the University of Southern California (USC) under two NASA Innovative Advanced Concepts (NIAC) awards has shown promising results with regolith (crushed basalt rock) materials for in-situ heat shields, bricks, landing/launch pads, berms, roads, and other structures that could be fabricated using regolith that is sintered or mixed with a polymer binder. The technical goals and objectives of this project are to prove the feasibility of 3D-printing additive construction using planetary regolith simulants and to show that the resulting structures have structural integrity and practical applications in space exploration.

  14. Tri-color composite volume H-PDLC grating and its application to 3D color autostereoscopic display.

    PubMed

    Wang, Kangni; Zheng, Jihong; Gao, Hui; Lu, Feiyue; Sun, Lijia; Yin, Stuart; Zhuang, Songlin

    2015-11-30

    A tri-color composite volume holographic polymer dispersed liquid crystal (H-PDLC) grating and its application to 3-dimensional (3D) color autostereoscopic display are reported in this paper. The composite volume H-PDLC grating consists of three volume H-PDLC sub-gratings with different periods: the longest period diffracts red light, the medium period diffracts green light, and the shortest period diffracts blue light. To record the three gratings with different periods simultaneously, two photoinitiators are employed. The first initiator consists of methylene blue and p-toluenesulfonic acid and the second initiator is composed of Rose Bengal and N-phenylglycine. In this case, the holographic recording medium is sensitive across the entire visible spectrum, including red, green, and blue, so that the tri-color composite grating can be written simultaneously by harnessing three laser beams of different colors. In the experiment, the red beam comes from a He-Ne laser with an output wavelength of 632.8 nm, the green beam comes from a Verdi solid-state laser with an output wavelength of 532 nm, and the blue beam comes from a He-Cd laser with an output wavelength of 441.6 nm. The experimental results show that the diffraction efficiencies corresponding to red, green, and blue are 57%, 75% and 33%, respectively. Although these diffraction efficiencies are not perfect, they are high enough to demonstrate the effect of 3D color autostereoscopic display.
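
    The ordering of the three periods follows from the volume-grating Bragg condition; in generic form (the symbols here are not taken from the paper),

      \lambda = 2\,n\,\Lambda\,\sin\theta_B ,

    so for a fixed average refractive index n and Bragg angle θ_B the diffracted wavelength λ scales with the grating period Λ, which is why the 632.8 nm red beam is associated with the longest period and the 441.6 nm blue beam with the shortest.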

  15. Holographic 3D display observable for multiple simultaneous viewers from all horizontal directions by using a time division method.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2014-10-01

    A holographic three-dimensional display system with a viewing angle of 360°, by using a high-speed digital micromirror device (DMD), has been proposed. The wavefront modulated by the DMD enters a rotating mirror tilted vertically downward. The synchronization of the rotating mirror and holograms displayed on the DMD allows for the reconstruction of a wavefront propagating in all horizontal directions. An optical experiment has been demonstrated in order to verify our proposed system. Binocular vision is realized from anywhere within the horizontal plane. Our display system enables simultaneous observation by multiple viewers at an extremely close range.

  16. Sinusoidal phase modulating interferometry system for 3D profile measurement

    NASA Astrophysics Data System (ADS)

    En, Bo; Fa-jie, Duan; Chang-rong, Lv; Fu-kai, Zhang; Fan, Feng

    2014-07-01

    We describe a fiber-optic sinusoidal phase modulating (SPM) interferometer for three-dimensional (3D) profilometry, which is insensitive to external disturbances such as mechanical vibration and temperature fluctuation. Sinusoidal phase modulation is created by modulating the drive voltage of the piezoelectric transducer (PZT) with a sinusoidal wave. The external disturbances that cause phase drift in the interference signal and decrease measuring accuracy are effectively eliminated by building a closed-loop feedback system. The phase stability can be measured with a precision of 2.75 mrad, and the external disturbances can be reduced to 53.43 mrad for the phase of fringe patterns. By measuring the dynamic deformation of the rubber membrane, the RMSE is about 0.018 mm, and a single measurement takes less than 250 ms. The feasibility for real-time application has been verified.
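
    For context, the detected intensity in a sinusoidal phase modulating interferometer is commonly written in textbook form (the notation is not necessarily that of this paper) as

      I(t) = A + B\cos\!\bigl[\,z\cos(\omega_c t + \theta) + \phi(t)\,\bigr],

    where z is the modulation depth set by the sinusoidal PZT drive, ω_c the modulation frequency, and φ(t) the measurement phase carrying the surface profile; φ(t) is recovered by demodulating at harmonics of ω_c, and it is the slow drift of φ caused by external disturbances that the closed-loop feedback described above suppresses.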

  17. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as a part of the comprehensive approach of the European FP7 project TeraSCREEN, using multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution up to 5 mm is obtained. With the 16×16 MIMO system 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles where the angular resolution is obtained by a focusing elliptical mirror. With this system a high resolution 3D image can be generated with 4 frames per second, each containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system underlining the feasibility of the approach.
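
    The quoted 5 mm range resolution is consistent with the standard pulse-compression relation between resolution and bandwidth:

      \Delta R = \frac{c}{2B} = \frac{3\times10^{8}\ \mathrm{m/s}}{2 \times 30\times10^{9}\ \mathrm{Hz}} = 5\ \mathrm{mm}.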

  18. A single element 3D ultrasound tomography system.

    PubMed

    Xiang Zhang; Fincke, Jonathan; Kuzmin, Andrey; Lempitsky, Victor; Anthony, Brian

    2015-08-01

    Over the past decade, substantial effort has been directed toward developing ultrasonic systems for medical imaging. With advances in computational power, previously theorized scanning methods such as ultrasound tomography can now be realized. In this paper, we present the design, error analysis, and initial backprojection images from a single-element 3D ultrasound tomography system. The system enables volumetric pulse-echo or transmission imaging of distal limbs. The motivating clinical applications include improving prosthetic fittings, monitoring bone density, and characterizing muscle health. The system is designed as a flexible mechanical platform for iterative development of algorithms targeting imaging of soft tissue and bone. The mechanical system independently controls the movement of two single-element ultrasound transducers in a cylindrical water tank. Each transducer can independently circle about the center of the tank as well as move vertically in depth. High-resolution positioning feedback (~1 μm) and control enable flexible positioning of the transmitter and the receiver around the cylindrical tank; exchangeable transducers enable algorithm testing with varying transducer frequencies and beam geometries. High-speed data acquisition (DAQ) through a dedicated National Instruments PXI setup streams digitized data directly to the host PC. The system positioning error has been quantified and is within the limits set by the imaging requirements of the motivating applications.
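
    The record does not detail the reconstruction algorithm; the sketch below only illustrates the generic idea of pulse-echo backprojection for a single-element transducer orbiting the centre of a cylindrical tank, with the sound speed, sampling rate, orbit radius, and synthetic data all invented for the example:

      # Generic pulse-echo backprojection sketch (illustrative only, not the authors'
      # implementation). Each A-scan recorded at one transducer angle is smeared back
      # onto the image grid along circles of constant round-trip time of flight.
      import numpy as np

      C_WATER = 1480.0      # assumed speed of sound in water, m/s
      FS = 25e6             # assumed A-scan sampling rate, Hz
      RADIUS = 0.1          # assumed transducer orbit radius, m

      def backproject(ascans, angles_rad, grid_m, n_pix=200):
          """ascans: (n_angles, n_samples) pulse-echo traces; returns an n_pix x n_pix image."""
          xs = np.linspace(-grid_m, grid_m, n_pix)
          X, Y = np.meshgrid(xs, xs)
          image = np.zeros_like(X)
          n_samples = ascans.shape[1]
          for trace, ang in zip(ascans, angles_rad):
              tx, ty = RADIUS * np.cos(ang), RADIUS * np.sin(ang)       # transducer position
              dist = np.sqrt((X - tx) ** 2 + (Y - ty) ** 2)             # pixel-to-transducer distance
              sample = np.round(2.0 * dist / C_WATER * FS).astype(int)  # round-trip delay in samples
              valid = sample < n_samples
              image[valid] += trace[sample[valid]]
          return image / len(angles_rad)

      # Tiny synthetic test: one point scatterer 3 cm from the tank centre.
      angles = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
      n_samples = 8192
      ascans = np.zeros((len(angles), n_samples))
      scatterer = np.array([0.03, 0.0])
      for i, ang in enumerate(angles):
          tx = RADIUS * np.array([np.cos(ang), np.sin(ang)])
          delay = int(round(2.0 * np.linalg.norm(scatterer - tx) / C_WATER * FS))
          ascans[i, delay] = 1.0
      img = backproject(ascans, angles, grid_m=0.08)
      print("brightest pixel:", np.unravel_index(img.argmax(), img.shape))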

  19. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, according to a user survey, a 3D model is the navigation environment preferred by users. Therefore a 3D model based indoor navigation system was developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers, application, web service and navigation, which together support the localization, navigation and visualization functions of the system. The system has three main strengths: (1) it stores all the data needed in one database and performs most calculations on the web server, which keeps the mobile client very lightweight; (2) the network used for navigation is extracted semi-automatically and can be renewed; and (3) the graphical user interface (GUI), which is based on a game engine, visualizes the 3D model on a mobile display with high performance.

  20. Multiplane binocular visual display system

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1976-01-01

    An electro-optic system is interfaced with a digital computer in a flight simulator to generate simultaneous multiple image planes in real time. The system may have applications with other display and remote-control systems.

  1. Repositioning accuracy of two different mask systems-3D revisited: Comparison using true 3D/3D matching with cone-beam CT

    SciTech Connect

    Boda-Heggemann, Judit . E-mail: judit.boda-heggemann@radonk.ma.uni-heidelberg.de; Walter, Cornelia; Rahn, Angelika; Wertz, Hansjoerg; Loeb, Iris; Lohr, Frank; Wenz, Frederik

    2006-12-01

    Purpose: The repositioning accuracy of mask-based fixation systems has been assessed with two-dimensional/two-dimensional or two-dimensional/three-dimensional (3D) matching. We analyzed the accuracy of commercially available head mask systems, using true 3D/3D matching, with X-ray volume imaging and cone-beam CT. Methods and Materials: Twenty-one patients receiving radiotherapy (intracranial/head-and-neck tumors) were evaluated (14 patients with rigid and 7 with thermoplastic masks). X-ray volume imaging was analyzed online and offline separately for the skull and neck regions. Translation/rotation errors of the target isocenter were analyzed. Four patients were treated to neck sites. For these patients, repositioning was aided by additional body tattoos. A separate analysis of the setup error on the basis of the registration of the cervical vertebra was performed. The residual error after correction and intrafractional motility were calculated. Results: The mean length of the displacement vector for rigid masks was 0.312 ± 0.152 cm (intracranial) and 0.586 ± 0.294 cm (neck). For the thermoplastic masks, the value was 0.472 ± 0.174 cm (intracranial) and 0.726 ± 0.445 cm (neck). Rigid masks with body tattoos had a displacement vector length in the neck region of 0.35 ± 0.197 cm. The intracranial residual error and intrafractional motility after X-ray volume imaging correction for rigid masks was 0.188 ± 0.074 cm, and was 0.134 ± 0.14 cm for thermoplastic masks. Conclusions: The results of our study have demonstrated that rigid masks have a high intracranial repositioning accuracy per se. Given the small residual error and intrafractional movement, thermoplastic masks may also be used for high-precision treatments when combined with cone-beam CT. The neck region repositioning accuracy was worse than the intracranial accuracy in both cases. However, body tattoos and image guidance improved the accuracy. Finally, the combination of both mask

  2. On-Line Operating 3-D Seafloor Positioning System (1)

    NASA Astrophysics Data System (ADS)

    Eguchi, T.

    2003-12-01

    We propose a new observation system for on-line 3-D positioning to be deployed on the sea bottom of convergent plate boundaries where large inter-plate seismic events have occurred historically. The system has observation sites at assigned intervals along optical fiber cables. Using several cables crossing each other, we can construct a real-time operating network of triangular baselines. Each observing site on a cable will be equipped with two kinds of high-gain instruments, i.e., laser ranging and pressure gauge sensors, as well as additional apparatus to remove the influence of temperature, salinity, etc. on the data. The attenuation rate of visible light in seawater is relatively small in the blue (wavelength ~450 nm) to yellowish-green (~550 nm) bands; in highly transparent seawater it is 0.1–0.5 dB/m. If we can utilize a high-power optical laser output in the blue to yellow-green band for the positioning, the signals can reach a target receiver station with a highly sensitive detector located at a distance of 10² m or more. Using additional data on the thermal and salinity fields, etc., to compensate for the refractive index along the laser signal ray path in clean seawater, we may attain a laser-ranging resolution on the order of 1 mm for each triangular baseline with a total length of 1–2 km. Each baseline consists of several secondary positioning stations with a spacing of ~10² m. To improve the data resolution, we apply signal processing such as low-pass filtering. Importantly, we cannot decompose the change of the baseline distance data into the three individual 3-D components; we need another kind of data, such as the pure vertical coordinate of the positioning sites, to resolve the 3-D components. To measure the vertical coordinate of the seafloor stations, we utilize data from the high-gain pressure sensor. In the case of crystallized quartz

  3. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing.

    PubMed

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient.

  4. [3D-TV health assessment system by the multi-modal physiological signals].

    PubMed

    Li, Zhongqiang; Xing, Lidong; Qian, Zhiyu; Wang, Xiao; Yu, Defei; Liu, Baoyu; Jin, Shuai

    2014-03-01

    To meet the requirements of multi-physiological-signal measurement for 3D-TV health assessment, we selected suitable biological signal acquisition chips and designed a hardware system that can detect different physiological signals in real time. The system mainly uses an ARM11/S3C6410 microcontroller to control the EEG/EOG acquisition chip RHA2116 and the ECG acquisition chip ADS1298. The microcontroller transfers the data collected by the chips over a USB port to PC software, which displays and saves the experimental data in real time; the data are then processed further in Matlab to produce the final health assessment. In addition, to capture the different responses of different brain regions while watching 3D-TV, a dedicated electrode placement scheme and corresponding data-processing methods were developed, so that the multi-signal data can be handled effectively at multiple levels.

  5. A New Display Format Relating Azimuth-Scanning Radar Data and All-Sky Images in 3-D

    NASA Technical Reports Server (NTRS)

    Swartz, Wesley E.; Seker, Ilgin; Mathews, John D.; Aponte, Nestor

    2010-01-01

    Here we correlate features in a sequence of all-sky images of 630 nm airglow with the three-dimensional (3-D) structure of electron densities in the F region above Arecibo. Pairs of 180° azimuth scans (using the Gregorian and line feeds) of the two-beam incoherent scatter radar (ISR) have been plotted in cone pictorials of the line-of-sight electron densities. The plots include projections of the 630 nm airglow onto the ground using the same spatial scaling as for the ISR data. Selected sequential images from the night of 16-17 June 2004 correlate ionospheric plasma features with scales comparable to the ISR density-cone diameter. The entire set of over 100 images, spanning about eight hours, is available as a movie. The correlation between the airglow and the electron densities is not unexpected, but the new display format shows the 3-D structures better than separate 2-D plots in latitude and longitude for the airglow and in height and time for the electron densities. Furthermore, the animations help separate the bands of airglow from obscuring clouds and the star field.

  6. GeoCube: A 3D mineral resources quantitative prediction and assessment system

    NASA Astrophysics Data System (ADS)

    Li, Ruixi; Wang, Gongwen; Carranza, Emmanuel John Muico

    2016-04-01

    This paper introduces a software system (GeoCube) for three dimensional (3D) extraction and integration of exploration criteria from spatial data. The software system contains four key modules: (1) Import and Export, supporting many formats from commercial 3D geological modeling software and offering various export options; (2) pre-process, containing basic statistics and fractal/multi-fractal methods (concentration-volume (C-V) fractal method) for extraction of exploration criteria from spatial data (i.e., separation of geological, geochemical and geophysical anomalies from background values in 3D space); (3) assessment, supporting five data-driven integration methods (viz., information entropy, logistic regression, ordinary weights of evidence, weighted weights of evidence, boost weights of evidence) for integration of exploration criteria; and (4) post-process, for classifying integration outcomes into several levels based on mineralization potentiality. The Nanihu Mo (W) camp (5.0 km×4.0 km×2.7 km) of the Luanchuan region was used as a case study. The results show that GeoCube can enhance the use of 3D geological modeling to store, retrieve, process, display, analyze and integrate exploration criteria. Furthermore, it was found that the ordinary weights of evidence, boost weights of evidence and logistic regression methods showed superior performance as integration tools for exploration targeting in this case study.
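
    Among the listed integration methods, ordinary weights of evidence has a compact standard form (generic notation, not necessarily GeoCube's internal one): for a binary evidence layer B and known deposits D,

      W^{+} = \ln\frac{P(B \mid D)}{P(B \mid \bar{D})}, \qquad
      W^{-} = \ln\frac{P(\bar{B} \mid D)}{P(\bar{B} \mid \bar{D})}, \qquad
      C = W^{+} - W^{-},

    and, assuming conditional independence of the layers, the posterior log-odds of mineralization in a voxel are the prior log-odds plus the sum of the weights (W+ where the evidence is present, W- where it is absent); the contrast C summarizes how strongly a layer discriminates.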

  7. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    SciTech Connect

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general purpose mesh generator, is used. It runs on everything from IBM PCs to CRAYS, and can generate 1000 nodes/minute on a PC. With its efficient hidden line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post processor, is used to display DYNA3D output. In addition to the standard monochrome hidden line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8 color video screens by plotting color bands superimposed on the mesh which indicate the value of the state variables. For higher quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden line removal in aiding the analyst in understanding his results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.

  8. Advanced Three-Dimensional Display System

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2005-01-01

    A desktop-scale, computer-controlled display system, initially developed for NASA and now known as the VolumeViewer(TradeMark), generates three-dimensional (3D) images of 3D objects in a display volume. This system differs fundamentally from stereoscopic and holographic display systems: The images generated by this system are truly 3D in that they can be viewed from almost any angle, without the aid of special eyeglasses. It is possible to walk around the system while gazing at its display volume to see a displayed object from a changing perspective, and multiple observers standing at different positions around the display can view the object simultaneously from their individual perspectives, as though the displayed object were a real 3D object. At the time of writing this article, only partial information on the design and principle of operation of the system was available. It is known that the system includes a high-speed, silicon-backplane, ferroelectric-liquid-crystal spatial light modulator (SLM), multiple high-power lasers for projecting images in multiple colors, a rotating helix that serves as a moving screen for displaying voxels [volume cells or volume elements, in analogy to pixels (picture cells or picture elements) in two-dimensional (2D) images], and a host computer. The rotating helix and its motor drive are the only moving parts. Under control by the host computer, a stream of 2D image patterns is generated on the SLM and projected through optics onto the surface of the rotating helix. The system utilizes a parallel pixel/voxel-addressing scheme: All the pixels of the 2D pattern on the SLM are addressed simultaneously by laser beams. This parallel addressing scheme overcomes the difficulty of achieving both high resolution and a high frame rate in a raster scanning or serial addressing scheme. It has been reported that the structure of the system is simple and easy to build, that the optical design and alignment are not difficult, and that the

  9. Image display system 511

    NASA Technical Reports Server (NTRS)

    Gross, M.

    1981-01-01

    The experience of the Idaho Department of Water Resources Remote Sensing Unit in bringing their System 511 online is described. System 511 runs on a PDP-11 minicomputer. The minimum hardware configuration is an 11/34 with at least 128 K words of core, 10 megabytes of direct-access disk and a floating-point processor. The required software configuration is an RSX-11M V3.2 operating system with a FORTRAN IV-Plus compiler. System 511 is structured as a series of hierarchical, modular software units. Problems that occurred during the system's installation are discussed, and the system's operating and error-detection capabilities and documentation are evaluated.

  10. Tactile 3D microprobe system with exchangeable styli

    NASA Astrophysics Data System (ADS)

    Balzer, Felix G.; Hausotte, Tino; Dorozhovets, Nataliya; Manske, Eberhard; Jäger, Gerd

    2011-09-01

    Over the past decade a trend of component miniaturization can be observed both in industry and in the laboratory, which involves an increasing demand for nanopositioning and nanomeasuring machines as well as for miniature tactile probes for measuring complex three-dimensional objects. The challenge is that these components—for example, diesel injectors, microgears and small optics—feature dimensions in the micrometre range with associated dimensional tolerances below 100 nm. For this reason, a significant number of research projects have dealt with microprobes for performing the dimensional measurements of microstructures with the goal of achieving measurement uncertainties in the nanometre range. This paper introduces an updated version of a 3D microprobe with an optical detection system developed at the Institute of Process Measurement and Sensor Technology. It consists of a measuring head and a separate probe system. The mechanical design of the probe system has been completely overhauled to enable the exchange of the stylus separately from the flexure elements. This is very important for the determination of the probing sphere's roundness deviations. The silicon membranes used in the first system design are therefore replaced by metal membranes. A new design of these membranes, optimized for isotropic probing forces and locking parasitic movements, is presented. Regarding the measuring head, the optical design has been redesigned to eliminate disruptive interference on the quadrant photodiode used for deflection measurement and to improve adjustment. Its dimensioning is discussed, especially the influence of the laser beam diameter on the interference contrast due to the parallel misalignment of the collimated laser beam. Initial measurement results are presented to prove functionality.

  11. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable, fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position, due to sensor placement restrictions over the body, and optical sensing requires a restricted indoor environment to track body movements. Here we present a 52-sensor-laden garment interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution of the interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt and then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  12. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements from single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of the anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor with a 250 μm pitch lenslet array and 300 sample beams, achieving an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at more than 75 frames per second.

  13. Intra-operative 3D imaging system for robot-assisted fracture manipulation.

    PubMed

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-01-01

    Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of the bone fragments is essential for a good, fast healing process. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology: the 2D nature of this technology (i.e., the image intensifier) does not provide enough information to the surgeon regarding fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).

  14. 3D characterization of the Astor Pass geothermal system, Nevada

    SciTech Connect

    Mayhew, Brett; Faulds, James E

    2013-10-19

    The Astor Pass geothermal system resides in the northwestern part of the Pyramid Lake Paiute Reservation, on the margins of the Basin and Range and Walker Lane tectonic provinces in northwestern Nevada. Seismic reflection interpretation, detailed analysis of well cuttings, stress field analysis, and construction of a 3D geologic model have been used in the characterization of the stratigraphic and structural framework of the geothermal area. The area is primarily comprised of middle Miocene Pyramid sequence volcanic and sedimentary rocks, nonconformably overlying Mesozoic metamorphic and granitic rocks. Wells drilled at Astor Pass show a ~1 km thick section of highly transmissive Miocene volcanic reservoir with temperatures of ~95°C. Seismic reflection interpretation confirms a high fault density in the geothermal area, with many possible fluid pathways penetrating into the relatively impermeable Mesozoic basement. Stress field analysis using borehole breakout data reveals a complex transtensional faulting regime with a regionally consistent west-northwest-trending least principal stress direction. Considering possible strike-slip and normal stress regimes, the stress data were utilized in a slip and dilation tendency analysis of the fault model, which suggests two promising fault areas controlling upwelling geothermal fluids. Both of these fault intersection areas show positive attributes for controlling geothermal fluids, but hydrologic tests show the ~1 km thick volcanic section is highly transmissive. Thus, focused upwellings along discrete fault conduits may be confined to the Mesozoic basement before fluids diffuse into the Miocene volcanic reservoir above. This large diffuse reservoir in the faulted Miocene volcanic rocks is capable of sustaining high pump rates. Understanding this type of system may be helpful in examining large, permeable reservoirs in deep sedimentary basins of the eastern Basin and Range and the highly fractured volcanic geothermal

  15. Fast 3D multiple fan-beam CT systems

    NASA Astrophysics Data System (ADS)

    Kohlbrenner, Adrian; Haemmerle, Stefan; Laib, Andres; Koller, Bruno; Ruegsegger, Peter

    1999-09-01

    Two fast, CCD-based three-dimensional CT scanners for in vivo applications have been developed. One is designed for small laboratory animals and has a voxel size of 20 micrometers, while the other, having a voxel size of 80 micrometers, is used for human examinations. Both instruments make use of a novel multiple fan-beam technique: radiation from a line-focus X-ray tube is divided into a stack of fan-beams by a 28 micrometer pitch foil collimator. The resulting wedge-shaped X-ray field is the key to the instrument's high scanning speed and allows the sample to be positioned close to the X-ray source, which makes it possible to build compact CT systems. In contrast to cone-beam scanners, the multiple fan-beam scanner relies on standard fan-beam algorithms, thereby eliminating inaccuracies in the reconstruction process. The projections from a single rotation are acquired within 2 min and are subsequently reconstructed into a 1024 × 1024 × 255 voxel array; hence a single rotation about the sample delivers a 3D image containing a quarter of a billion voxels. Such volumetric images are 6.6 mm in height and can be stacked on top of each other. An area CCD sensor bonded to a fiber-optic light guide acts as the detector. Since no image intensifier, conventional optics or tapers are used in the system, the image is virtually distortion free. The scanner's high scanning speed and high resolution at moderately low radiation dose are the basis for reliable time-serial measurements and analyses.

  16. Display System Image Quality

    DTIC Science & Technology

    1988-04-01

    windscreen movement table and an optical angular deviation measurement device (Task, Genco, Smith, and Dabbs, 1983). For most HUDs, the spectral...ASD(ENA)-TR-83-5019, Dec 1983, pp 11-19. Task, H.L., Genco, L.V., Smith, K., and Dabbs, G., "System for measuring angular deviation in a transparency

  17. Augmented reality system for oral surgery using 3D auto stereoscopic visualization.

    PubMed

    Tran, Huy Hoang; Suenaga, Hideyuki; Kuwana, Kenta; Masamune, Ken; Dohi, Takeyoshi; Nakajima, Susumu; Liao, Hongen

    2011-01-01

    We present an augmented reality system for oral and maxillofacial surgery in this paper. Instead of being displayed on a separate screen, three-dimensional (3D) virtual presentations of osseous structures and soft tissues are projected onto the patient's body, providing surgeons with exact knowledge of the depth of high-risk tissues inside the bone. We employ a 3D integral imaging technique which produces motion parallax in both the horizontal and vertical directions over a wide viewing area. In addition, surgeons are able to check the progress of the operation in real time through an intuitive, content-rich, hardware-accelerated 3D interface. These features prevent surgeons from penetrating into high-risk areas and thus help improve the quality of the operation. Operational tasks such as hole drilling and screw fixation were performed using our system and showed an overall positional error of less than 1 mm. The feasibility of our system was also verified with a human volunteer experiment.

  18. Development of autostereoscopic display system for remote manipulation

    NASA Astrophysics Data System (ADS)

    Honda, Toshio; Kuboshima, Yauhito; Iwane, Kousuke; Shiina, Tatsuo

    2006-02-01

    When a 3D display system is used for remote manipulation, the special glasses needed to view the 3D image disturb the manipulation, so an autostereoscopic display is preferable for remote manipulation work. However, the eye-position area from which an autostereoscopic display shows the 3D image is generally narrow. We constructed a 3D display system which solves these problems. In the system, (1) stereoscopic images displayed on a special LCD are projected onto a large concave mirror by a projection lens; (2) a viewing-zone-limiting aperture is set between the projection lens and the concave mirror; and (3) the real image of the aperture plane is formed at a certain position in free space by the concave mirror, and this image position is the viewing zone. By placing both eyes at this position and looking at the concave mirror plane, the observer can see the stereoscopic image without glasses. To expand the area within which the observer can see the 3D image, we proposed and constructed a system that tracks the viewing zone to the detected eye position of the observer. The observer can move not only horizontally and vertically, followed by rotating the concave mirror, but also forward and backward, followed by moving the viewing-zone-limiting aperture.

  19. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences for vertical overestimation between display media are influenced more by the perceived distal object size rather than by the dimensionality of the display.

  20. 3D reconstruction of tropospheric cirrus clouds by stereovision system

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid

    2016-07-01

    A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. These clouds are located in the high troposphere and sometimes in the lower stratosphere, between 6 and 10 km high. Two simultaneous images of the same scene are taken with Canon (400D) cameras at two sites 37 km apart. Each image is processed to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC: Zero-mean Normalized Cross-Correlation, or ZSSD: Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besancon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besancon. 3D maps of natural cirrus clouds and artificial ones such as aircraft trails are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter was located at 8.5 ± 1 km on June 11.
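
    As an illustration of the matching step only (not code from the record above), the following minimal NumPy sketch scores candidate pairs with ZNCC and scans a horizontal search range for the best match; the window half-size, search range, and the array names left and right are assumptions for the example, and border handling is omitted.

        import numpy as np

        def zncc(patch_a, patch_b):
            # Zero-mean normalized cross-correlation of two equally sized patches.
            a = patch_a.astype(float) - patch_a.mean()
            b = patch_b.astype(float) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def best_match(left, right, row, col, half=7, search=40):
            # Slide a window along the same row of the second image and keep the
            # column whose patch correlates best with the reference patch.
            ref = left[row - half:row + half + 1, col - half:col + half + 1]
            scores = [zncc(ref, right[row - half:row + half + 1, c - half:c + half + 1])
                      for c in range(col - search, col + search + 1)]
            return col - search + int(np.argmax(scores)), max(scores)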

  1. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the captured fingerprint. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments were carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
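
    For readers unfamiliar with fringe analysis, the sketch below shows the standard N-step phase-shifting calculation that recovers a wrapped phase map from equally shifted sinusoidal fringe images; it is a generic formulation under an assumed intensity model, not the specific calibration, color separation, or unwrapping procedure of the system described above.

        import numpy as np

        def wrapped_phase(images):
            # images: array of N frames, each modeled as I_n = A + B*cos(phase + 2*pi*n/N).
            # Returns the phase wrapped to (-pi, pi]; unwrapping (e.g. with optimum
            # fringe numbers) is a separate step not shown here.
            frames = np.asarray(images, dtype=float)
            n = frames.shape[0]
            deltas = 2.0 * np.pi * np.arange(n) / n
            num = -(frames * np.sin(deltas)[:, None, None]).sum(axis=0)
            den = (frames * np.cos(deltas)[:, None, None]).sum(axis=0)
            return np.arctan2(num, den)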

  2. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
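
    One common way to realize the gain calculation described above is pairwise amplitude panning between the two loudspeakers that bracket the target direction; the sketch below is a generic 2-D formulation, and the speaker angles and constant-power normalization are illustrative assumptions rather than details taken from this record.

        import numpy as np

        def pair_gains(target_deg, spk_a_deg, spk_b_deg):
            # Solve g_a*l_a + g_b*l_b = p for unit direction vectors in the
            # horizontal plane, then normalize for constant perceived power.
            def unit(deg):
                rad = np.radians(deg)
                return np.array([np.cos(rad), np.sin(rad)])
            L = np.column_stack([unit(spk_a_deg), unit(spk_b_deg)])
            g = np.linalg.solve(L, unit(target_deg))
            return g / np.linalg.norm(g)

        # Example: place a source at 10 degrees between speakers at -30 and +30 degrees.
        print(pair_gains(10.0, -30.0, 30.0))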

  3. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay.

    PubMed

    Liao, Hongen; Ishihara, Hirotaka; Tran, Huy Hoang; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi

    2010-01-01

    This paper describes a precision-guided surgical navigation system for minimally invasive surgery. The system combines a laser guidance technique with a three-dimensional (3D) autostereoscopic image overlay technique. Images of surgical anatomic structures superimposed onto the patient are created by employing an animated imaging method called integral videography (IV), which can display geometrically accurate 3D autostereoscopic images and reproduce motion parallax without the need for special viewing or tracking devices. To improve the placement accuracy of surgical instruments, we integrated an image overlay system with a laser guidance system for alignment of the surgical instrument and better visualization of the patient's internal structures. We fabricated a laser guidance device and mounted it on an IV image overlay device. Experimental evaluations showed that the system could guide a linear surgical instrument toward a target with an average error of 2.48 mm and a standard deviation of 1.76 mm. Further improvements to the design of the laser guidance device and the patient-image registration procedure of the IV image overlay will make this system practical; its use would increase surgical accuracy and reduce invasiveness.

  4. System and method for 3D printing of aerogels

    DOEpatents

    Worsley, Marcus A.; Duoss, Eric; Kuntz, Joshua; Spadaccini, Christopher; Zhu, Cheng

    2016-03-08

    A method of forming an aerogel. The method may involve providing a graphene oxide powder and mixing the graphene oxide powder with a solution to form an ink. A 3D printing technique may be used to write the ink into a catalytic solution that is contained in a fluid containment member to form a wet part. The wet part may then be cured in a sealed container for a predetermined period of time at a predetermined temperature. The cured wet part may then be dried to form a finished aerogel part.

  5. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and quality of reconstruction have been studied by varying the influencing parameters, such as the concentrations of dichromate and electron donor and the molecular weight of the polymer matrix. Ammonium and potassium dichromate were employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region, without any post-exposure thermal or chemical processing.

  6. Interactive toothbrushing education by a smart toothbrush system via 3D visualization.

    PubMed

    Kim, Kyeong-Seop; Yoon, Tae-Ho; Lee, Jeong-Whan; Kim, Dong-Jun

    2009-11-01

    The very first step toward good dental hygiene is to employ the correct toothbrushing style. Due to the possible occurrence of periodontal disease at an early age, it is critical to establish correct toothbrushing patterns as early as possible. With this aim, we proposed a novel toothbrush monitoring and training system to interactively educate users on toothbrushing behavior in terms of the correct brushing motion and grip axis orientation. Our intelligent toothbrush monitoring system first senses a user's brushing pattern by analyzing the waveforms acquired from a built-in accelerometer and magnetic sensor. To discern an inappropriate toothbrushing style, a real-time interactive three-dimensional display system, based on an OpenGL 3D surface rendering scheme, is applied to visualize the subject's brushing patterns and subsequently advise on the correct brushing method.
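
    As background on how grip orientation can be derived from such sensors, the sketch below shows a standard tilt-compensated orientation estimate from a calibrated accelerometer and magnetometer (aerospace x-forward, y-right, z-down convention assumed); it is a generic formulation, not the classification algorithm of the system above.

        import numpy as np

        def orientation(accel, mag):
            # Roll and pitch from the gravity vector, then yaw from the
            # magnetometer after rotating it back into the horizontal plane.
            ax, ay, az = accel / np.linalg.norm(accel)
            mx, my, mz = mag / np.linalg.norm(mag)
            roll = np.arctan2(ay, az)
            pitch = np.arctan2(-ax, ay * np.sin(roll) + az * np.cos(roll))
            bx = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
                  + mz * np.sin(pitch) * np.cos(roll))
            by = my * np.cos(roll) - mz * np.sin(roll)
            yaw = np.arctan2(-by, bx)
            return roll, pitch, yaw  # radians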

  7. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in the circumferential direction without high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle over the full 360 degrees, with appropriate perspectives, as if the animated figures were present.

  8. Reliability Considerations in 3D Stacked Strata Systems

    NASA Astrophysics Data System (ADS)

    Pozder, Scott; Jain, Ankur; Jones, Robert; Huang, Zhihong; Justison, Patrick; Chatterjee, Ritwik

    2009-06-01

    The bonding of multiple silicon strata to form stacked circuits with high bandwidth connections, increased circuit densities, decreased latency and the capability to stack disparate technologies is increasingly gaining interest in the microelectronics industry. Stacking has been demonstrated using both dielectric-to-dielectric and metal-to-metal bonds for die and wafer stratum bonding. The considerable thermal, mechanical and electromigration reliability challenges resulting from such bonding have been the focus of some recently reported work. In this paper, the bond reliability of various bonding types, including wafer-to-wafer dielectric bonds, die-to-wafer Cu/Sn-to-Cu bonds and a simultaneous organic adhesive with Cu/Sn-to-Cu bond, is discussed. Thermomechanical and electromigration characterization of the die-to-wafer 3D structures is also discussed. Results indicate that the intrinsic reliability of these structures can be as robust as current 2D technologies.

  9. Real-Time Display Of 3-D Computed Holograms By Scanning The Image Of An Acousto-Optic Modulator

    NASA Astrophysics Data System (ADS)

    Kollin, Joel S.; Benton, Stephen A.; Jepsen, Mary Lou

    1989-10-01

    The invention of holography has sparked hopes for a three-dimensional electronic imaging system analogous to television. Unfortunately, the extraordinary spatial detail of ordinary holographic recordings requires unattainable bandwidth and display resolution for three-dimensional moving imagery, effectively preventing its commercial development. However, the essential bandwidth of holographic images can be reduced enough to permit their transmission through fiber optic or coaxial cable, and the required resolution or space-bandwidth product of the display can be obtained by raster scanning the image of a commercially available acousto-optic modulator. No film recording or other photographic intermediate step is necessary, as the projected modulator image is viewed directly. The design and construction of a working demonstration of the principles involved is also presented, along with a discussion of engineering considerations in the system design. Finally, the theoretical and practical limitations of the system are addressed in the context of extending the system to real-time transmission of moving holograms synthesized from views of real and computer-generated three-dimensional scenes.

  10. Volumetric display system based on three-dimensional scanning of inclined optical image.

    PubMed

    Miyazaki, Daisuke; Shiba, Kensuke; Sotsuka, Koji; Matsushita, Kenji

    2006-12-25

    A volumetric display system based on three-dimensional (3D) scanning of an inclined image is reported. An optical image of a two-dimensional (2D) display, which is a vector-scan display monitor placed obliquely in an optical imaging system, is moved laterally by a galvanometric mirror scanner. Inclined cross-sectional images of a 3D object are displayed on the 2D display in accordance with the position of the image plane to form a 3D image. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision because they are real images formed in a 3D space. Experimental results of volumetric imaging from computed-tomography images and 3D animated images are presented.

  11. Tour of the World’s Largest 3D Printed Polymer Structure on Display at IBS 2016

    SciTech Connect

    Green, Johney

    2016-01-22

    ORNL’s Johney Green guides a Periscope tour of the 3D printed house and vehicle demonstration called AMIE (Additive Manufacturing Integrated Energy) during the International Builders’ Show 2016 in Las Vegas. See the world’s largest 3D printed polymer structure – made with carbon fiber reinforced ABS plastic, insulated with next-generation vacuum insulation panels, and outfitted with a micro-kitchen by GE Appliances – that was designed to be powered by a 3D printed utility vehicle using bidirectional wireless power technology. Learn more about AMIE at https://www.youtube.com/watch?v=RCkQB... and http://www.ornl.gov/amie.

  12. Automated simulation and evaluation of autostereoscopic multiview 3D display designs by time-sequential and wavelength-selective filter barrier

    NASA Astrophysics Data System (ADS)

    Kuhlmey, Mathias; Jurk, Silvio; Duckstein, Bernd; de la Barré, René

    2015-09-01

    A novel simulation tool has been developed for spatially multiplexed 3D displays. The main purpose of our software is the design of 3D displays with optical image splitters, in particular lenticular grids or wavelength-selective barriers. The interaction of the image splitter with a ray-emitting display is modeled as a spatial light modulator generating the autostereoscopic image representation. Based on this simulation model, the interaction of the optoelectronic devices with the defined spatial planes is described. Time-sequential multiplexing makes it possible to increase the resolution of such 3D displays; for that reason the program was extended with an intermediate data-cumulating component. The simulation program represents a stepwise quasi-static functionality and control of the arrangement. It calculates and renders the whole display ray emission and the luminance distribution at the viewing distance. The complexity of the results increases when wavelength-selective barriers are used. The images visible at the viewer's eye positions were determined by simulation after every switching operation of the optical image splitter. The summation and evaluation of the resulting data are processed in correspondence with the equivalent time sequence. The simulation was further expanded with a complex algorithm for the automated search and validation of possible solutions in the multi-dimensional parameter space. For the multiview 3D display design, a combination of ray tracing and 3D rendering was used: the emitted light intensity distribution of each subpixel is evaluated in terms of color, luminance and visible area, using different content distributions on the subpixel plane. The analysis of the accumulated data delivers different solutions distinguished by the evaluation criteria.

  13. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the ruined Engelbourg castle in Thann, Alsace, France, has for some years been the object of close attention from the city, which owns it, and from partners such as historians and archaeologists who are in charge of its study. The valuation of the site is one of the main objectives, along with its conservation and its documentation. The aim of this project is to use the environment of a virtual tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through diverse scripts that convert the viewer into a real 3D interface. Starting from a first virtual tour containing about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making the visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were acquired. For the documentation, other sets of images were acquired in various seasons and weather conditions, which allows the site to be documented in different environments and states of vegetation; the final virtual tour was derived from them. The initial 3D model of the castle, which is also virtual, was likewise included in the form of panoramic images to complete the understanding of the site. A variety of hotspot types were used to connect the whole digital documentation to the site, including videos (reports made during the acquisition phases, the restoration works, the excavations, etc.) and georeferenced digital documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The completely personalized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case of virtual tours, or to go from a panoramic photographic image

  14. Solar active region display system

    NASA Astrophysics Data System (ADS)

    Golightly, M.; Raben, V.; Weyland, M.

    2003-04-01

    The Solar Active Region Display System (SARDS) is a client-server application that automatically collects a wide range of solar data and displays it in a format easy for users to assimilate and interpret. Users can rapidly identify active regions of interest or concern from color-coded indicators that visually summarize each region's size, magnetic configuration, recent growth history, and recent flare and CME production. The active region information can be overlaid onto solar maps, multiple solar images, and solar difference images in orthographic, Mercator or cylindrical equidistant projections. Near real-time graphs display the GOES soft and hard x-ray flux, flare events, and daily F10.7 value as a function of time; color-coded indicators show current trends in soft x-ray flux, flare temperature, daily F10.7 flux, and x-ray flare occurrence. Through a separate window up to 4 real-time or static graphs can simultaneously display values of KP, AP, daily F10.7 flux, GOES soft and hard x-ray flux, GOES >10 and >100 MeV proton flux, and Thule neutron monitor count rate. Climatologic displays use color-valued cells to show F10.7 and AP values as a function of Carrington/Bartel's rotation sequences - this format allows users to detect recurrent patterns in solar and geomagnetic activity as well as variations in activity levels over multiple solar cycles. Users can customize many of the display and graph features; all displays can be printed or copied to the system's clipboard for "pasting" into other applications. The system obtains and stores space weather data and images from sources such as the NOAA Space Environment Center, NOAA National Geophysical Data Center, the joint ESA/NASA SOHO spacecraft, and the Kitt Peak National Solar Observatory, and can be extended to include other data series and image sources. Data and images retrieved from the system's database are converted to XML and transported from a central server using HTTP and SOAP protocols, allowing

  15. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time.
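
    To make the processing chain concrete, the sketch below estimates axial displacement along one RF line by windowed normalized cross-correlation and takes its spatial gradient as strain; this is a simplified stand-in for the GPU dynamic-programming estimator described above, and the window, step, and search sizes are arbitrary illustrative values.

        import numpy as np

        def axial_displacement(pre, post, win=64, step=32, search=16):
            # Integer-sample shift of each window of the post-compression line
            # relative to the pre-compression line (normalized correlation peak).
            shifts = []
            for start in range(search, len(pre) - win - search, step):
                ref = pre[start:start + win]
                ref = (ref - ref.mean()) / (ref.std() + 1e-12)
                scores = []
                for lag in range(-search, search + 1):
                    seg = post[start + lag:start + lag + win]
                    seg = (seg - seg.mean()) / (seg.std() + 1e-12)
                    scores.append(float(np.dot(ref, seg)) / win)
                shifts.append(int(np.argmax(scores)) - search)
            return np.asarray(shifts, dtype=float)

        def axial_strain(displacement, step=32):
            # Strain as the spatial derivative of displacement along the beam axis.
            return np.gradient(displacement, step)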

  16. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice.

  17. 3D scanning and 3D printing as innovative technologies for fabricating personalized topical drug delivery systems.

    PubMed

    Goyanes, Alvaro; Det-Amornrat, Usanee; Wang, Jie; Basit, Abdul W; Gaisford, Simon

    2016-07-28

    Acne is a multifactorial inflammatory skin disease with high prevalence. In this work, the potential of 3D printing to produce flexible personalised-shape anti-acne drug (salicylic acid) loaded devices was demonstrated by two different 3D printing (3DP) technologies: Fused Deposition Modelling (FDM) and stereolithography (SLA). 3D scanning technology was used to obtain a 3D model of a nose adapted to the morphology of an individual. In FDM 3DP, commercially produced Flex EcoPLA™ (FPLA) and polycaprolactone (PCL) filaments were loaded with salicylic acid by hot melt extrusion (HME) (theoretical drug loading: 2% w/w) and used as feedstock material for 3D printing. Drug loading in the FPLA-salicylic acid and PCL-salicylic acid 3D printed patches was 0.4% w/w and 1.2% w/w respectively, indicating significant thermal degradation of drug during HME and 3D printing. Diffusion testing in Franz cells using a synthetic membrane revealed that the drug loaded printed samples released <187 μg/cm² within 3 h. FPLA-salicylic acid filament was successfully printed as a nose-shape mask by FDM 3DP, but the PCL-salicylic acid filament was not. In the SLA printing process, the drug was dissolved in different mixtures of poly(ethylene glycol) diacrylate (PEGDA) and poly(ethylene glycol) (PEG) that were solidified by the action of a laser beam. SLA printing led to 3D printed devices (nose-shape) with higher resolution and higher drug loading (1.9% w/w) than FDM, with no drug degradation. The results of drug diffusion tests revealed that drug diffusion was faster than with the FDM devices, 229 and 291 μg/cm² within 3 h for the two formulations evaluated. In this study, SLA printing was the more appropriate 3D printing technology to manufacture anti-acne devices with salicylic acid. The combination of 3D scanning and 3D printing has the potential to offer solutions to produce personalised drug loaded devices, adapted in shape and size to individual patients.

  18. Disparity pattern-based autostereoscopic 3D metrology system for in situ measurement of microstructured surfaces.

    PubMed

    Li, Da; Cheung, Chi Fai; Ren, MingJun; Whitehouse, David; Zhao, Xing

    2015-11-15

    This paper presents a disparity pattern-based autostereoscopic (DPA) 3D metrology system that makes use of a microlens array to capture raw 3D information of the measured surface in a single snapshot through a CCD camera. A 3D digital model of the target surface with the measured data is then generated through the system-associated direct extraction of disparity information (DEDI) method. The DEDI method is highly efficient for performing direct 3D mapping of the target surface, based on a tomography-like operation on every depth plane with the defocused information excluded. Precise measurement results are provided through an error-elimination process based on statistical analysis. Experimental results show that the proposed DPA 3D metrology system is capable of measuring 3D microstructured surfaces with submicrometer repeatability for high-precision, in situ measurement of microstructured surfaces.

  19. A hand-held 3D laser scanning with global positioning system of subvoxel precision

    NASA Astrophysics Data System (ADS)

    Arias, Néstor; Meneses, Néstor; Meneses, Jaime; Gharbi, Tijani

    2011-01-01

    In this paper we propose a hand-held 3D laser scanner composed of an optical head device to extract local 3D surface information and a stereo vision system with subvoxel precision to measure the position and orientation of the optical head. The optical head is manually scanned over the object surface by the operator. The orientation and position of the optical head are determined by a phase-sensitive method using a 2D regular intensity pattern. This phase reference pattern is rigidly fixed to the optical head and allows its 3D localization with subvoxel precision in the observation field of the stereo vision system. The 3D resolution achieved by the stereo vision system is about 33 microns at 1.8 m, with an observation field of 60 cm x 60 cm.

  20. Three-dimensional display systems implemented with a micromirror array

    NASA Astrophysics Data System (ADS)

    Yan, Jun

    A novel approach for three-dimensional (3-D) display systems implemented with a micromirror array was proposed, designed, realized and tested. The major advantages of this approach include: (1) micromirrors are reflective and hence achromatic (panchromatic), (2) a wide variety of displays can be used as image sources, and (3) time-multiplexing can be introduced on top of space-multiplexing to optimize the viewing-zone arrangements. Real-time auto-stereoscopy and motion parallax were the goals for these single-user 3-D display systems. First, auto-stereoscopy allows an observer to see left and right images without any special eyewear or head-tracking devices. Second, different pairs of stereoscopic images can be seen according to the viewer's head position under horizontal displacement, denoted by a series of viewing zones, so horizontal motion parallax is provided. These 3-D display systems use two spatial light modulators (SLM). The first one acts as the image source, which is relayed onto the second SLM, a micromirror array. Micromirrors redirect the light into appropriate viewing zones. We used backlit transparencies and a color CRT as the first SLM, which illustrates the wide range of image sources that can be accepted. Three simplifications in the optical design were made to lower the actuation requirements of the micromirrors. First, a collecting lens was introduced so that the micromirrors needed uniform actuation in only one dimension (horizontal). Second, an interleaved actuation profile of the micromirrors was introduced to dedicate odd columns of micromirrors to the right-eye views and even columns to the left ones. Finally, a double-opening pupil was used to further lower the actuation requirements of the micromirrors. A two-view (left and right) 3-D auto-stereoscopic display system was first constructed. Left- and right-eye views in the form of both still and motion 3-D scenes were displayed, and viewers were able to fuse the stereo information. A multi-view (2 left and 2

  1. Tour of the World’s Largest 3D Printed Polymer Structure on Display at IBS 2016

    ScienceCinema

    Green, Johney

    2016-07-12

    ORNL’s Johney Green guides a Periscope tour of the 3D printed house and vehicle demonstration called AMIE (Additive Manufacturing Integrated Energy) during the International Builders’ Show 2016 in Las Vegas. See the world’s largest 3D printed polymer structure – made with carbon fiber reinforced ABS plastic, insulated with next-generation vacuum insulation panels, and outfitted with a micro-kitchen by GE Appliances – that was designed to be powered by a 3D printed utility vehicle using bidirectional wireless power technology. Learn more about AMIE at https://www.youtube.com/watch?v=RCkQB... and http://www.ornl.gov/amie.

  2. Building a 3D scanner system based on monocular vision.

    PubMed

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and that the scanning precision is satisfactory for normal users.

  3. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be done by using a database of 2-D views of the objects. The main problem in their proposed system is that corresponding points must be known to interpolate the views. In addition, their system requires a supervisor to decide to which class the presented view belongs. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. By using the Dystal network, we show that the objects can be classified with over 95% precision. We have used this system to classify objects such as cubes, cones, spheres, tori, and cylinders. Because of the nature of the Dystal network, the system reaches its stable point with a single presentation of a view. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes for 50 different input views), which can be used to select an optimum database of training views. The system is also robust to noise and deformed views.
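
    For comparison with the descriptor mentioned above (whose exact construction is not given in this record), the sketch below computes a generic Fourier descriptor of a closed 2-D boundary that is invariant to translation, scale, and rotation; the number of retained coefficients is an arbitrary choice.

        import numpy as np

        def fourier_descriptor(boundary_xy, n_coeffs=16):
            # boundary_xy: (N, 2) array of ordered points on a closed contour.
            z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]      # contour as complex samples
            spectrum = np.fft.fft(z)
            spectrum[0] = 0.0                                   # drop DC -> translation invariance
            spectrum = spectrum / (np.abs(spectrum[1]) + 1e-12) # normalize -> scale invariance
            return np.abs(spectrum[1:n_coeffs + 1])             # magnitudes -> rotation invariance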

  4. A 3D metrology system for the GMT

    NASA Astrophysics Data System (ADS)

    Rakich, A.; Dettmann, Lee; Leveque, S.; Guisard, S.

    2016-08-01

    The Giant Magellan Telescope (GMT)1 is a 25 m telescope composed of seven 8.4 m "unit telescopes", on a common mount. Each primary and conjugated secondary mirror segment will feed a common instrument interface, their focal planes co-aligned and co-phased. During telescope operation, the alignment of the optical components will deflect due to variations in thermal environment and gravity induced structural flexure of the mount. The ultimate co-alignment and co-phasing of the telescope is achieved by a combination of the Acquisition Guiding and Wavefront Sensing system and two segment edge-sensing systems2. An analysis of the capture range of the wavefront sensing system indicates that it is unlikely that that system will operate efficiently or reliably with initial mirror positions provided by open-loop corrections alone3. The project is developing a Telescope Metrology System (TMS) which incorporates a large number of absolute distance measuring interferometers. The system will align optical components of the telescope to the instrument interface to (well) within the capture range of the active optics wavefront sensing systems. The advantages offered by this technological approach to a TMS, over a network of laser trackers, are discussed. Initial investigations of the Etalon Absolute Multiline Technology™ by Etalon Ag4 show that a metrology network based on this product is capable of meeting requirements. A conceptual design of the system is presented and expected performance is discussed.

  5. Toward a classification of semidegenerate 3D superintegrable systems

    NASA Astrophysics Data System (ADS)

    Escobar-Ruiz, M. A.; Miller, Willard, Jr.

    2017-03-01

    Superintegrable systems of 2nd order in 3 dimensions with exactly 3-parameter potentials are intriguing objects. Next to the nondegenerate 4-parameter potential systems they admit the maximum number of symmetry operators, but their symmetry algebras do not close under commutation and not enough is known about their structure to give a complete classification. Some examples are known for which the 3-parameter system can be extended to a 4th order superintegrable system with a 4-parameter potential and 6 linearly independent symmetry generators. In this paper we use Bôcher contractions of the conformal Lie algebra so(5, C) to itself to generate a large family of 3-parameter systems with 4th order extensions, on a variety of manifolds, all from Bôcher contractions of a single ‘generic’ system on the 3-sphere. We give a contraction scheme relating these systems. The results have myriad applications for finding explicit solutions for both quantum and classical systems.

  6. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  7. A View to the Future: A Novel Approach for 3D-3D Superimposition and Quantification of Differences for Identification from Next-Generation Video Surveillance Systems.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    Techniques of 2D-3D superimposition are widely used in cases of personal identification from video surveillance systems. However, the progressive improvement of 3D image acquisition technology will enable operators also to perform 3D-3D facial superimposition. This study aims at analyzing the possible applications of 3D-3D superimposition to personal identification, although from a theoretical point of view. Twenty subjects underwent a facial 3D scan by stereophotogrammetry twice at different time periods. Scans were superimposed two by two according to nine landmarks, and the root-mean-square (RMS) value of point-to-point distances was calculated. When the two superimposed models belonged to the same individual, the RMS value was 2.10 mm, while it was 4.47 mm in mismatches, a statistically significant difference (p < 0.0001). This experiment shows the potential of 3D-3D superimposition: further studies are needed to ascertain the technical limits that may occur in practice and to improve methods useful in forensic practice.
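
    As an illustration of the underlying computation (a generic landmark-based rigid superimposition followed by an RMS residual, not necessarily the software used in the study), a short NumPy sketch; the arrays landmarks_a and landmarks_b are hypothetical 9 x 3 landmark sets.

        import numpy as np

        def rigid_align(src, dst):
            # Least-squares rotation R and translation t mapping src onto dst (Kabsch).
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t

        def rms_distance(points_a, points_b):
            # Root-mean-square of paired point-to-point distances.
            d = np.linalg.norm(points_a - points_b, axis=1)
            return float(np.sqrt((d ** 2).mean()))

        # R, t = rigid_align(landmarks_a, landmarks_b)   # hypothetical inputs
        # print(rms_distance(landmarks_a @ R.T + t, landmarks_b))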

  8. High definition 3D imaging lidar system using CCD

    NASA Astrophysics Data System (ADS)

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel technique for measuring distance with high-definition three-dimensional imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties of flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted, it triggers the polarization modulator. The pulse is scattered by the target and reflected back to the LIDAR system while the polarization modulator is rotating, so the polarization state imposed on the return is a function of time. The return pulse passes through the polarization modulator in a certain polarization state, and that state is calculated from the intensities of the laser pulses measured by the CCD. Because the relationship between time and polarization state is known in advance, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD and measuring only the energy of a laser pulse to obtain range, a high-resolution three-dimensional image can be acquired by the proposed three-dimensional imaging LIDAR system. Since this system only measures the energy of the laser pulse, a high-bandwidth detector and a high-resolution TDC are not required for high range precision. The proposed method is expected to be an alternative for many three-dimensional imaging LIDAR applications that require high resolution.
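
    To illustrate the principle, the sketch below converts a pair of polarization-analyzed intensities into a range under a simple Malus-law model in which the analysis angle grows linearly with time; the two-channel model, parameter names, and calibration are illustrative assumptions, not the procedure used in the paper.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def range_from_intensities(i_par, i_perp, omega, theta0=0.0):
            # Assumed model: i_par ~ cos^2(theta), i_perp ~ sin^2(theta),
            # with theta = theta0 + omega * time_of_flight (omega in rad/s).
            theta = np.arctan2(np.sqrt(i_perp), np.sqrt(i_par))  # recovered analysis angle
            tof = (theta - theta0) / omega                       # time of flight, s
            return 0.5 * C * tof                                 # round-trip converted to range, m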

  9. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach to integrate this 2-D information into a 3-D representation, based on a new approach called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and TRAPIX 5500 real-time image processor have been used to test the success of the entire system.

  10. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  11. Construction of 3-D Audio Systems: Background, Research, and General Requirements

    DTIC Science & Technology

    2008-10-01

    may be more important to have a spectrum that contains some constant bands with high amplitude (Blauert, 1969/70; Sextant, 1997). In a comparison of...Technology. Retrieved October 2, 2006, from http://www.sensaura.com Sextant. (1997). AUDIS multipurpose AUditory DISplay for 3-D hearing applications

  12. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  13. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.
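
    The geometric data reduction referred to above amounts to solving for a position from measured ranges; a minimal Gauss-Newton sketch is shown below, where the satellite positions, measured ranges, and initial guess are assumed inputs and no measurement weighting or error modeling is included.

        import numpy as np

        def trilaterate(sat_positions, ranges, guess, iters=20):
            # Solve for a station position x minimizing the range residuals
            # r_i - ||x - s_i|| by Gauss-Newton least squares.
            x = np.asarray(guess, dtype=float)
            for _ in range(iters):
                diffs = x - sat_positions               # (N, 3)
                dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
                J = diffs / dists[:, None]              # Jacobian d(dist)/dx
                residual = ranges - dists
                dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
                x = x + dx
                if np.linalg.norm(dx) < 1e-6:
                    break
            return x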

  14. A 3-D fluorescence imaging system incorporating structured illumination technology

    NASA Astrophysics Data System (ADS)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, the Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies, and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the Optigrid™. Post-processing extracted depth information by depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State, with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
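
    For context on how depth-resolved images are typically extracted from grid-illuminated frames, the sketch below applies the classic three-phase square-law demodulation; the depth modulation analysis and calibration of the student-built system described above may differ.

        import numpy as np

        def optical_section(i0, i1, i2):
            # Sectioned image from three frames taken with the grid shifted by
            # 0, 1/3 and 2/3 of a period; the widefield image is (i0 + i1 + i2) / 3.
            i0, i1, i2 = (np.asarray(a, dtype=float) for a in (i0, i1, i2))
            return np.sqrt((i0 - i1) ** 2 + (i1 - i2) ** 2 + (i2 - i0) ** 2)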

  15. Optical characterization of auto-stereoscopic 3D displays: interest of the resolution and comparison to human eye properties

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2014-02-01

    Optical characterization of multi-view auto-stereoscopic displays is realized using high angular resolution viewing angle measurements and imaging measurements. View to view and global qualified binocular viewing space are computed from viewing angle measurements and verified using imaging measurements. Crosstalk uniformity is also deduced and related to display imperfections.

  16. 3D heterostructures and systems for novel MEMS/NEMS

    PubMed Central

    Yakovlevich Prinz, Victor; Alexandrovich Seleznev, Vladimir; Victorovich Prinz, Alexander; Vladimirovich Kopylov, Alexander

    2009-01-01

    In this review, we consider the application of solid micro- and nanostructures of various shapes as building blocks for micro-electro-mechanical or nano-electro-mechanical systems (MEMS/NEMS). We provide examples of practical applications of structures created by MEMS/NEMS fabrication. Novel devices are briefly described, such as a high-power electrostatic nanoactuator, a fast-response tubular anemometer for measuring gas and liquid flows, a nanoprinter, a nanosyringe and optical MEMS/NEMS. The prospects are described for achieving NEMS with tunable quantum properties. PMID:27877295

  17. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  18. On 3D Riesz systems of harmonic conjugates

    NASA Astrophysics Data System (ADS)

    Avetisyan, K.; Gürlebeck, K.; Morais, J.

    2012-11-01

    This note announces some results that will be presented in the forthcoming paper [10]. In continuation of these studies we discuss a constructive approach for the generation of harmonic conjugates to find null solutions to the Riesz system in R^3. This class of solutions coincides with the subclass of monogenic functions with values in the reduced quaternions. The algorithm for harmonic conjugates is presented by means of an integral representation. Additionally, we discuss the weighted (monogenic) Hardy and Bergman spaces on the unit ball in R^3 consisting of functions with values in the reduced quaternions. We end by showing the boundedness of the underlying harmonic conjugation operators in certain weighted spaces.

  19. 3D two-photon lithographic microfabrication system

    DOEpatents

    Kim, Daekeun; So, Peter T. C.

    2011-03-08

    An imaging system is provided that includes an optical pulse generator for providing an optical pulse that has a spectral bandwidth and includes monochromatic waves of different wavelengths. A dispersive element receives a second optical pulse associated with the optical pulse and disperses the second optical pulse at different angles on the surface of the dispersive element depending on wavelength. One or more focal elements receive the dispersed second optical pulse produced on the dispersive element. The one or more focal elements recombine the dispersed second optical pulse at a focal plane on a specimen, where the width of the optical pulse is restored.

  20. High pressure system for 3-D study of elastic anisotropy

    NASA Astrophysics Data System (ADS)

    Lokajicek, T.; Pros, Z.; Klima, K.

    2003-04-01

    A new high pressure system was designed for the study of the elastic anisotropy of condensed matter under high confining pressure up to 700 MPa. Dynamic and static parameters can be measured simultaneously: a) dynamic parameters by ultrasonic sounding, b) static parameters by measuring the deformation of a spherical sample. The measurement is carried out on spherical samples of diameter 50 +/- 0.01 mm. A higher value of confining pressure was reached thanks to the new construction of the sample positioning unit. The positioning unit is equipped with two Portecap step motors, which are located inside the vessel and make it possible to rotate the sphere and a pair of piezoceramic transducers. Sample deformation is measured in the same direction as the ultrasonic travel time. Only electric leads connect the inner part of the high pressure vessel with the surrounding environment. The experimental setup enables simultaneous P-wave ultrasonic sounding, measurement of the current sample deformation at the sounding points, measurement of the current value of confining pressure, and measurement of the current stress-medium temperature. An air-driven Haskel high pressure pump is used to produce confining pressures up to 700 MPa. Ultrasonic signals are recorded by an Agilent 54562 digital scope with a sampling frequency of 100 MHz. Control and measuring software was developed in the Agilent VEE environment running under the MS Win 2000 operating system. The setup was tested by measuring monomineral spherical samples of quartz and corundum, both of trigonal symmetry. The measurements showed that the P-wave velocity range of quartz was 5.7-7.0 km/s and that of corundum 9.7-10.9 km/s. High-pressure-resistant Mesing LVDT transducers together with an Intronix electronic unit were used to monitor sample deformation with an accuracy of 0.1 micron. All test measurements proved the good accuracy of the whole measuring setup. This

  1. Development of a 3-D data acquisition system for human facial imaging

    NASA Astrophysics Data System (ADS)

    Marshall, Stephen J.; Rixon, R. C.; Whiteford, Don N.; Wells, Peter J.; Powell, S. J.

    1990-07-01

    While preparing to conduct human facial surgery, it is necessary to visualise the effects of the proposed surgery on the patient's appearance. This visualisation is of great benefit to both surgeon and patient, and has traditionally been achieved by the manual manipulation of photographs. Technological developments in the areas of computer-aided design and optical sensing now make it possible to construct a computer-based imaging system which can simulate the effects of facial surgery on patients. A collaborative project with the aim of constructing a prototype facial imaging system is under way between the National Engineering Laboratory and St George's Hospital. The proposed system will acquire, display and manipulate 3-dimensional facial images of patients requiring facial surgery. The feasibility of using two NEL-developed optical measurement methods for 3-D facial data acquisition had been established by their successful application to the measurement of dummy heads. The two optical measurement systems, the NEL Auto-MATE moire fringe contouring system and the NEL STRIPE laser scanning triangulation system, were further developed to adapt them for use in facial imaging, and additional tests were carried out in which emphasis was placed on the use of live human subjects. The knowledge gained in the execution of the tests enabled the selection of the more suitable of the two methods for facial data acquisition. A full description of the methods and equipment used in the study will be given. Additionally, work on the effects of the quality and quantity of measurement data on the facial image will be described. Finally, the question of how best to provide display and manipulation of the facial images will be addressed.

  2. Characterization of 3D printing output using an optical sensing system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents the experimental design and initial testing of a system to characterize the progress and performance of a 3D printer. The system is based on five Raspberry Pi single-board computers. It collects images of the 3D printed object, which are compared to an ideal model. The system, while suitable for printers of all sizes, can potentially be produced at a sufficiently low cost to allow its incorporation into consumer-grade printers. The efficacy and accuracy of this system is presented and discussed. The paper concludes with a discussion of the benefits of being able to characterize 3D printer performance.

  3. Generation of Multi-Scale Vascular Network System within 3D Hydrogel using 3D Bio-Printing Technology.

    PubMed

    Lee, Vivian K; Lanzi, Alison M; Haygan, Ngo; Yoo, Seung-Schik; Vincent, Peter A; Dai, Guohao

    2014-09-01

    Although 3D bio-printing technology has great potential in creating complex tissues with multiple cell types and matrices, maintaining the viability of a thick tissue construct for tissue growth and maturation after printing is challenging due to the lack of vascular perfusion. A perfused capillary network can be a solution to this issue; however, construction of a complete capillary network at the single-cell level using existing technology is nearly impossible due to limitations in the time and spatial resolution of the dispensing technology. To address the vascularization issue, we developed a 3D printing method to construct larger (lumen size of ~1 mm) fluidic vascular channels and to create an adjacent capillary network through a natural maturation process, thus providing a feasible solution to connect the capillary network to the large perfused vascular channels. In our model, a microvascular bed was formed between two large fluidic vessels and then connected to the vessels by angiogenic sprouting from the large-channel edge. Our bio-printing technology has great potential in engineering vascularized thick tissues and vascular niches, as the vascular channels are created at the same time as cells and matrices are printed around the channels in desired 3D patterns.

  4. Kinematics of a growth fault/raft system on the West African margin using 3-D restoration

    NASA Astrophysics Data System (ADS)

    Rouby, Delphine; Raillard, Stéphane; Guillocheau, François; Bouroullec, Renaud; Nalpas, Thierry

    2002-04-01

    The ability to quantify the movement history associated with growth structures is crucial in the understanding of fundamental processes such as the growth of folds or faults in 3-D. In this paper, we present an application of an original approach to restore in 3-D a listric growth fault system resulting from gravity-induced extension located on the West African margin. Our goal is to establish the 3-D structural framework and kinematics of the study area. We construct a 3-D geometrical model of the fault system (from 3-D seismic data), then restore six stratigraphic surfaces and reconstruct the 3-D geometry of the system at six incremental steps of its history. The evolution of the growth fault/raft system corresponds to the progressive separation of two rafts by regional extension, resulting in the development of an intervening basin located between them that evolved in three main stages: (1) the rise of an evaporite wall, (2) the development of a symmetric basin as the elevation of the diapir is reduced and it is buried, and (3) the development of asymmetric basins related to two systems of listric faults (the main fault F1 and the graben located between the rollovers and the lower raft). Important features of the growth fault/raft system could only be observed in 3-D and with increments of deformation restored. The rollover anticline (associated with the listric fault F1) is composed of two sub-units separated by an E-W oriented transverse graben, indicating that the displacement field was divergent in map view. The rollover units are located within the overlap area of two fault systems and display a 'mock-turtle' anticline structure. The seaward translation of the lower raft is associated with two successive vertical-axis rotations in opposite senses (clockwise, then counter-clockwise, by about 10°). This results from the fact that the two main fault systems developed successively. Fault system F1 formed during the Upper Albian, and the graben during the Cenomanian

  5. Impact of the 3-D model strategy on science learning of the solar system

    NASA Astrophysics Data System (ADS)

    Alharbi, Mohammed

    The purpose of this mixed-method study, quantitative and descriptive, was to determine whether seventh-grade (first middle grade) students at Saudi schools are able to learn and use the Autodesk Maya software to interact with and create their own 3-D models and animations, and whether their use of the software influences their study habits and their understanding of the school subject matter. The study revealed that there is value to science students in using 3-D software to create 3-D models to complete science assignments. This study also aimed to address middle-school students' ability to learn 3-D software in art class and then ultimately use it in their science class. The success of this study may open the way to considering the impact of 3-D modeling on other school subjects, such as mathematics, art, and geography. When students start using graphic design, including 3-D software, at a young age, they tend to develop personal creativity and skills. The success of this study, if applied in schools, will provide the community with skillful young designers and increase awareness of graphic design and the new 3-D technology. An experimental method was used to answer the quantitative research question: are there significant differences in students' science achievement scores among the three learning methods using 3-D models (no 3-D, premade 3-D, and create 3-D) in a science class taught about the solar system? A descriptive method was used to answer the qualitative research questions concerning the difficulty of learning and using the Autodesk Maya software, the time students take to learn the basic Polygon and Animation features of the software, and the quality of the students' work.

  6. Simplified Night Sky Display System

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10⁶ cm³ in a disassembled state, and it weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  7. A fast 3D reconstruction system with a low-cost camera accessory

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-06-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
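
    As context for the reconstruction routine described above, here is a minimal numpy sketch of classical Lambertian photometric stereo: per-pixel surface normals and albedo are recovered by least squares from several images taken under known light directions. The use of four lights mirrors the described accessory, but the specific directions, the function name, and the synthetic data are illustrative assumptions, not details from the record.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals and albedo from >= 3 grayscale
    images taken from a fixed viewpoint under known, distant light sources.

    images:     array of shape (k, h, w), one image per light direction
    light_dirs: array of shape (k, 3), unit vectors toward each light
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                  # (k, h*w) stacked intensities
    L = np.asarray(light_dirs, dtype=float)    # (k, 3)
    # Lambertian model: I = L @ (albedo * normal); solve in least squares.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic check with four lights (as in the described accessory).
lights = np.array([[0.3, 0.0, 1.0], [-0.3, 0.0, 1.0],
                   [0.0, 0.3, 1.0], [0.0, -0.3, 1.0]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])             # flat, camera-facing patch
imgs = np.clip(lights @ true_n, 0, None).reshape(4, 1, 1) * np.ones((4, 8, 8))
normals, albedo = photometric_stereo(imgs, lights)
```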

  8. A fast 3D reconstruction system with a low-cost camera accessory

    PubMed Central

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-01-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407

  9. Wide angle holographic display system with spatiotemporal multiplexing.

    PubMed

    Kozacki, Tomasz; Finke, Grzegorz; Garbat, Piotr; Zaperty, Weronika; Kujawińska, Małgorzata

    2012-12-03

    This paper presents a wide-angle holographic display system with an extended viewing angle in both the horizontal and vertical directions. The display is constructed from six spatial light modulators (SLMs) arranged on a circle and an additional SLM used for spatiotemporal multiplexing and viewing-angle extension in two perpendicular directions. The additional SLM, which is synchronized with the SLMs on the circle, is placed in the image space. This method increases the effective space-bandwidth product of the display system from 12.4 to 50 megapixels. A software solution based on three Nvidia graphics cards was developed and implemented in order to achieve fast and synchronized display. The experiments presented for both synthetic and real 3D data demonstrate that good-quality images reconstructed over the full field of view of the display can be viewed binocularly.
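
    The quoted space-bandwidth figures are consistent with full-HD panels: assuming six 1920 × 1080 SLMs and a fourfold spatiotemporal multiplexing factor (neither value is stated explicitly in this record, so both are assumptions), the arithmetic works out as follows.

```latex
\[
\underbrace{6 \times 1920 \times 1080}_{\text{spatial tiling of six SLMs}} \approx 12.4\times 10^{6}\ \text{px},
\qquad
\underbrace{4 \times 12.4\times 10^{6}\ \text{px}}_{\text{spatiotemporal multiplexing}} \approx 50\times 10^{6}\ \text{px}.
\]
```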

  10. Development of portable 3D optical measuring system using structured light projection method

    NASA Astrophysics Data System (ADS)

    Aoki, Hiroshi

    2014-05-01

    Three-dimensional (3D) scanners are becoming increasingly common in many industries. However, most of these scanning technologies have drawbacks for practical use due to size, weight, accessibility, and ease of use. Depending on the application, speed, flexibility, and portability can often be deemed more important than accuracy. We have developed a solution to address this market requirement and overcome the aforementioned limitations. To counteract shortcomings such as heavy weight and large size, an optical sensor is used that consists of a laser projector, a camera system, and a multi-touch screen. Structured laser light is projected onto the measured object with a newly designed laser projector employing a single Micro Electro Mechanical Systems (MEMS) mirror. The optical system is optimized for the combination of a laser diode (LD), the MEMS mirror, and the size of the measurement area to secure the ideal contrast of the structured light. We also developed a new calibration algorithm for this MEMS-laser-projector sensor that uses an optical camera model for point-cloud calculation. These technical advancements make the sensor compact, reduce power consumption, and reduce heat generation, while still allowing rapid calculation. Because the measurement principle is structured-light triangulation using phase-shifting technology, resolution is improved. To meet the requirements of practical applications, the optics, electronics, image processing, display, and data management capabilities have been integrated into a single compact unit.
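
    The record attributes the sensor's resolution to structured-light triangulation with phase shifting but gives no algorithmic detail, so the sketch below shows only the standard four-step phase-shifting formula for recovering the wrapped fringe phase from synthetic data. The number of shifts and all names are generic assumptions; phase unwrapping and the phase-to-height conversion depend on the actual projector-camera geometry and are omitted.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 0, 90, 180, 270 degrees.

    Standard four-step phase-shifting formula:
        phi = atan2(I3 - I1, I0 - I2)
    Returns the wrapped phase in (-pi, pi].
    """
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: a linear phase ramp recovered (modulo 2*pi) from four fringes.
x = np.linspace(0, 4 * np.pi, 256)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
fringes = [128 + 100 * np.cos(x + s) for s in shifts]
wrapped = four_step_phase(*fringes)
```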

  11. Development of 3D Woven Ablative Thermal Protection Systems (TPS) for NASA Spacecraft

    NASA Technical Reports Server (NTRS)

    Feldman, Jay D.; Ellerby, Don; Stackpoole, Mairead; Peterson, Keith; Venkatapathy, Ethiraj

    2015-01-01

    The development of a new class of thermal protection system (TPS) materials known as 3D Woven TPS led by the Entry Systems and Technology Division of NASA Ames Research Center (ARC) will be discussed. This effort utilizes 3D weaving and resin infusion technologies to produce heat shield materials that are engineered and optimized for specific missions and requirements. A wide range of architectures and compositions have been produced and preliminarily tested to prove the viability and tailorability of the 3D weaving approach to TPS.

  12. A 3D acquisition system combination of structured-light scanning and shape from silhouette

    NASA Astrophysics Data System (ADS)

    Sun, Changku; Tao, Li; Wang, Peng; He, Li

    2006-05-01

    A robust and accurate three-dimensional (3D) acquisition system is presented, which is a combination of structured-light scanning and shape from silhouette. Using a common world coordinate system, the two groups of point data can be integrated into the final complete 3D model without any separate integration or registration algorithm. The mathematical model of structured-light scanning is described in detail, and the shape-from-silhouette algorithm is introduced as well. The complete 3D model of a cup with a handle is obtained successfully by the proposed technique. Finally, a measurement of a ball bearing is performed, with a measurement precision better than 0.15 mm.
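
    To illustrate the shape-from-silhouette component mentioned above, here is a minimal voxel-carving (visual hull) sketch: a candidate voxel is kept only if it projects inside the silhouette in every calibrated view. The projection matrices and silhouette masks are placeholders; the structured-light branch and the common-world-coordinate integration described in the record are not reproduced.

```python
import numpy as np

def carve_visual_hull(voxels, silhouettes, projections):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels:      (n, 3) candidate voxel centers in world coordinates
    silhouettes: list of binary masks (h, w), one per calibrated view
    projections: list of 3x4 camera projection matrices for the same views
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # (n, 4)
    for mask, P in zip(silhouettes, projections):
        h, w = mask.shape
        uvw = homog @ P.T                      # (n, 3) homogeneous pixel coords
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0
        keep &= hit                            # carve voxels outside any silhouette
    return voxels[keep]
```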

  13. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  14. Three-Dimensional Integrated Characterization and Archiving System (3D-ICAS). Phase 1

    SciTech Connect

    1994-07-01

    3D-ICAS is being developed to support Decontamination and Decommissioning operations for DOE addressing Research Area 6 (characterization) of the Program Research and Development Announcement. 3D-ICAS provides in-situ 3-dimensional characterization of contaminated DOE facilities. Its multisensor probe contains a GC/MS (gas chromatography/mass spectrometry using noncontact infrared heating) sensor for organics, a molecular vibrational sensor for base material identification, and a radionuclide sensor for radioactive contaminants. It will provide real-time quantitative measurements of volatile organics and radionuclides on bare materials (concrete, asbestos, transite); it will provide 3-D display of the fusion of all measurements; and it will archive the measurements for regulatory documentation. It consists of two robotic mobile platforms that operate in hazardous environments linked to an integrated workstation in a safe environment.

  15. Microscale screening systems for 3D cellular microenvironments: platforms, advances, and challenges.

    PubMed

    Montanez-Sauri, Sara I; Beebe, David J; Sung, Kyung Eun

    2015-01-01

    The increasing interest in studying cells using more in vivo-like three-dimensional (3D) microenvironments has created a need for advanced 3D screening platforms with enhanced functionalities and increased throughput. 3D screening platforms that better mimic in vivo microenvironments with enhanced throughput would provide more in-depth understanding of the complexity and heterogeneity of microenvironments. The platforms would also better predict the toxicity and efficacy of potential drugs in physiologically relevant conditions. Traditional 3D culture models (e.g., spinner flasks, gyratory rotation devices, non-adhesive surfaces, polymers) were developed to create 3D multicellular structures. However, these traditional systems require large volumes of reagents and cells, and are not compatible with high-throughput screening (HTS) systems. Microscale technology offers the miniaturization of 3D cultures and allows efficient screening of various conditions. This review will discuss the development, most influential works, and current advantages and challenges of microscale culture systems for screening cells in 3D microenvironments.

  16. IMPROMPTU: a system for automatic 3D medical image-analysis.

    PubMed

    Sundaramoorthy, G; Hoford, J D; Hoffman, E A; Higgins, W E

    1995-01-01

    The utility of three-dimensional (3D) medical imaging is hampered by difficulties in extracting anatomical regions and making measurements in 3D images. Presently, a user is generally forced to use time-consuming, subjective, manual methods, such as slice tracing and region painting, to define regions of interest. Automatic image-analysis methods can ameliorate the difficulties of manual methods. This paper describes a graphical user interface (GUI) system for constructing automatic image-analysis processes for 3D medical-imaging applications. The system, referred to as IMPROMPTU, provides a user-friendly environment for prototyping, testing and executing complex image-analysis processes. IMPROMPTU can stand alone or it can interact with an existing graphics-based 3D medical image-analysis package (VIDA), giving a strong environment for 3D image-analysis, consisting of tools for visualization, manual interaction, and automatic processing. IMPROMPTU links to a large library of 1D, 2D, and 3D image-processing functions, referred to as VIPLIB, but a user can easily link in custom-made functions. 3D applications of the system are given for left-ventricular chamber, myocardial, and upper-airway extractions.

  17. Microscale screening systems for 3D cellular microenvironments: platforms, advances, and challenges

    PubMed Central

    Montanez-Sauri, Sara I.; Beebe, David J.; Sung, Kyung Eun

    2015-01-01

    The increasing interest in studying cells using more in vivo-like three-dimensional (3D) microenvironments has created a need for advanced 3D screening platforms with enhanced functionalities and increased throughput. 3D screening platforms that better mimic in vivo microenvironments with enhanced throughput would provide more in-depth understanding of the complexity and heterogeneity of microenvironments. The platforms would also better predict the toxicity and efficacy of potential drugs in physiologically relevant conditions. Traditional 3D culture models (e.g. spinner flasks, gyratory rotation devices, non-adhesive surfaces, polymers) were developed to create 3D multicellular structures. However, these traditional systems require large volumes of reagents and cells, and are not compatible with high throughput screening (HTS) systems. Microscale technology offers the miniaturization of 3D cultures and allows efficient screening of various conditions. This review will discuss the development, most influential works, and current advantages and challenges of microscale culture systems for screening cells in 3D microenvironments. PMID:25274061

  18. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

    SciTech Connect

    James Menart

    2013-06-07

    This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

  19. 3D and 4D atlas system of living human body structure.

    PubMed

    Suzuki, N; Takatsu, A; Hattori, A; Ezumi, T; Oda, S; Yanai, T; Tominaga, H

    1998-01-01

    A reference system for accessing anatomical information from a complete 3D structure of the whole body "living human", including 4D cardiac dynamics, was reconstructed with 3D and 4D data sets obtained from normal volunteers. With this system, we were able to produce a human atlas in which sectional images can be accessed from any part of the human body interactively by real-time image generation.

  20. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D reconstruction of leaf and stem was highest with the 28-mm lens at the first and third camera positions, which also yielded the largest number of reconstructed fine-scale surface details for leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited for capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  1. Development of Land Analysis System display modules

    NASA Technical Reports Server (NTRS)

    Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie

    1986-01-01

    The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.

  2. Six-Message Electromechanical Display System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.

    2007-01-01

    A proposed electromechanical display system would be capable of presenting as many as six distinct messages. In the proposed system, each display element would include a cylinder having a regular hexagonal cross section.

  3. 3-D Microwell Array System for Culturing Virus Infected Tumor Cells

    PubMed Central

    El Assal, Rami; Gurkan, Umut A.; Chen, Pu; Juillard, Franceline; Tocchio, Alessandro; Chinnasamy, Thiruppathiraja; Beauchemin, Chantal; Unluisler, Sebnem; Canikyan, Serli; Holman, Alyssa; Srivatsa, Srikar; Kaye, Kenneth M.; Demirci, Utkan

    2016-01-01

    Cancer cells have been increasingly grown in pharmaceutical research to understand tumorigenesis and develop new therapeutic drugs. Currently, cells are typically grown using two-dimensional (2-D) cell culture approaches, where the native tumor microenvironment is difficult to recapitulate. Thus, one of the main obstacles in oncology is the lack of proper infection models that recount main features present in tumors. In recent years, microtechnology-based platforms have been employed to generate three-dimensional (3-D) models that better mimic the native microenvironment in cell culture. Here, we present an innovative approach to culture Kaposi’s sarcoma-associated herpesvirus (KSHV) infected human B cells in 3-D using a microwell array system. The results demonstrate that the KSHV-infected B cells can be grown up to 15 days in a 3-D culture. Compared with 2-D, cells grown in 3-D had increased numbers of KSHV latency-associated nuclear antigen (LANA) dots, as detected by immunofluorescence microscopy, indicating a higher viral genome copy number. Cells in 3-D also demonstrated a higher rate of lytic reactivation. The 3-D microwell array system has the potential to improve 3-D cell oncology models and allow for better-controlled studies for drug discovery. PMID:28004818

  4. 3-D Microwell Array System for Culturing Virus Infected Tumor Cells.

    PubMed

    El Assal, Rami; Gurkan, Umut A; Chen, Pu; Juillard, Franceline; Tocchio, Alessandro; Chinnasamy, Thiruppathiraja; Beauchemin, Chantal; Unluisler, Sebnem; Canikyan, Serli; Holman, Alyssa; Srivatsa, Srikar; Kaye, Kenneth M; Demirci, Utkan

    2016-12-22

    Cancer cells have been increasingly grown in pharmaceutical research to understand tumorigenesis and develop new therapeutic drugs. Currently, cells are typically grown using two-dimensional (2-D) cell culture approaches, where the native tumor microenvironment is difficult to recapitulate. Thus, one of the main obstacles in oncology is the lack of proper infection models that recount main features present in tumors. In recent years, microtechnology-based platforms have been employed to generate three-dimensional (3-D) models that better mimic the native microenvironment in cell culture. Here, we present an innovative approach to culture Kaposi's sarcoma-associated herpesvirus (KSHV) infected human B cells in 3-D using a microwell array system. The results demonstrate that the KSHV-infected B cells can be grown up to 15 days in a 3-D culture. Compared with 2-D, cells grown in 3-D had increased numbers of KSHV latency-associated nuclear antigen (LANA) dots, as detected by immunofluorescence microscopy, indicating a higher viral genome copy number. Cells in 3-D also demonstrated a higher rate of lytic reactivation. The 3-D microwell array system has the potential to improve 3-D cell oncology models and allow for better-controlled studies for drug discovery.

  5. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated to develop an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. The system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever arm, and object-space feature parameters was developed, and the effectiveness of this calibration approach was explored.

  6. Development of the clone seedlings handling system using 3D-sensor and force control gripper

    NASA Astrophysics Data System (ADS)

    Hojo, Hirotaka; Takarada, Hiroshi; Hiroyasu, Takahisa; Hata, Seiji

    2005-12-01

    Clone seedlings have an unstable form and are hard to handle. In order to transplant clone seedlings automatically, the functions of 3D shape recognition and force control of the grippers are indispensable. We have introduced a new handling technology that combines 3D measurement using the relative stereo method with a gripping method based on gripping-stroke control for a high-elasticity forceps structure. In this gripping method, the gripping force is controlled according to the shoot diameter, which is measured by the relative stereo method. An experimental clone-seedling transplant system using the new handling technique has been demonstrated.
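
    The record measures shoot diameter by a "relative stereo method" without describing the computation; as a rough illustration only, the sketch below applies the generic rectified-stereo triangulation relation Z = f·B/d, with made-up focal length, baseline, and disparity values that are not taken from the cited system.

```python
# Generic rectified-stereo triangulation: depth Z = f * B / d, where f is the
# focal length in pixels, B the baseline, and d the disparity in pixels.
# The parameter values below are illustrative only.
def disparity_to_depth(disparity_px, focal_px=800.0, baseline_mm=60.0):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px  # depth in mm

# A point seen with a 12-pixel disparity lies at roughly 4000 mm here; the
# gripping force would then be chosen from the shoot diameter measured in 3D.
depth_mm = disparity_to_depth(12.0)
```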

  7. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites, and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface: combinations of individual vertical cuts, possibly linked to a cubical portion of the data volume, and stereoscopic views of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible; • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange with simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  8. Full-resolution autostereoscopic display using an all-electronic tracking/steering system

    NASA Astrophysics Data System (ADS)

    Gaudreau, Jean-Etienne

    2012-03-01

    PolarScreens is developing a new 3D display technology capable of displaying full HD resolution in each eye without the need for glasses. The technology combines a regular backlight, a 120 Hz 3D LCD panel, a vertical patterned active-shutter panel, and a head-tracking system. The technology relies on a 12-sub-pixel-wide alternated pattern encoded in the stereo image to follow the head movement. Alternatively, for a passive 3D display, the barrier is made of vertical-strip polarizer film. This can be applied to any full-resolution polarized display, such as iZ3D, Perceiva, or an active-retarder 3D display. The end result is a full-resolution autostereoscopic display with complete freedom of head movement. There are no mechanical moving parts (such as lenticular arrays) or extra active components to steer the correct left/right image to the user's eyes. The new display has the capability of displaying 2D/3D information on a per-pixel basis, so there is no need for a full-screen or windowed 2D/3D switchable apparatus.

  9. The Effect of Interocular Contrast and Ocular Dominance on the Perception of Motion-in-Depth in 3-D Displays

    DTIC Science & Technology

    1981-08-01

    [Only fragmented figure-legend text is available for this record. The recoverable apparatus legend reads: L1-L4, multiple-element camera lenses; LP, lens pair; AP, artificial pupil; DS, display screen; Mv, mirror that rotates the visual field vertically.]

  10. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect the distribution patterns of abnormalities are more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features, including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D Gray Level Co-occurrence Matrix, extracted from 3D ROIs, based on our previous 2D medical image retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could greatly improve retrieval performance. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
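
    For the geometric features named above, here is a minimal sketch of commonly used Shape Index and Curvedness definitions computed from principal curvatures. It assumes the principal curvatures have already been estimated from the 3D ROI surface; sign and range conventions for the shape index vary across the literature (some authors rescale it to [0, 1]), so the form below is one common choice and not necessarily the exact one used in the record.

```python
import numpy as np

def shape_index_curvedness(k1, k2):
    """Shape index and curvedness from principal curvatures (k1 >= k2 enforced).

    One common form:
        SI = (2 / pi) * arctan((k1 + k2) / (k1 - k2))
        CV = sqrt((k1**2 + k2**2) / 2)
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    eps = 1e-12                                       # guard the k1 == k2 case
    si = (2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2 + eps))
    cv = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    return si, cv

# Example: equal principal curvatures of 0.5 mm^-1 give SI ~= 1.0 and CV = 0.5
# under this sign convention.
si, cv = shape_index_curvedness(0.5, 0.5)
```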

  11. Imaging the behavior of molecules in biological systems: breaking the 3D speed barrier with 3D multi-resolution microscopy.

    PubMed

    Welsher, Kevin; Yang, Haw

    2015-01-01

    The overwhelming effort in the development of new microscopy methods has been focused on increasing the spatial and temporal resolution in all three dimensions to enable the measurement of the molecular-scale phenomena at the heart of biological processes. However, there exists a significant speed barrier to existing 3D imaging methods, which is associated with the overhead required to image large volumes. This overhead can be overcome, providing nearly unlimited temporal precision, by simply focusing on a single molecule or particle via real-time 3D single-particle tracking and the newly developed 3D Multi-resolution Microscopy (3D-MM). Here, we investigate the optical and mechanical limits of real-time 3D single-particle tracking in the context of other methods. In particular, we investigate the use of an optical cantilever for position-sensitive detection, finding that this method yields system magnifications of over 3000×. We also investigate the ideal PID control parameters and their effect on the power spectrum of simulated trajectories. Taken together, these data suggest that the speed limit in real-time 3D single-particle tracking is a result of slow piezoelectric stage response rather than optical sensitivity or PID control.
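
    Since the record discusses PID control parameters for the tracking feedback loop, here is a minimal discrete PID sketch driving a toy one-dimensional "stage" that follows a drifting particle. The gains, update rate, and pure-integrator stage model are illustrative assumptions, not the authors' values or hardware model.

```python
import numpy as np

class PID:
    """Minimal discrete PID controller (illustrative gains, not the paper's)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy 1D tracking loop: the "stage" chases a particle drifting at constant speed.
dt = 1e-3                              # assumed 1 kHz feedback rate
pid = PID(kp=20.0, ki=100.0, kd=0.0, dt=dt)
stage, trace = 0.0, []
for step in range(2000):
    particle = 1e-3 * step             # drifting particle position (arbitrary units)
    error = particle - stage           # tracking error reported by the microscope
    stage += pid.update(error) * dt    # stage modeled as a pure velocity integrator
    trace.append(error)
residual = np.std(trace[1000:])        # steady-state tracking error
```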

  12. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small-SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The small-SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³, and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  13. Video display engineering and optimization system

    NASA Technical Reports Server (NTRS)

    Larimer, James (Inventor)

    1997-01-01

    A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of the display device circuit, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules that permit analysis of design trade-offs in terms of their visual impact on a viewer looking at the display.

  14. A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note.

    PubMed

    Abe, Yuichiro; Sato, Shigenobu; Kato, Koji; Hyakumachi, Takahiko; Yanagibashi, Yasushi; Ito, Manabu; Abumi, Kuniyoshi

    2013-10-01

    Augmented reality (AR) is an imaging technology by which virtual objects are overlaid onto images of real objects captured in real time by a tracking camera. This study aimed to introduce a novel AR guidance system called virtual protractor with augmented reality (VIPAR) to visualize a needle trajectory in 3D space during percutaneous vertebroplasty (PVP). The AR system used for this study comprised a head-mount display (HMD) with a tracking camera and a marker sheet. An augmented scene was created by overlaying the preoperatively generated needle trajectory path onto a marker detected on the patient using AR software, thereby providing the surgeon with augmented views in real time through the HMD. The accuracy of the system was evaluated by using a computer-generated simulation model in a spine phantom and also evaluated clinically in 5 patients. In the 40 spine phantom trials, the error of the insertion angle (EIA), defined as the difference between the attempted angle and the insertion angle, was evaluated using 3D CT scanning. Computed tomography analysis of the 40 spine phantom trials showed that the EIA in the axial plane significantly improved when VIPAR was used compared with when it was not used (0.96° ± 0.61° vs 4.34° ± 2.36°, respectively). The same held true for EIA in the sagittal plane (0.61° ± 0.70° vs 2.55° ± 1.93°, respectively). In the clinical evaluation of the AR system, 5 patients with osteoporotic vertebral fractures underwent VIPAR-guided PVP from October 2011 to May 2012. The postoperative EIA was evaluated using CT. The clinical results of the 5 patients showed that the EIA in all 10 needle insertions was 2.09° ± 1.3° in the axial plane and 1.98° ± 1.8° in the sagittal plane. There was no pedicle breach or leakage of polymethylmethacrylate. VIPAR was successfully used to assist in needle insertion during PVP by providing the surgeon with an ideal insertion point and needle trajectory through the HMD. The findings indicate

  15. Low-cost 3D systems: suitable tools for plant phenotyping.

    PubMed

    Paulus, Stefan; Behmann, Jan; Mahlein, Anne-Katrin; Plümer, Lutz; Kuhlmann, Heiner

    2014-02-14

    Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas in this context, laser scanning needs huge investment costs. The aim of the present study is a comparison between two current low-cost 3D imaging systems and a high-precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets, and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor is able to replace an expensive laser scanner in many plant phenotyping scenarios.

  16. Low-Cost 3D Systems: Suitable Tools for Plant Phenotyping

    PubMed Central

    Paulus, Stefan; Behmann, Jan; Mahlein, Anne-Katrin; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas in this context, laser scanning needs huge investment costs. The aim of the present study is a comparison between two current 3D imaging low-cost systems and a high precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect Device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor is able to replace an expensive laser scanner in many plant phenotyping scenarios. PMID:24534920

  17. 2D and 3D Mechanobiology in Human and Nonhuman Systems.

    PubMed

    Warren, Kristin M; Islam, Md Mydul; LeDuc, Philip R; Steward, Robert

    2016-08-31

    Mechanobiology involves the investigation of mechanical forces and their effect on the development, physiology, and pathology of biological systems. The human body has garnered much attention from many groups in the field, as mechanical forces have been shown to influence almost all aspects of human life ranging from breathing to cancer metastasis. Beyond being influential in human systems, mechanical forces have also been shown to impact nonhuman systems such as algae and zebrafish. Studies of nonhuman and human systems at the cellular level have primarily been done in two-dimensional (2D) environments, but most of these systems reside in three-dimensional (3D) environments. Furthermore, outcomes obtained from 3D studies are often quite different than those from 2D studies. We present here an overview of a select group of human and nonhuman systems in 2D and 3D environments. We also highlight mechanobiological approaches and their respective implications for human and nonhuman physiology.

  18. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; He, Yunbo; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structured-light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system combines the telecentric imaging model and the Scheimpflug condition on the basis of structured-light stereoscopic vision, offering a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm × 13 mm × 6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectronic elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  19. Implementation of a fully 3D system model for brain SPECT with fan- beam-collimator OSEM reconstruction with 3D total variation regularization

    NASA Astrophysics Data System (ADS)

    Ye, Hongwei; Krol, Andrzej; Lipson, Edward D.; Lu, Yao; Xu, Yuesheng; Lee, Wei; Feiglin, David H.

    2007-03-01

    In order to improve the quality of tomographically reconstructed images, we have implemented a fully 3D reconstruction using an ordered-subsets expectation maximization (OSEM) algorithm for fan-beam collimator (FBC) SPECT, along with a volumetric system model, the fan-volume system model (FVSM); a modified attenuation compensation; a 3D depth- and angle-dependent resolution and sensitivity correction; and 3D total variation (TV) regularization. SPECT data were acquired in a 128×64 matrix, in 120 views with a circular orbit. The numerical Zubal brain phantom was used to simulate an FBC HMPAO Tc-99m brain SPECT scan, and a low-noise, scatter-free projection dataset was obtained using the SimSET Monte Carlo package. A SPECT scan of a mini-Defrise phantom and HMPAO brain SPECT scans of five patients were acquired with a triple-head gamma camera (Triad 88) equipped with a low-energy high-resolution (LEHR) FBC. The reconstructed images, obtained using clinical filtered back projection (FBP), OSEM with a line-length system model (LLSM) and 3D TV regularization, and OSEM with FVSM and 3D TV regularization, were quantitatively studied. Overall improvement in image quality was observed, including better axial and transaxial resolution, better integral uniformity, a higher contrast-to-noise ratio between the gray matter and the white matter, and better accuracy and lower bias for OSEM-FVSM compared with OSEM-LLSM and clinical FBP.
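
    To make the reconstruction step concrete, below is a minimal, unregularized ordered-subsets EM sketch with a generic dense system matrix. It shows only the subset-wise multiplicative update; the fan-volume system model, attenuation compensation, resolution modeling, and 3D TV regularization described in the record are not included, and the subset scheme and names are illustrative assumptions.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Ordered-subsets EM for y ~ Poisson(A @ x), with A nonnegative.

    A: (m, n) system matrix mapping image x to projection measurements y.
    Subsets are formed by striding over projection rows; each sub-iteration
    applies the multiplicative EM update restricted to one subset.
    """
    m, n = A.shape
    x = np.ones(n)                                     # flat initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            sens = As.sum(axis=0)                      # subset sensitivity image
            ratio = y[rows] / np.maximum(As @ x, eps)  # measured / estimated
            x *= (As.T @ ratio) / np.maximum(sens, eps)
    return x

# Tiny synthetic check: recover a 2-voxel "image" from 8 noiseless projections.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(8, 2))
x_true = np.array([3.0, 1.5])
x_hat = osem(A, A @ x_true, n_subsets=4, n_iter=50)
```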

  20. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
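
    The record relies on the well-known Iterative Closest Point algorithm after geometric-primitive pre-processing; below is a minimal point-to-point ICP sketch with an SVD-based rigid-transform step, assuming a reasonable initial alignment. The pre-processing and the structured-light capture described in the record are not reproduced, and the function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ~= src @ R.T + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(template, scene, n_iter=30):
    """Point-to-point ICP aligning `template` (n, 3) to `scene` (m, 3)."""
    tree = cKDTree(scene)
    src = template.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)          # closest scene point for each template point
        R, t = best_rigid_transform(src, scene[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total               # pose of the template in the scene
```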