Science.gov

Sample records for 3d display based

  1. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    Soltan, P.; Trias, J.; Robinson, W.; Dahlke, W.

    Laser-generated 3D volumetric images are produced on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye. Cited references include "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams and Felix Garcia, Jr.

  2. 3D display based on parallax barrier with multiview zones.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wang, Jun

    2014-03-01

    A 3D display based on a parallax barrier with multiview zones is proposed. The display consists of a 2D display panel and a parallax barrier. The basic element of the parallax barrier has three narrow slits, which reveal three columns of subpixels on the 2D display panel and form 3D pixels. The parallax barrier provides multiview zones. In these multiview zones, the proposed 3D display can use a small number of views to achieve a high density of views, so the distance between views is the same as in conventional displays with more views. Because the proposed display has fewer views, more 3D pixels are available for the 3D images, and the resolution and brightness are therefore higher than those of conventional displays. A 12-view prototype of the proposed 3D display is developed, and it provides the same density of views as a conventional display with 28 views. Experimental results show that the proposed display has higher resolution and brightness than the conventional one, while crosstalk is kept at a low level.
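
    As a hedged illustration of the kind of geometry involved (not the paper's specific multiview-zone design), the short sketch below computes the two quantities a basic parallax-barrier layout is built around: the barrier pitch and the lateral spacing between adjacent views. The subpixel pitch, view count, barrier gap, and viewing distance are made-up assumptions.

    ```python
    # Minimal sketch of standard parallax-barrier viewing geometry, not the
    # multiview-zone design proposed in the paper.  All numbers are
    # illustrative assumptions.

    def barrier_design(subpixel_pitch_mm, n_views, gap_mm, viewing_distance_mm):
        # The barrier pitch is slightly smaller than n_views subpixels so that
        # all view zones converge at the design viewing distance.
        barrier_pitch = n_views * subpixel_pitch_mm * viewing_distance_mm / (
            viewing_distance_mm + gap_mm)
        # Lateral spacing between adjacent views at the viewing distance.
        view_spacing = subpixel_pitch_mm * viewing_distance_mm / gap_mm
        return barrier_pitch, view_spacing

    pitch, spacing = barrier_design(subpixel_pitch_mm=0.1, n_views=12,
                                    gap_mm=1.0, viewing_distance_mm=600.0)
    print(f"barrier pitch = {pitch:.4f} mm, view spacing = {spacing:.1f} mm")
    ```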

  3. Integral imaging based 3D display of holographic data.

    PubMed

    Yöntem, Ali Özgür; Onural, Levent

    2012-10-22

    We propose a method, and present applications of it, that converts a diffraction pattern into an elemental image set for display on an integral-imaging-based setup. We generate elemental images from diffraction calculations as an alternative to the commonly used ray tracing methods, which do not accommodate interference and diffraction phenomena. The proposed method enables us to obtain elemental images from a holographic recording of a 3D object or scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We show three examples, one of which is digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup, which uses a digital lenslet array, and numerical reconstructions, again using diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement.
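
    As a hedged sketch of the kind of diffraction calculation involved (the paper's actual elemental-image generation has more steps), the code below propagates a complex hologram to an assumed lenslet-array plane with the angular-spectrum method and cuts the propagated field into elemental tiles, one per lenslet. The grid size, pixel pitch, wavelength, and propagation distance are assumptions.

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, pitch, z):
        """Propagate a sampled complex field by distance z (angular spectrum)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms clamped
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    def to_elemental_tiles(image, lenslet_px):
        """Cut the field behind the lenslet array into one tile per lenslet."""
        ny, nx = image.shape
        return [image[y:y + lenslet_px, x:x + lenslet_px]
                for y in range(0, ny, lenslet_px)
                for x in range(0, nx, lenslet_px)]

    hologram = np.exp(1j * 2 * np.pi * np.random.rand(512, 512))   # toy data
    at_lenslets = angular_spectrum(hologram, wavelength=532e-9,
                                   pitch=8e-6, z=5e-3)
    elemental = to_elemental_tiles(np.abs(at_lenslets) ** 2, lenslet_px=32)
    print(len(elemental), "elemental tiles of shape", elemental[0].shape)
    ```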

  4. Future of photorefractive based holographic 3D display

    NASA Astrophysics Data System (ADS)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

    The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008 [1]. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full-color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video rate.

  5. Special subpixel arrangement-based 3D display with high horizontal resolution.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wu, Fei

    2014-11-01

    A special subpixel arrangement-based 3D display is proposed. The display consists of a 2D display panel and a parallax barrier. On the 2D display panel, the subpixels have a special arrangement, so they can redefine how color pixels are formed. This subpixel arrangement triples the horizontal resolution of a conventional 2D display panel. Therefore, when these pixels are modulated by the parallax barrier, the resulting 3D images also have triple horizontal resolution. A prototype of this display is developed. Experimental results show that the display with triple horizontal resolution produces a better display effect than the conventional one.

  6. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping that optimizes visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range suitable for comfortable S3D viewing. Towards this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
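
    As a hedged sketch of the global stage only (the paper's two-stage global and local optimization is not reproduced), the code below linearly remaps an input depth map into an assumed comfortable disparity range centred on a chosen zero-disparity plane. The ranges and the zero-plane position are illustrative assumptions.

    ```python
    import numpy as np

    def remap_depth(depth, comfort_near, comfort_far, zero_plane=0.5):
        """Linearly rescale a depth map into a target disparity range and
        shift it so the chosen zero-disparity plane maps to zero disparity."""
        d_min, d_max = float(depth.min()), float(depth.max())
        norm = (depth - d_min) / max(d_max - d_min, 1e-9)        # 0..1
        remapped = comfort_near + norm * (comfort_far - comfort_near)
        shift = comfort_near + zero_plane * (comfort_far - comfort_near)
        return remapped - shift

    depth_map = np.random.rand(480, 640) * 255.0                 # toy depth map
    disparity = remap_depth(depth_map, comfort_near=-10.0, comfort_far=20.0)
    print(disparity.min(), disparity.max())
    ```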

  7. Front and rear projection autostereoscopic 3D displays based on lenticular sheets

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Zang, Shang-Fei; Qi, Lin

    2015-03-01

    A front projection autostereoscopic display is proposed. The display is composed of eight projectors and a 3D-image-guided screen consisting of a lenticular sheet and a retro-reflective diffusion screen. Based on optical multiplexing and de-multiplexing, the optical functions of the 3D-image-guided screen are parallax image interlacing and view separation, which make it capable of reconstructing 3D images without quality degradation from the front direction. The operating principle, the optical design equations, and the correction method for the parallax images are given. A prototype of the front projection autostereoscopic display is developed, which enhances the brightness and 3D perception and improves space efficiency. The performance of this prototype is evaluated by measuring the luminance and crosstalk distributions along the horizontal direction at the optimum viewing distance. We also propose a rear projection autostereoscopic display consisting of eight projectors, a projection screen, and two lenticular sheets. The operating principle and calculation equations are described in detail, and the parallax images are corrected by means of homography. A prototype of the rear projection autostereoscopic display is developed. The normalized luminance distributions of the viewing zones obtained from measurements agree well with the designed values. The prototype presents high-resolution, high-brightness 3D images. The research has potential applications in commercial entertainment and movies requiring realistic 3D perception.

  8. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    Several studies on gaze-tracking techniques using a monocular or stereo camera have been reported. The most widely used gaze estimation techniques are based on PCCR (Pupil Center and Corneal Reflection) and are designed for gaze tracking on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in a 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems, but both gaze direction and gaze depth must be estimated for gaze-based interaction in a 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glasses-free stereo display. The estimation of gaze direction and gaze depth from both eyes is a new and important research topic for gaze-based 3D interaction. We present our approach for estimating gaze direction and gaze depth and show experimental results.
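
    As a hedged illustration of one common way to obtain gaze depth (not the authors' PCCR-based pipeline, whose details are not given here), the code below intersects the gaze rays of the two eyes approximately by taking the midpoint of their common perpendicular. Eye positions and gaze directions are made-up numbers.

    ```python
    import numpy as np

    def gaze_convergence_point(p_left, d_left, p_right, d_right):
        """Closest-approach midpoint of two gaze rays; gaze depth is its z value."""
        d_left = d_left / np.linalg.norm(d_left)
        d_right = d_right / np.linalg.norm(d_right)
        w0 = p_left - p_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        d, e = d_left @ w0, d_right @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:                 # parallel rays: no convergence
            return None
        t_left = (b * e - c * d) / denom
        t_right = (a * e - b * d) / denom
        return 0.5 * ((p_left + t_left * d_left) + (p_right + t_right * d_right))

    left_eye = np.array([-32.0, 0.0, 0.0])    # mm, toy interocular geometry
    right_eye = np.array([32.0, 0.0, 0.0])
    point = gaze_convergence_point(left_eye, np.array([0.05, 0.0, 1.0]),
                                   right_eye, np.array([-0.05, 0.0, 1.0]))
    print("estimated gaze point:", point)     # gaze depth = z component
    ```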

  9. The optimizations of CGH generation algorithms based on multiple GPUs for 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Yang, Dan; Liu, Juan; Zhang, Yingxi; Li, Xin; Wang, Yongtian

    2016-10-01

    Holographic display is considered a promising display technology. Currently, the low-speed generation of holograms with large data volumes is one of the crucial bottlenecks for three-dimensional (3D) dynamic holographic display. To solve this problem, an acceleration method based on the look-up-table point-source method is presented for a multi-GPU computation platform. The computer-generated hologram (CGH) acquisition is sped up by offline file loading and inline calculation optimization, where a phase-only CGH with gigabytes of data is encoded to record an object with 10 MB of sampling data. Both numerical simulation and optical experiment demonstrate that CGHs with 1920×1080 resolution generated by the proposed method can be applied successfully to the reconstruction of 3D objects with high quality. It is believed that CGHs with huge data volumes can be generated at high speed by the proposed method for 3D dynamic holographic display in the near future.
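
    As a hedged, single-threaded sketch of the look-up-table point-source idea behind the paper's multi-GPU acceleration (not the authors' implementation), the code below precomputes one spherical-wave fringe per depth layer, shifts and accumulates it for each object point, and keeps only the phase of the sum as a phase-only CGH. Resolution, pixel pitch, wavelength, and object points are assumptions.

    ```python
    import numpy as np

    H, W = 1080, 1920            # SLM resolution, assumed
    PITCH = 8e-6                 # SLM pixel pitch in metres, assumed
    WAVELEN = 532e-9             # wavelength in metres, assumed

    def principal_fringe(z):
        """Oversized spherical-wave phase of a point source at depth z."""
        y, x = np.indices((2 * H, 2 * W))
        x = (x - W) * PITCH
        y = (y - H) * PITCH
        r = np.sqrt(x * x + y * y + z * z)
        return np.exp(1j * 2 * np.pi / WAVELEN * r)

    def cgh_from_points(points):
        """points: list of (px, py, z, amplitude), px/py in pixel units."""
        field = np.zeros((H, W), dtype=complex)
        lut = {}
        for px, py, z, amp in points:
            if z not in lut:                   # one fringe per depth layer
                lut[z] = principal_fringe(z)
            fringe = lut[z]
            # Shift the oversized fringe so its centre lands on (px, py).
            y0, x0 = H - int(py), W - int(px)
            field += amp * fringe[y0:y0 + H, x0:x0 + W]
        return np.angle(field)                 # phase-only (kinoform) CGH

    cgh = cgh_from_points([(960, 540, 0.20, 1.0), (700, 300, 0.25, 0.8)])
    print(cgh.shape, cgh.dtype)
    ```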

  10. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

    We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, the standards should be based not only on the mechanisms of 3D displays but also on human factors for stereopsis. However, there is no common understanding of what a 3D display should be, and this situation makes developing the standards difficult. In this paper, to understand the mechanism and human factors, we focus on the double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, whether it is unwanted or not depends on the situation, and some double images are allowable. We attempt to classify double images into unwanted and allowable ones in terms of the display mechanism and the visual ergonomics of stereopsis. The issues associated with the double image are closely related to performance characteristics of the autostereoscopic display. We also propose performance characteristics and measurement and analysis methods to represent interocular crosstalk and motion parallax.

  11. Assessment of eye fatigue caused by 3D displays based on multimodal measurements.

    PubMed

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-09-04

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. Previous studies on eye fatigue related to 3D display use have been conducted; however, most of them employed a limited number of measurement modalities, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, to measure BR accurately and in a manner convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures the blinks of both eyes; third, changes in FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations among the EEG signal, eye BR, FT, and SE score based on the t-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively.
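
    As a hedged sketch of the kinds of statistics named in the abstract (a paired t-test on before/after values, a correlation matrix across modalities, and Cohen's d as an effect size), the code below runs them on synthetic data rather than the study's measurements. The sample size and distributions are made up.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 30                                             # subjects, assumed
    before = rng.normal(10.0, 2.0, n)                  # e.g. blink rate before viewing
    after = before + rng.normal(1.5, 1.0, n)           # fatigue raises the rate

    t_stat, p_value = stats.ttest_rel(after, before)   # paired t-test
    cohens_d = (after - before).mean() / (after - before).std(ddof=1)

    # Correlation matrix across four toy "modality change" columns
    # (standing in for EEG, BR, FT, and SE deltas).
    modalities = np.column_stack([rng.normal(size=n) for _ in range(4)])
    corr = np.corrcoef(modalities, rowvar=False)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
    print("correlation matrix:\n", np.round(corr, 2))
    ```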

  12. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user-interaction technologies enables more effective visualization of 3D information. It yields a realistic representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with real-time user-interaction capability. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested based upon multiple projectors and a horizontal optically anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user-interaction system.

  13. Field lens multiplexing in holographic 3D displays by using Bragg diffraction based volume gratings

    NASA Astrophysics Data System (ADS)

    Fütterer, G.

    2016-11-01

    Applications that can profit from holographic 3D displays include the visualization of 3D data, computer-integrated manufacturing, 3D teleconferencing, and mobile infotainment. However, one problem of holographic 3D displays, which are, e.g., based on space-bandwidth-limited reconstruction of wave segments, is to realize a small form factor. Another problem is to provide a reasonably large volume for user placement, that is, an acceptable freedom of movement. Both problems should be solved without decreasing the image quality of the virtual and real object points generated within the 3D display volume. A diffractive optical design using thick hologram gratings, which can be referred to as Bragg-diffraction-based volume gratings, can provide a small form factor and a high-definition, natural viewing experience of 3D objects. A large collimated wave can be provided by an anamorphic backlight unit. The complex-valued spatial light modulator adds local curvatures to the wave field it is illuminated with. The modulated wave field is focused onto the user plane by using a volume-grating-based field lens. Active liquid crystal gratings provide 1D fine tracking of approximately ±8°. Diffractive multiplexing has to be implemented for each color and for a set of focus functions providing coarse tracking. Boundary conditions of the diffractive multiplexing are explained with regard to the display layout and by using coupled wave theory (CWT). Aspects of diffractive crosstalk and its suppression are discussed, including longitudinally apodized volume gratings.

  14. Principle and characteristics of 3D display based on random source constructive interference.

    PubMed

    Li, Zhiyang

    2014-07-14

    The paper discusses the principle and characteristics of 3D display based on random source constructive interference (RSCI). The voxels of discrete 3D images are formed in the air via constructive interference of spherical light waves emitted by point light sources (PLSs) arranged at random positions to suppress high-order diffraction. The PLSs may be created by two liquid crystal panels sandwiched between two micro-lens arrays. The point spread function of the system shows that it is able to reconstruct voxels with diffraction-limited resolution over a large field width and depth, and the high resolution was confirmed by experiments. Theoretical analysis also shows that the system can provide 3D image contrast and gray levels no lower than those of the liquid crystal panels. Compared with 2D display, it needs only additional depth information, which brings only about a 30% data increase.
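
    As a hedged sketch of the constructive-interference principle (toy source count, aperture, and wavelength, not the paper's parameters), the code below gives each randomly placed point source the phase that makes its spherical wave arrive in phase at a target voxel, and evaluates the intensity along a line through that voxel.

    ```python
    import numpy as np

    WAVELEN = 532e-9
    K = 2 * np.pi / WAVELEN

    rng = np.random.default_rng(1)
    n_src = 2000
    sources = np.column_stack([rng.uniform(-5e-3, 5e-3, n_src),   # x (m)
                               rng.uniform(-5e-3, 5e-3, n_src),   # y (m)
                               np.zeros(n_src)])                  # source plane z = 0
    voxel = np.array([0.0, 0.0, 0.10])                            # target voxel (m)

    # Phase of each source chosen so all waves arrive in phase at the voxel.
    phase0 = -K * np.linalg.norm(sources - voxel, axis=1)

    # Evaluate the intensity along x through the voxel plane.
    xs = np.linspace(-50e-6, 50e-6, 201)
    intensity = []
    for x in xs:
        p = np.array([x, 0.0, 0.10])
        r = np.linalg.norm(sources - p, axis=1)
        field = np.sum(np.exp(1j * (K * r + phase0)) / r)
        intensity.append(abs(field) ** 2)

    peak = xs[int(np.argmax(intensity))]
    print(f"peak at x = {peak * 1e6:.1f} um")   # expected near 0, i.e. at the voxel
    ```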

  15. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing computer-generated holograms (CGHs) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is extracted efficiently in double-circle and four-circle shapes to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform using a conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference beam from a laser source, the amplitude and phase information included in the CGH is reconstructed through diffraction of the light modulated by the LCD.
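
    As a hedged sketch of the conjugate-symmetric-extension step only (the paper's assembly of the spectrum from azimuth-scanned projection images is not reproduced), the code below uses NumPy's irfft2, which enforces exactly the Hermitian symmetry needed to turn a complex half-spectrum into a real-valued hologram. The spectrum here is random toy data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    half_spectrum = (rng.normal(size=(256, 129)) +
                     1j * rng.normal(size=(256, 129)))     # toy half-spectrum

    # irfft2 treats the input as the non-negative-frequency half of a
    # conjugate-symmetric spectrum, so the resulting hologram is real-valued.
    hologram = np.fft.irfft2(half_spectrum, s=(256, 256))
    print(hologram.dtype, hologram.shape)                  # float64 (256, 256)

    # The full Fourier transform of the real hologram contains the encoded
    # spectrum together with its conjugate-symmetric twin.
    full_spectrum = np.fft.fft2(hologram)
    print(full_spectrum.shape)
    ```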

  16. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel autostereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. A high density of viewing points provides smooth motion parallax and larger image depth without blurring. The basic principle of the stereoscopic display is described first, and then the design architecture, including hardware and software, is presented. The system consists of a frontal projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one is used as the main controller, and the other five client PCs process 30 channel signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, and distortion correction. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.

  17. Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements

    PubMed Central

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-01-01

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. Previous studies on eye fatigue related to 3D display use have been conducted; however, most of them employed a limited number of measurement modalities, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, to measure BR accurately and in a manner convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures the blinks of both eyes; third, changes in FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations among the EEG signal, eye BR, FT, and SE score based on the t-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively. PMID:25192315

  18. Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Wilhelm, Dirk; Reiser, Silvano; Kohn, Nils; Witte, Michael; Leiner, Ulrich; Mühlbach, Lothar; Ruschin, Detlef; Reiner, Wolfgang; Feussner, Hubertus

    2014-03-01

    Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations such as eye strain, diplopia, and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study with 48 test subjects was conducted to investigate whether surgeons can benefit from the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display, and a mirror-based, theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert group with high experience in laparoscopic procedures. Each of them had to perform a comparable laparoscopic suturing task. Multiple performance parameters, such as task completion time and stitching precision, were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity, and economy of movement. The NASA task load index was used to assess mental workload, and subjective ratings were added to assess the usability, comfort, and image quality of each display. Almost all performance parameters were superior for the glasses-based 3D display compared to the 2D and autostereoscopic displays, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence of additional potential for 3D visualization systems with higher resolution and motion parallax presentation.

  19. 2D/3D switchable displays

    NASA Astrophysics Data System (ADS)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for wide market acceptance of 3D displays is the ability to switch between 3D and full-resolution 2D. In this paper we present a robust and cost-effective concept for an autostereoscopic switchable 2D/3D display. The display is based on an LCD panel equipped with switchable LC-filled lenticular lenses. We discuss 3D image quality, with a focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in cases where slanting the lenticulars and optimizing the lens design are not sufficient. We discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics, and the residual lens effect.

  20. Design of extended viewing zone at autostereoscopic 3D display based on diffusing optical element

    NASA Astrophysics Data System (ADS)

    Kim, Min Chang; Hwang, Yong Seok; Hong, Suk-Pyo; Kim, Eun Soo

    2012-03-01

    In this paper, to realize a glasses-free 3D display as the next step beyond current glasses-type 3D displays, a viewing zone is designed for a 3D display using a DOE (Diffusing Optical Element). The viewing zone of the proposed method is larger than that of the current parallax barrier or lenticular methods. The proposed method is shown to enable the expansion and adjustment of the viewing zone area according to the viewing distance.

  1. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    PubMed

    Hongen Liao; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

    We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate the IP/IV elemental images. The images can be viewed from each viewpoint within a referential viewing area, and the elemental images are reconstructed from the rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images with an image depth of several meters in front of and behind the display that appear three-dimensional even when viewed from a distance.

  2. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-06

    Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from all 360 degrees around with correct occlusion effects. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate the display performance. A prototype was built, and real 3D color animated images have been presented vividly. The experimental results verify the capability of this method.

  3. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented-image-based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of the parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel, which was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. To evaluate the uniformity of image brightness, we applied optical ray tracing simulations that take the effects of PB orientation misalignment into account, and compared the simulation results with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones; however, this was contradicted by the real experimental results. We offer a quantitative treatment of the illuminance uniformity of the view images to estimate the misalignment of the PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of the PB orientation, due to practical restrictions on adjustment accuracy, can induce substantial non-uniformity of the view images' brightness. We find that image brightness non-uniformity depends critically on the misalignment of the PB angular orientation, even for misalignments as small as 0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10^-2 to 10^-3 degrees can greatly improve the brightness uniformity.

  4. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on an optically addressed bi-stable display that does not need any power to hold an image after it has been uploaded. Recently, the demand for 3D image display has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity in image processing at both the hardware and software levels. In this Letter, we disclose a concept for a 3D-ORWLCD that divides the given image into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from the different domains of the image in different ways. Polaroid glasses can then be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied to many applications, e.g., 3D bi-stable displays, security elements, etc.

  5. Membrane-mirror-based display for viewing 2D and 3D images

    NASA Astrophysics Data System (ADS)

    McKay, Stuart; Mason, Steven; Mair, Leslie S.; Waddell, Peter; Fraser, Simon M.

    1999-05-01

    Stretchable Membrane Mirrors (SMMs) have been developed at the University of Strathclyde as a cheap, lightweight, variable-focal-length alternative to conventional fixed-curvature glass-based optics. An SMM uses a thin sheet of aluminized polyester film stretched over a specially shaped frame, forming an airtight cavity behind the membrane. Removal of air from that cavity causes the resulting air pressure difference to force the membrane into a concave shape, so controlling the pressure difference acting over the membrane controls the curvature, or f/No., of the mirror. Mirrors from 0.15 m to 1.2 m in diameter have been constructed at the University of Strathclyde. The use of lenses and mirrors to project real images in space is perhaps one of the simplest forms of 3D display. When using conventional optics, however, there are severe financial restrictions on the size of image-forming element that may be used, hence the appeal of an SMM. The mirrors have been used both as image-forming elements and as directional screens in volumetric, stereoscopic, and large-format simulator displays. The use of these specular reflecting surfaces was found to greatly enhance the perceived image quality of the resulting magnified display.

  6. Reproducibility of crosstalk measurements on active glasses 3D LCD displays based on temporal characterization

    NASA Astrophysics Data System (ADS)

    Tourancheau, Sylvain; Wang, Kun; Bułat, Jarosław; Cousseau, Romain; Janowski, Lucjan; Brunnström, Kjell; Barkowsky, Marcus

    2012-03-01

    Crosstalk is one of the main display-related perceptual factors degrading image quality and causing visual discomfort on 3D displays. It causes visual artifacts such as ghosting, blurring, and a lack of color fidelity, which are considerably annoying and can make it difficult to fuse stereoscopic images. On stereoscopic LCDs with shutter glasses, crosstalk is mainly due to dynamic temporal effects: imprecise target luminance (highly dependent on the combination of left-view and right-view pixel color values in disparity regions) and synchronization issues between the shutter glasses and the LCD. These factors largely influence the reproducibility of crosstalk measurements across laboratories and need to be evaluated in several different locations involving similar and differing conditions. In this paper we propose a fast and reproducible measurement procedure for crosstalk based on high-frequency temporal measurements of both the display and shutter responses. It permits full characterization of crosstalk for any right/left color combination and at any spatial position on the screen. Such a reliable objective crosstalk measurement method at several spatial positions is considered a mandatory prerequisite for evaluating the perceptual influence of crosstalk in further subjective studies.
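
    As a hedged sketch of a commonly used crosstalk definition for shutter-glasses systems (not necessarily the exact metric of the paper), the code below integrates the leakage from the unintended view and the intended luminance over the window in which the left shutter is open, subtracts the black level from both, and reports their ratio. The waveforms are synthetic.

    ```python
    import numpy as np

    def integrate(signal, window):
        """Sum a sampled luminance signal over a boolean time window."""
        return float(np.sum(signal[window]))

    t = np.linspace(0.0, 1 / 60, 1000)                 # one frame pair at 60 Hz
    left_open = t < (1 / 120)                          # left-shutter-open window

    # Synthetic display luminance when only the left or only the right view
    # shows white, plus the black level, all seen through the left shutter.
    lum_left_white = np.where(left_open, 100.0, 5.0)   # intended signal
    lum_right_white = np.where(left_open, 8.0, 100.0)  # leakage into the window
    lum_black = np.full_like(t, 1.0)                   # black level

    intended = integrate(lum_left_white, left_open) - integrate(lum_black, left_open)
    leakage = integrate(lum_right_white, left_open) - integrate(lum_black, left_open)
    crosstalk = 100.0 * leakage / intended
    print(f"left-eye crosstalk = {crosstalk:.1f} %")
    ```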

  7. Residual lens effects in 2D mode of auto-stereoscopic lenticular-based switchable 2D/3D displays

    NASA Astrophysics Data System (ADS)

    Sluijter, M.; IJzerman, W. L.; de Boer, D. K. G.; de Zwart, S. T.

    2006-04-01

    We discuss residual lens effects in multi-view switchable auto-stereoscopic lenticular-based 2D/3D displays. With the introduction of a switchable lenticular, it is possible to switch between a 2D mode and a 3D mode. The 2D mode displays conventional content, whereas the 3D mode provides the sensation of depth to the viewer. The uniformity of a display in the 2D mode is quantified by the quality parameter modulation depth. In order to reduce the modulation depth in the 2D mode, birefringent lens plates are investigated analytically and numerically, by ray tracing. We can conclude that the modulation depth in the 2D mode can be substantially decreased by using birefringent lens plates with a perfect index match between lens material and lens plate. Birefringent lens plates do not disturb the 3D performance of a switchable 2D/3D display.

  8. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  9. Spatioangular Prefiltering for Multiview 3D Displays.

    PubMed

    Ramachandra, Vikas; Hirakawa, Keigo; Zwicker, Matthias; Nguyen, Truong

    2011-05-01

    In this paper, we analyze the reproduction of light fields on multiview 3D displays. A three-way interaction between the input light field signal (which is often aliased), the joint spatioangular sampling grids of multiview 3D displays, and the interview light leakage in modern multiview 3D displays is characterized in the joint spatioangular frequency domain. Reconstruction of light fields by all physical 3D displays is prone to light leakage, which means that the reconstruction low-pass filter implemented by the display is too broad in the angular domain. As a result, 3D displays excessively attenuate angular frequencies. Our analysis shows that this reduces sharpness of the images shown in the 3D displays. In this paper, stereoscopic image recovery is recast as a problem of joint spatioangular signal reconstruction. The combination of the 3D display point spread function and human visual system provides the narrow-band low-pass filter which removes spectral replicas in the reconstructed light field on the multiview display. The nonideality of this filter is corrected with the proposed prefiltering. The proposed light field reconstruction method performs light field antialiasing as well as angular sharpening to compensate for the nonideal response of the 3D display. The union of cosets approach which has been used earlier by others is employed here to model the nonrectangular spatioangular sampling grids on a multiview display in a generic fashion. We confirm the effectiveness of our approach in simulation and in physical hardware, and demonstrate improvement over existing techniques.

  10. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction, and many studies have been conducted to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism-array-based display to present 3D objects. Emotional pictures were used as visual stimuli in the control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention motivated involuntarily by the affective mechanism can enhance the steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), an SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, it takes users only a few minutes to learn to control the BCI system, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  11. Spectroradiometric characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements were made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal of the display plane. In the study, two models of autostereoscopic 3D displays of different sizes and resolutions were used, with measurements made using a spectroradiometer (PhotoResearch PR-670 SpectraScan). From these measurements, goniometric results were recorded for the luminance contrast, and the fundamental hypotheses for the characterization of the displays were evaluated: independence of the RGB channels and their constancy. The results show that the display with the lower angular variability in the contrast-ratio value and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with measurement angle. For both displays, when the evaluated parameters were taken into account, the angular variability was consistently lower in the 2D mode than in the 3D mode.

  12. Photorefractive Polymers for Updateable 3D Displays

    DTIC Science & Technology

    2010-02-24

    Final Performance Report, dates covered 01-01-2007 to 11-30-2009. During the tenure of this project a large-area updateable 3D color display has been developed for the first time using a new co-polymer ... photorefractive polymers have been demonstrated. Moreover, a 6 inch × 6 inch sample was fabricated demonstrating the feasibility of making large-area 3D ...

  13. Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang-Yao; Zhou, Xiong-Tu; Guo, Tai-Liang; Yang, Lan; Chen, En-Guo; Zhang, Yong-Ai

    2017-04-01

    Autostereoscopic 3D-LED displays using parallax barriers have several advantages. However, conventional designs do not consider the black stripes of regular LED panels, which cause immeasurable crosstalk owing to excess light from adjacent sub-pixels separated by the panels. To reduce the crosstalk in large-scale displays, we design a barrier in which the black-stripe occupation ratio is defined to quantify the crosstalk level in the LED system. A prototype is assembled and analyzed based on a three-in-one-pixel LED-chip panel for a dual-viewpoint display. The improved parallax barrier meets the design requirements and achieves a low crosstalk level. Simulation and experimental results verify the effectiveness of the crosstalk-reduced design.

  14. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one form of advanced display, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TVs based on LCDs with LED backlight units (BLUs) since Samsung launched the world's first 3D TV based on PDP. However, the data scanning of the panel and the liquid crystal response characteristics of LCD TVs cause interference among frames (i.e., crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control of an FHD 3D LCD TV.

  15. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in the presentation of high-dimensional data and graphics due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be and emits light from that position in all directions to form a real 3D image in space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system for truthful perception of 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  16. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area, the Assateague Island National Seashore (AINS), is located along a 37-mile stretch of Assateague Island on the Eastern Shore, VA. DEM data sets from 1996 through 2000 for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and a four-year interval (1996-2000), were created. The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms was developed using ArcView Avenue. The system was organized into five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The analysis demonstrated that further study and comparison of the complex morphological changes that occur, naturally or human-induced, on barrier islands is required.
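
    As a hedged sketch of the cell-by-cell DEM differencing behind the erosion and deposition analysis (the grids below are synthetic stand-ins for the LIDAR-derived DEMs), the code subtracts two elevation models and sums the positive and negative changes times the cell area.

    ```python
    import numpy as np

    def erosion_deposition(dem_old, dem_new, cell_area_m2):
        """Volumes of material lost (erosion) and gained (deposition) in m^3."""
        diff = dem_new - dem_old                     # elevation change per cell
        deposition = float(diff[diff > 0].sum() * cell_area_m2)
        erosion = float(-diff[diff < 0].sum() * cell_area_m2)
        return erosion, deposition

    rng = np.random.default_rng(0)
    dem_1996 = rng.normal(2.0, 0.5, (200, 200))      # toy 200 x 200 grid (m)
    dem_2000 = dem_1996 + rng.normal(0.0, 0.1, (200, 200))

    ero, dep = erosion_deposition(dem_1996, dem_2000, cell_area_m2=4.0)
    print(f"erosion = {ero:.0f} m^3, deposition = {dep:.0f} m^3")
    ```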

  17. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization, which is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° range of views without any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the way humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen used in previous volumetric display designs.

  18. Depth-fused 3D imagery on an immaterial display.

    PubMed

    Lee, Cha; Diverdi, Stephen; Höllerer, Tobias

    2009-01-01

    We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, examining the potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype system in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors.
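
    As a hedged sketch of the classic two-plane depth-fused 3D luminance rule (the paper generalizes DFD to arbitrary screen configurations and viewpoints, which this sketch does not cover), the code below splits each pixel's luminance between a front and a back screen in proportion to where the fused pixel should appear between the two planes.

    ```python
    import numpy as np

    def dfd_split(luminance, depth, z_front, z_back):
        """Split luminance between two planes so the fused pixel appears at depth."""
        w_back = np.clip((depth - z_front) / (z_back - z_front), 0.0, 1.0)
        back = luminance * w_back            # farther: more light on the back plane
        front = luminance * (1.0 - w_back)   # nearer: more light on the front plane
        return front, back

    lum = np.full((4, 4), 200.0)                       # toy luminance image
    depth = np.linspace(0.5, 1.5, 16).reshape(4, 4)    # desired depths (m)
    front_img, back_img = dfd_split(lum, depth, z_front=0.5, z_back=1.5)
    print(front_img[0, 0], back_img[0, 0])             # nearest pixel: all light on front
    ```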

  19. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Combat training is very important for the army, and simulation of a realistic battlefield environment is of great significance. Two-dimensional information can no longer meet the demand. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment has become possible. In the simulation of a 3D battlefield environment, in addition to the terrain, combat personnel, and combat tools, the simulation of explosions, fire, smoke, and other effects is also very important, since these effects enhance the sense of realism and immersion of the 3D scene. However, these special effects are irregular objects, which makes them difficult to simulate with general geometry. Therefore, the simulation of irregular objects has always been a hot and difficult research topic in computer graphics. Here, the particle system algorithm is used for simulating irregular objects. We design simulations of explosions, fire, and smoke based on the particle system and apply them to the battlefield 3D scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview 3D video real-time transformation method. Together with a human-computer interaction function, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
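
    As a hedged, minimal particle-system sketch for an explosion-like effect (illustrating the general idea only, not the authors' implementation or their GPU rendering), the code below emits particles with random velocities from a point, integrates their motion under gravity, and fades them out over their lifetime.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class ParticleSystem:
        def __init__(self, n, origin, speed=5.0, life=1.5):
            direction = rng.normal(size=(n, 3))
            direction /= np.linalg.norm(direction, axis=1, keepdims=True)
            self.pos = np.tile(np.asarray(origin, float), (n, 1))
            self.vel = direction * rng.uniform(0.2, 1.0, (n, 1)) * speed
            self.age = np.zeros(n)
            self.life = life

        def update(self, dt, gravity=(0.0, -9.8, 0.0)):
            alive = self.age < self.life
            self.vel[alive] += np.asarray(gravity) * dt
            self.pos[alive] += self.vel[alive] * dt
            self.age += dt

        def opacities(self):
            # Particles fade linearly over their lifetime.
            return np.clip(1.0 - self.age / self.life, 0.0, 1.0)

    ps = ParticleSystem(n=500, origin=(0.0, 1.0, 0.0))
    for _ in range(30):                 # simulate 0.5 s at 60 steps per second
        ps.update(dt=1 / 60)
    print(ps.pos.mean(axis=0), ps.opacities().mean())
    ```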

  20. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken to quantitatively evaluate visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of previous research has analyzed each modality separately to measure user eye fatigue, which cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain quality scores for the EEG signals, eye BR, FT, and SE; third, to combine the values of the four modalities, we obtain the optimal weights of the EEG signals, BR, FT, and SE using a fuzzy system based on the quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than that of other conventional multimodal measurements. Moreover, the credibility of the variations of eye fatigue measured with the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382

  1. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays.

    PubMed

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-05-07

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken to quantitatively evaluate visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of previous research has analyzed each modality separately to measure user eye fatigue, which cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain quality scores for the EEG signals, eye BR, FT, and SE; third, to combine the values of the four modalities, we obtain the optimal weights of the EEG signals, BR, FT, and SE using a fuzzy system based on the quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than that of other conventional multimodal measurements. Moreover, the credibility of the variations of eye fatigue measured with the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size.
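
    As a hedged sketch of the final fusion step only (the weights here are simply quality scores normalized to sum to one, standing in for the paper's fuzzy system, and all numbers are made up), the code below combines normalized per-modality fatigue changes with a weighted sum.

    ```python
    import numpy as np

    def fuse_fatigue(deltas, quality_scores):
        """Weighted sum of per-modality fatigue changes (toy stand-in for the FBFM)."""
        deltas = np.asarray(deltas, float)
        q = np.asarray(quality_scores, float)
        weights = q / q.sum()                     # stand-in for fuzzy weights
        # Bring the modality changes onto a comparable 0..1 scale.
        norm = (deltas - deltas.min()) / max(float(np.ptp(deltas)), 1e-9)
        return float(weights @ norm), weights

    # Toy before/after differences for EEG power, blink rate, facial
    # temperature, and subjective evaluation, with per-modality quality scores.
    level, w = fuse_fatigue(deltas=[0.8, 3.2, 0.4, 1.5],
                            quality_scores=[0.9, 0.7, 0.5, 0.8])
    print(f"fused eye-fatigue level = {level:.2f}, weights = {np.round(w, 2)}")
    ```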

  2. Multi-view 3D display using waveguides

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Chang-Kun

    2015-07-01

    We propose a multi-projection-based multi-view 3D display system using an optical waveguide. The images from the projection units, incident at an angle satisfying the total internal reflection (TIR) condition, enter the waveguide and experience multiple reflections at the interfaces by TIR. As a result of the multiple reflections in the waveguide, the projection distance in the horizontal direction is effectively reduced to the thickness of the waveguide, making it possible to implement a compact projection display system. By aligning the projector array at the entrance of the waveguide, a multi-view 3D display system based on multiple projectors with a minimized structure is realized. Viewing zones are generated by combining the waveguide projection system, a vertical diffuser, and a Fresnel lens. In the experimental setup, the feasibility of the proposed method is verified, and a ten-view 3D display system with a compact projection space is implemented.

  3. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    A detailed review is given in this paper on various current 3D display methods for sequential 2D medical images and on new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping, and distributed collaborative rendering are discussed in depth. For two kinds of medical applications, real-time navigation systems and high-fidelity diagnosis in computer-aided surgery, different 3D display methods are presented.

  4. Measurement of Contrast Ratios for 3D Display

    DTIC Science & Technology

    2000-07-01

    Keywords: stereoscopic, autostereoscopic, 3D, display. 3D image display devices have wide applications in medical and entertainment areas. Binocular (stereoscopic) ... and system crosstalk. In many 3D display systems viewer crosstalk is an important issue for good performance, especially in autostereoscopic displays ...

  5. Fabrication of Large-Scale Microlens Arrays Based on Screen Printing for Integral Imaging 3D Display.

    PubMed

    Zhou, Xiongtu; Peng, Yuyan; Peng, Rong; Zeng, Xiangyao; Zhang, Yong-Ai; Guo, Tailiang

    2016-09-14

    The low-cost, large-scale fabrication of microlens arrays (MLAs) with precise alignment, great uniformity of focusing, and good converging performance is of great importance for integral imaging 3D display. In this work, a simple and effective method for fabricating large-scale polymer microlens arrays using screen printing is presented. The results show that the MLAs possess high-quality surface morphology and excellent optical performance. Furthermore, the microlens shape and size, i.e., the diameter, the height, and the distance between two adjacent microlenses of the MLAs, can be easily controlled by modifying the reflowing time and the size of the open apertures of the screen. MLAs with neighboring microlenses almost tangent can be achieved under a suitable aperture size and reflowing time, which can remarkably reduce the color moiré patterns caused by stray light between the blank areas of the MLAs in the integral imaging 3D display system, exhibiting much better reconstruction performance.

  6. 3D touchable holographic light-field display.

    PubMed

    Yamaguchi, Masahiro; Higashida, Ryo

    2016-01-20

    We propose a new type of 3D user interface: interaction with a light field reproduced by a 3D display. The 3D display used in this work reproduces a 3D light field, and a real image can be reproduced in midair between the display and the user. When using a finger to touch the real image, the light field from the display will scatter. Then, the 3D touch sensing is realized by detecting the scattered light by a color camera. In the experiment, the light-field display is constructed with a holographic screen and a projector; thus, a preliminary implementation of a 3D touch is demonstrated.

  7. 3D electrohydrodynamic simulation of electrowetting displays

    NASA Astrophysics Data System (ADS)

    Hsieh, Wan-Lin; Lin, Chi-Hao; Lo, Kuo-Lung; Lee, Kuo-Chang; Cheng, Wei-Yuan; Chen, Kuo-Ching

    2014-12-01

    The fluid dynamic behavior within a pixel of an electrowetting display (EWD) is thoroughly investigated through a 3D simulation. By coupling the electrohydrodynamic (EHD) force deduced from the Maxwell stress tensor with the laminar phase field of the oil-water dual phase, the complete switch processes of an EWD, including the break-up and the electrowetting stages in the switch-on process (with voltage) and the oil spreading in the switch-off process (without voltage), are successfully simulated. By considering the factor of the change in the apparent contact angle at the contact line, the electro-optic performance obtained from the simulation is found to agree well with its corresponding experiment. The proposed model is used to parametrically predict the effect of interfacial (e.g. contact angle of grid) and geometric (e.g. oil thickness and pixel size) properties on the defects of an EWD, such as oil dewetting patterns, oil overflow, and oil non-recovery. With the help of the defect analysis, a highly stable EWD is both experimentally realized and numerically analyzed.
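    For reference, the EHD force used in such simulations is commonly obtained from the divergence of the Maxwell stress tensor; the textbook form is sketched below (the paper's exact formulation, e.g. with electrostriction terms, may differ):

```latex
% Standard Maxwell stress tensor and the resulting body force density
T_{ij} = \varepsilon \left( E_i E_j - \tfrac{1}{2}\,\delta_{ij} E_k E_k \right),
\qquad
f_i = \frac{\partial T_{ij}}{\partial x_j}
```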

  8. Reality and Surreality of 3-D Displays: Holodeck and Beyond

    DTIC Science & Technology

    2000-01-01

    Holodeck is the reality that significantly better 3D display systems are possible. Keywords: true 3D displays, multiplexed 2D display (autostereoscopic) ... displays still do not use them in their own offices. Thus, 3D approaches that are autostereoscopic (that is, no head gear is required) are preferred. ... challenges noted throughout the foregoing sections of this paper will be steadily overcome. True 3D, autostereoscopic (no head gear) monitors with usable ...

  9. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that employs simultaneously the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
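    A hedged sketch of the multiple-footprint idea: block matching with several window sizes produces several disparity candidates per pixel. The paper's robust surface-filtering selection step is not reproduced, and the function name and parameters are illustrative:

```python
# Generate several disparity candidates per pixel by SAD block matching
# with multiple window sizes ("footprints"). Border handling via np.roll
# is simplistic; this is a sketch, not the paper's implementation.
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_candidates(left, right, max_disp=32, footprints=(3, 7, 15)):
    """left, right: 2-D float arrays (rectified grayscale stereo pair)."""
    h, w = left.shape
    cands = np.zeros((len(footprints), h, w), dtype=np.int32)
    for fi, win in enumerate(footprints):
        best = np.full((h, w), np.inf)
        for d in range(max_disp):
            shifted = np.roll(right, d, axis=1)                       # shift right image by d pixels
            cost = uniform_filter(np.abs(left - shifted), size=win)   # mean abs. difference over footprint
            better = cost < best
            best[better] = cost[better]
            cands[fi][better] = d
        # cands[fi] now holds the winner-takes-all disparity for this footprint size
    return cands
```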

  10. Integral imaging-based large-scale full-color 3-D display of holographic data by using a commercial LCD panel.

    PubMed

    Dong, Xiao-Bin; Ai, Ling-Yu; Kim, Eun-Soo

    2016-02-22

    We propose a new type of integral imaging-based large-scale full-color three-dimensional (3-D) display of holographic data based on direct ray-optical conversion of holographic data into elemental images (EIs). In the proposed system, a 3-D scene is modeled as a collection of depth-sliced object images (DOIs), and three-color hologram patterns for that scene are generated by interfering each color DOI with a reference beam, and summing them all based on Fresnel convolution integrals. From these hologram patterns, full-color DOIs are reconstructed, and converted into EIs using a ray mapping-based direct pickup process. These EIs are then optically reconstructed to be a full-color 3-D scene with perspectives on the depth-priority integral imaging (DPII)-based 3-D display system employing a large-scale LCD panel. Experiments with a test video confirm the feasibility of the proposed system in the practical application fields of large-scale holographic 3-D displays.
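    The Fresnel convolution step mentioned above follows the standard free-space propagation kernel; a textbook sketch (normalization constants and the subsequent interference with the reference beam omitted) is:

```latex
% Propagation of each depth-sliced object image O_k at depth z_k to the hologram plane
U(x,y) = \sum_k O_k(x,y) \ast h_{z_k}(x,y),
\qquad
h_z(x,y) = \frac{e^{\,ikz}}{i\lambda z}\,
           \exp\!\left[ \frac{ik}{2z}\left( x^{2} + y^{2} \right) \right]
```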

  11. Design of monocular multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2001-06-01

    A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have developed a 3D HMD system using a monocular stereoscopic display. This paper shows that the 3D vision system using the monocular stereoscopic display and a capturing camera builds a 3D virtual space for telemanipulation using a captured real 3D image. In this paper, we propose the monocular stereoscopic 3D display and capturing camera for a tele-manipulation system. In addition, we describe the results of depth estimation using the multi-focus retinal images.

  12. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  13. Computational challenges of emerging novel true 3D holographic displays

    NASA Astrophysics Data System (ADS)

    Cameron, Colin D.; Pain, Douglas A.; Stanley, Maurice; Slinger, Christopher W.

    2000-11-01

    A hologram can produce all the 3D depth cues that the human visual system uses to interpret and perceive real 3D objects. As such it is arguably the ultimate display technology. Computer generated holography, in which a computer calculates a hologram that is then displayed using a highly complex modulator, combines the ultimate qualities of a traditional hologram with the dynamic capabilities of a computer display, producing a true 3D real image floating in space. This technology is set to emerge over the next decade, potentially revolutionizing application areas such as virtual prototyping (CAD-CAM, CAID etc.), tactical information displays, data visualization and simulation. In this paper we focus on the computational challenges of this technology. We consider different classes of computational algorithms from true computer-generated holograms (CGH) to holographic stereograms. Each has different characteristics in terms of image quality, computational resources required, total CGH information content, and system performance. Possible trade-offs will be discussed, including reducing the parallax. The software and hardware architectures used to implement the CGH algorithms have many possible forms. Different schemes, from high performance computing architectures to graphics-based cluster architectures, will be discussed and compared. An assessment will be made of current and future trends, looking forward to a practical dynamic CGH-based 3D display.
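    As an illustration of why true CGH is computationally demanding, the most direct algorithm accumulates a spherical-wave contribution from every object point at every hologram pixel, i.e. O(points × pixels) work. The sketch below is a generic textbook version, not the authors' algorithm, and all parameter values are assumptions:

```python
# Brute-force point-cloud CGH: sum spherical-wave contributions at each pixel.
import numpy as np

def point_cloud_cgh(points, amplitudes, res=(256, 256), pitch=8e-6, wavelength=532e-9):
    """points: (N, 3) array of object points (x, y, z) in metres, z > 0."""
    k = 2 * np.pi / wavelength
    ys, xs = np.indices(res)
    x = (xs - res[1] / 2) * pitch          # hologram-plane pixel coordinates
    y = (ys - res[0] / 2) * pitch
    field = np.zeros(res, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)    # point-to-pixel distance
        field += a * np.exp(1j * k * r) / r                     # spherical-wave contribution
    return np.angle(field)                                      # e.g. keep phase for a phase hologram
```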

  14. 3D optical see-through head-mounted display based augmented reality system and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenliang; Weng, Dongdong; Liu, Yue; Xiang, Li

    2015-07-01

    The combination of health and entertainment becomes possible due to the development of wearable augmented reality equipment and corresponding application software. In this paper, we implemented a fast calibration extended from SPAAM for an optical see-through head-mounted display (OSTHMD) which was made in our lab. During the calibration, tracking and recognition techniques for natural targets were used, and the spatially corresponding points were set in dispersed and well-distributed positions. We evaluated the precision of this calibration, in which the view angle ranged from 0 to 70 degrees. Relying on the results above, we calculated the position of the human eyes relative to the world coordinate system and rendered 3D objects of arbitrary complexity in real time on the OSTHMD, accurately matched to the real world. Finally, user feedback was used to assess satisfaction with our device in combining entertainment with the prevention of cervical vertebra diseases.
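    SPAAM-style calibrations reduce to estimating a 3×4 projection matrix from user-aligned 2D-3D point pairs. A minimal direct-linear-transform sketch follows; the paper's natural-target tracking and point-placement strategy are not modeled, and the function name is illustrative:

```python
# Direct linear transform (DLT) for a 3x4 projection matrix from 2D-3D correspondences.
import numpy as np

def dlt_projection_matrix(world_pts, screen_pts):
    """world_pts: (N, 3) 3-D points; screen_pts: (N, 2) aligned 2-D points; N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)        # smallest-singular-vector solution of A p = 0
    return vt[-1].reshape(3, 4)
```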

  15. Optical characterization of different types of 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    All 3D displays share the same intrinsic method of inducing depth perception: they provide different images to the left and right eyes of the observer to obtain the stereoscopic effect. The three most common solutions already available on the market are active-glasses, passive-glasses and auto-stereoscopic 3D displays. The three types of displays are based on different physical principles (polarization, time selection or spatial emission) and consequently require different measurement instruments and techniques. In this paper, we present some of these solutions and the technical characteristics that can be obtained to compare the displays. We show in particular that local and global measurements can be made in the three cases to access different characteristics. We also discuss the new technologies currently under development and their needs in terms of optical characterization.

  16. 3D Display Calibration by Visual Pattern Analysis.

    PubMed

    Hwang, Hyoseok; Chang, Hyun Sung; Nam, Dongkyung; Kweon, In So

    2017-02-06

    Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not perform as well as intended, and the observed images tend to get distorted. In this paper, we propose a novel display calibration method to fix the situation. In our method, a pattern image is displayed on the panel and a camera takes pictures of it twice at different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slanted angle, gap or thickness, offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is quite accurate, about a half order of magnitude higher than prior work; is efficient, spending less than 2 s on computation; and is robust to noise, working well in SNR regimes as low as 6 dB.

  17. 3-D Imagery Cockpit Display Development

    DTIC Science & Technology

    1990-08-01

    [Abstract text not fully recoverable; the surviving fragments are pilot survey comments on cockpit display formats, e.g., "Standardize colors", "Display EGT & OIL indicators at all times", "Great format", "Don't change the format".]

  18. Interactive 3D display simulator for autostereoscopic smart pad

    NASA Astrophysics Data System (ADS)

    Choe, Yeong-Seon; Lee, Ho-Dong; Park, Min-Chul; Son, Jung-Young; Park, Gwi-Tae

    2012-06-01

    There is growing interest in displaying 3D images on a smart pad for entertainment and information services. Designing and realizing various types of 3D displays on the smart pad is not easy given the costs and time involved. Software simulation can be an alternative method to save and shorten the development. In this paper, we propose a 3D display simulator for an autostereoscopic smart pad. It simulates the light intensity of each view and the crosstalk for smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic displays by changing the parameters required for panel design. Crosstalk, the leakage of one eye's image into the image of the other eye, and the light intensity used for computing the visual comfort zone are important factors in designing an autostereoscopic display for a smart pad. Interaction enables intuitive designs. This paper describes an interactive 3D display simulator for an autostereoscopic smart pad.

  19. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  20. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. But, using the 3D GS method, there exists a serious distortion in reconstructions of binary input images. We have eliminated the distortion and improved the image quality of the reconstructions by a maximum of 486%, using a symmetrical 3D GS algorithm that is developed based on a traditional 3D GS algorithm. In addition, the hologram computation speed has been accelerated by 9.28 times, which is significant for real-time holographic displays.
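    For context, the conventional single-plane Gerchberg-Saxton loop that the 3D and symmetrical variants build on is sketched below. This is the generic textbook algorithm, not the authors' symmetrical 3D GS, and FFT-based propagation is an assumed simplification:

```python
# Classic Gerchberg-Saxton iteration for a phase-only hologram of a 2-D target.
import numpy as np

def gs_phase_hologram(target_amplitude, iterations=50):
    """target_amplitude: 2-D array of the desired image amplitude."""
    phase = np.random.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        image_field = target_amplitude * np.exp(1j * phase)   # impose target amplitude
        slm_field = np.fft.ifft2(image_field)                 # back-propagate to SLM plane
        slm_phase = np.angle(slm_field)                       # keep phase only (phase-only SLM)
        recon = np.fft.fft2(np.exp(1j * slm_phase))           # forward-propagate
        phase = np.angle(recon)                               # keep reconstructed phase
    return slm_phase
```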

  1. Irregular Grid Generation and Rapid 3D Color Display Algorithm

    SciTech Connect

    Wilson D. Chin, Ph.D.

    2000-05-10

    Computationally efficient and fast methods for irregular grid generation are developed to accurately characterize wellbore and fracture boundaries, and farfield reservoir boundaries, in oil and gas petroleum fields. Advanced reservoir simulation techniques are developed for oilfields described by such "boundary conforming" mesh systems. Very rapid, three-dimensional color display algorithms are also developed that allow users to "interrogate" 3D earth cubes using "slice, rotate, and zoom" functions. Based on expert system ideas, the new methods operate much faster than existing display methodologies and do not require sophisticated computer hardware or software. They are designed to operate with PC based applications.

  2. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  3. Will true 3D display devices aid geologic interpretation? [Mirage]

    SciTech Connect

    Nelson, H.R. Jr.

    1982-04-01

    A description is given of true 3D display devices and techniques that are being evaluated in various research laboratories around the world. These advances are closely tied to the expected application of 3D display devices as interpretational tools for explorationists. 34 refs.

  4. 30-view projection 3D display

    NASA Astrophysics Data System (ADS)

    Huang, Junejei; Wang, Yuchang

    2015-03-01

    A 30-view auto-stereoscopic display using an angle-magnifying screen is proposed. The small incident angles of the lamp scanning from the exit pupil of the projection lens are magnified into a large field of view on the observing side. The lamp scanning is realized by the vibration of a galvano mirror that synchronizes with the frame rate of the DMD and reflects the laser illuminator to the scanning angles. To achieve 15 views, a 3-chip DLP projector with a frame rate of 720 Hz is used. For one vibration cycle of the galvano mirror, steps 0, 2, 4, 6, 8, 10, 12, and 14 are reflected on the going path and steps 13, 11, 9, 7, 5, 3, and 1 are reflected on the returning path. A frame is divided into two half parts of odd lines and even lines for two views. For each view, 48 half frames per second are provided. A projection lens with an aperture-relay module is used to double the lens aperture and separate the frame into the two half parts of even and odd lines. After passing through the Philips prism and the three panels, the 15 scanning spots are doubled to 30 spots and emerge from the exit pupil of the projection lens. The 30 light spots exiting the projection lens are projected to 30 viewing zones by the angle-magnifying screen. A rear-projection cabinet with two folding mirrors is used because a projection lens with a long throw distance is required.

  5. Auto-stereoscopic 3D displays with reduced crosstalk.

    PubMed

    Lee, Chulhee; Seo, Guiwon; Lee, Jonghwa; Han, Tae-hwan; Park, Jong Geun

    2011-11-21

    In this paper, we propose new auto-stereoscopic 3D displays that substantially reduce crosstalk. In general, it is difficult to eliminate crosstalk in auto-stereoscopic 3D displays. Ideally, the parallax barrier can eliminate crosstalk for a single viewer at the ideal position. However, due to variations in the viewing distance and the interpupillary distance, crosstalk is a problem in parallax barrier displays. In this paper, we propose 3-dimensional barriers, which can significantly reduce crosstalk.

  6. Development of a 3D pixel module for an ultralarge screen 3D display

    NASA Astrophysics Data System (ADS)

    Hashiba, Toshihiko; Takaki, Yasuhiro

    2004-10-01

    A large screen 2D display used at stadiums and theaters consists of a number of pixel modules. A pixel module usually consists of 8x8 or 16x16 LED pixels. In this study we develop a 3D pixel module in order to construct a large screen 3D display which is glasses-free and has motion parallax. This configuration for a large screen 3D display dramatically reduces the complexity of wiring 3D pixels. The 3D pixel module consists of several LCD panels, several cylindrical lenses, and one small PC. The LCD panels are slanted in order to differentiate the distances from same-color pixels to the axis of the cylindrical lens, so that the rays from the same-color pixels are refracted into different horizontal directions by the cylindrical lens. We constructed a prototype 3D pixel module, which consists of 8x4 3D pixels. The prototype module is designed to display 300 different patterns into different horizontal directions with a horizontal display angle pitch of 0.099 degrees. The LCD panels are controlled by a small PC and the 3D image data is transmitted through Gigabit Ethernet.

  7. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberration severely affects the display performance of an auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

  8. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, the stereoscopic 3D technique has been combined with curved displays. However, despite the increased need for 3D functionality in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most of the previous studies have investigated their basic ergonomic aspects, such as viewing posture and distance, with only 2D views. It has generally been known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distance from the viewers' eyes to both edges of the screen are more natural in curved displays than in flat panel ones. For flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences in viewing distances from the center of the screen to the eyes of viewers and from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  9. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display is a new display technology capable of displaying computer generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space; these form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  10. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully; and perhaps used only as is necessary to ensure good performance.

  11. Format for Interchange and Display of 3D Terrain Data

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Powell, Mark; Vona, Marsette; Norris, Jeffrey; Morrison, Jack

    2004-01-01

    Visible Scalable Terrain (ViSTa) is a software format for production, interchange, and display of three-dimensional (3D) terrain data acquired by stereoscopic cameras of robotic vision systems. ViSTa is designed to support scalability of data, accuracy of displayed terrain images, and optimal utilization of computational resources. In a ViSTa file, an area of terrain is represented, at one or more levels of detail, by coordinates of isolated points and/or vertices of triangles derived from a texture map that, in turn, is derived from original terrain images. Unlike prior terrain-image software formats, ViSTa includes provisions to ensure accuracy of texture coordinates. Whereas many such formats are based on 2.5-dimensional terrain models and impose additional regularity constraints on data, ViSTa is based on a 3D model without regularity constraints. Whereas many prior formats require external data for specifying image-data coordinate systems, ViSTa provides for the inclusion of coordinate-system data within data files. ViSTa admits high-speed loading and display within a Java program. ViSTa is designed to minimize file sizes and maximize compressibility and to support straightforward reduction of resolution to reduce file size for Internet-based distribution.
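    Purely as an illustration of the information content described above (levels of detail, triangle vertices, texture coordinates, and an embedded coordinate-system description), a hypothetical in-memory representation might look like the following. This is not the actual ViSTa file layout, only a sketch of what such a record carries:

```python
# Hypothetical container mirroring the elements a ViSTa-style terrain record holds.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TerrainLevel:
    vertices: List[Tuple[float, float, float]]     # 3-D vertex coordinates
    triangles: List[Tuple[int, int, int]]          # indices into vertices
    tex_coords: List[Tuple[float, float]]          # per-vertex texture coordinates

@dataclass
class TerrainTile:
    coordinate_system: str                         # e.g. a site/rover frame description
    levels: List[TerrainLevel] = field(default_factory=list)  # coarse-to-fine levels of detail
```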

  12. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  13. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data is displayed.
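    The "sectioning" of light field data into a refocused 2D image is often illustrated with the shift-and-add method sketched below; this is a generic illustration under assumed array shapes, not necessarily the author's real-domain processing:

```python
# Shift-and-add refocusing: sub-aperture views are shifted in proportion to
# their lens position and averaged to form a 2-D image focused at one depth.
import numpy as np

def refocus(light_field, slope):
    """light_field: array of shape (U, V, H, W) of sub-aperture images;
    slope: pixel shift per lens step (controls the refocus depth)."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```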

  14. Research of 3D display using anamorphic optics

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kenji; Honda, Toshio

    1997-05-01

    This paper describes an auto-stereoscopic display which can reconstruct a more realistic and viewer-friendly 3-D image by increasing the number of parallaxes and giving motion parallax horizontally. It is difficult to increase the number of parallaxes to give motion parallax to the 3-D image without reducing the resolution, because the resolution of the display device is insufficient. The magnification and the image formation position can be selected independently in the horizontal and vertical directions by projecting between the display device and the 3-D image with anamorphic optics. The anamorphic optics is an optical system with different magnifications in the horizontal and vertical directions. It consists of a combination of cylindrical lenses with different focal lengths. By using this optics, even if we use a dynamic display such as a liquid crystal display (LCD), it is possible to display a realistic 3-D image having motion parallax. Motion parallax is obtained by setting the width of a single parallax at the viewing position to about the same size as the pupil diameter of the viewer. In addition, because the focus depth of the 3-D image is deep in this method, the conflict between accommodation and convergence is small, and a natural 3-D image can be displayed.

  15. Rear-cross-lenticular 3D display without eyeglasses

    NASA Astrophysics Data System (ADS)

    Morishima, Hideki; Nose, Hiroyasu; Taniguchi, Naosato; Inoguchi, Kazutaka; Matsumura, Susumu

    1998-04-01

    We have developed a prototype 3D display system without any eyeglasses, which we call the "Rear Cross Lenticular 3D Display" (RCL3D), that is very compact and produces a high quality 3D image. The RCL3D consists of an LCD panel, two lenticular lens sheets which run perpendicular to each other, a Checkered Pattern Mask and a backlight panel. On the LCD panel, a composite image, which consists of alternately arranged horizontally striped images for the right eye and left eye, is displayed. This composite image form is compatible with field sequential stereoscopic image data. The light from the backlight panel goes through the apertures of the Checkered Pattern Mask, illuminates the horizontal lines of the images for the right eye and left eye on the LCD, and goes to the right eye position and left eye position separately by the function of the two lenticular lens sheets. With this principle, the RCL3D shows a 3D image to an observer without any eyeglasses. We applied a viewing-zone simulation using random ray tracing to the RCL3D and found that the illuminated areas for the right eye and left eye are separated clearly as a series of alternating vertical stripes. We will present the prototype of the RCL3D (14.5-inch, XGA) and simulation results.

  16. EEG-based cognitive load of processing events in 3D virtual worlds is lower than processing events in 2D displays.

    PubMed

    Dan, Alex; Reiner, Miriam

    2016-08-31

    Interacting with 2D displays, such as computer screens, smartphones, and TV, is currently a part of our daily routine; however, our visual system is built for processing 3D worlds. We examined the cognitive load associated with a simple and a complex task of learning paper-folding (origami) by observing 2D or stereoscopic 3D displays. While connected to an electroencephalogram (EEG) system, participants watched a 2D video of an instructor demonstrating the paper-folding tasks, followed by a stereoscopic 3D projection of the same instructor (a digital avatar) illustrating identical tasks. We recorded the power of alpha and theta oscillations and calculated the cognitive load index (CLI) as the ratio of the average power of frontal theta (Fz) and parietal alpha (Pz). The results showed a significantly higher cognitive load index associated with processing the 2D projection as compared to the 3D projection; additionally, changes in the average theta Fz power were larger for the 2D conditions as compared to the 3D conditions, while alpha average Pz power values were similar for 2D and 3D conditions for the less complex task and higher in the 3D state for the more complex task. The cognitive load index was lower for the easier task and higher for the more complex task in 2D and 3D. In addition, participants with lower spatial abilities benefited more from the 3D compared to the 2D display. These findings have implications for understanding cognitive processing associated with 2D and 3D worlds and for employing stereoscopic 3D technology over 2D displays in designing emerging virtual and augmented reality applications.
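    A minimal sketch of the cognitive load index as defined in the abstract (ratio of frontal theta power at Fz to parietal alpha power at Pz). The Welch band-power estimation and the band edges below are assumptions, not the study's exact preprocessing:

```python
# CLI = mean frontal theta power (Fz) / mean parietal alpha power (Pz).
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average spectral power of a 1-D signal in the [lo, hi] Hz band."""
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

def cognitive_load_index(fz, pz, fs=256):
    theta_fz = band_power(fz, fs, 4.0, 8.0)    # frontal theta band (assumed 4-8 Hz)
    alpha_pz = band_power(pz, fs, 8.0, 13.0)   # parietal alpha band (assumed 8-13 Hz)
    return theta_fz / alpha_pz
```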

  17. Instrument for 3D characterization of autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Prévoteau, J.; Chalençon-Piotin, S.; Debons, D.; Lucas, L.; Remion, Y.

    2011-03-01

    We now have numerous autostereoscopic displays, and it is mandatory to characterize them because it will allow to optimize their performances and to make efficient comparison between them. Therefore we need standards so we have to be able to quantify the quality of the viewer's perception. The purpose of the present paper is twofold; we first present a new instrument of characterization of the 3D perception on a given autostereoscopic display; then we propose a new way to realize an experimental protocol allowing to get a full characterization. This instrument will allow us to compare efficiently the different autostereoscopic displays but it will also validate practically the adequacy between the shooting and rendering geometries. In this aim, we are going to match a perceived scene with the virtual scene. It is hardly possible to determine the scene perceived by a viewer placed in front of an autostereoscopic display. Indeed if it may be executable on the pop-out, it is impossible on the depth effect because the depth of the virtual scene is set behind the screen. Therefore, we will have to use an optical illusion based on the deflection of light by a mirror to know the position which the viewer perceives some points of the virtual scene on an autostereoscopic display.

  18. Analysis of the real-time 3D display system based on the reconstruction of parallax rays

    NASA Astrophysics Data System (ADS)

    Yamada, Kenji; Takahashi, Hideya; Shimizu, Eiji

    2002-11-01

    Several types of auto-stereoscopic display systems have been developed. We also have developed a real-time color auto-stereoscopic display system using a reconstruction method of parallax rays. Our system consists of an optical element (such as a lens array, a pinhole, HOEs and so on), a spatial light modulator (SLM), and an image-processing unit. In our system, pseudoscopic images do not appear. The algorithm for solving this problem is processed in the image-processing unit. The resolution limitation of IP has been studied by Hoshino, Burckhardt, and Okoshi. They designed the optimum width of the lens or the aperture. However, we cannot apply those theories to our system. Therefore, we consider not only the spatial frequency measured at the viewpoint but also the performance of our system. In this paper, we describe an analysis of resolution for our system. First, we consider the spatial frequency along the depth and horizontal directions, respectively, according to geometrical optics and wave optics. Next, we study the performance of our system. In particular, we estimate the crosstalk that the point sources from pixels on the SLM cause, by considering geometrical optics and wave optics.

  19. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.

  20. LED projection architectures for stereoscopic and multiview 3D displays

    NASA Astrophysics Data System (ADS)

    Meuret, Youri; Bogaert, Lawrence; Roelandt, Stijn; Vanderheijden, Jana; Avci, Aykut; De Smet, Herbert; Thienpont, Hugo

    2010-04-01

    LED-based projection systems have several interesting features: extended color gamut, long lifetime, robustness and a fast turn-on time. However, the possibility to develop compact projectors remains the most important driving force for investigating LED projection. This is related to the limited light output of LED projectors, which is a consequence of the relatively low luminance of LEDs compared to high intensity discharge lamps. We have investigated several LED projection architectures for the development of new 3D visualization displays. Polarization-based stereoscopic projection displays are often implemented using two identical projectors with passive polarizers at the output of their projection lenses. We have designed and built a prototype of a stereoscopic projection system that incorporates the functionality of both projectors. The system uses high-resolution liquid-crystal-on-silicon light valves and an illumination system with LEDs. The possibility to add an extra LED illumination channel was also investigated for this optical configuration. Multiview projection displays allow the visualization of 3D images for multiple viewers without the need to wear special eyeglasses. Systems with a large number of viewing zones have already been demonstrated. Such systems often use multiple projection engines. We have investigated a projection architecture that uses only one digital micromirror device and a LED-based illumination system to create multiple viewing zones. The system is based on the time-sequential modulation of the different images for each viewing zone and a special projection screen with micro-optical features. We analyzed the limitations of LED-based illumination for the investigated stereoscopic and multiview projection systems and discuss the potential of laser-based illumination.

  1. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may bring a more correct evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would be beneficial to the development of crosstalk minimization and cancellation algorithms which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques to assign different values of crosstalk between image pairs. It can be seen from the literature that the structures of scenes have a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. Then the structural changes of the left view and right view are computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
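    A hedged sketch of the simulation step: the usual linear leakage model adds a fraction of the unintended view into each eye's image, and SSIM then quantifies the structural change. The exact crosstalk model and the later depth weighting from the paper are not reproduced:

```python
# Simulate symmetric crosstalk between a stereo pair and measure structural change with SSIM.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def add_crosstalk(intended, unintended, level):
    """Linear leakage: level in [0, 1] is the fraction of the other view that leaks in."""
    return (1.0 - level) * intended + level * unintended

def structural_distortion(left, right, level):
    """left, right: 2-D float images; returns per-eye SSIM between clean and crosstalked views."""
    left_x = add_crosstalk(left, right, level)
    right_x = add_crosstalk(right, left, level)
    d_left = ssim(left, left_x, data_range=left.max() - left.min())
    d_right = ssim(right, right_x, data_range=right.max() - right.min())
    return d_left, d_right
```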

  2. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perception which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  3. Analysis of temporal stability of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    An analysis has been made of the stability of the images generated by electronic autostereoscopic 3D displays, studying the time course of the photometric and colorimetric parameters. The measurements were made on the basis of the procedure recommended in the European guideline EN 61747-6 for the characterization of electronic liquid-crystal displays (LCD). The study uses 3 different models of autostereoscopic 3D displays of different sizes and numbers of pixels, taking the measurements with a spectroradiometer (model PR-670 SpectraScan of PhotoResearch). For each of the displays, the time course is shown for the tristimulus values and the chromaticity coordinates in the XYZ CIE 1931 system, and the time periods required to reach stable values of these parameters are presented. To analyze how the procedure recommended in the guideline EN 61747-6 for 2D displays influenced the results, and to adapt the procedure to the characterization of 3D displays, the experimental conditions of the standard procedure were varied, performing the stability analysis in the two ocular channels (RE and LE) of the 3D mode and comparing the results with those corresponding to 2D. The results of our study show that the stabilization time of an autostereoscopic 3D display with parallax-barrier technology depends on the tristimulus value analysed (X, Y, Z) as well as on the presentation mode (2D, 3D); furthermore, when the 3D mode is used, it also depends on the ocular channel evaluated (RE, LE).
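    The chromaticity coordinates tracked in the stability analysis follow directly from the measured tristimulus values:

```latex
x = \frac{X}{X + Y + Z}, \qquad y = \frac{Y}{X + Y + Z}
```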

  4. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  5. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  6. 3D dynamic holographic display by modulating complex amplitude experimentally.

    PubMed

    Li, Xin; Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-09-09

    A complex amplitude modulation method is presented theoretically and performed experimentally for three-dimensional (3D) dynamic holographic display with reduced speckle using a single phase-only spatial light modulator. The determination of essential factors is discussed based on the basic principle and theory. Numerical simulations and optical experiments are performed, in which static and animated objects without refinement on the surfaces and without random initial phases are reconstructed successfully. The results indicate that this method can reduce the speckle in reconstructed images effectively; furthermore, it does not introduce internal structure into the reconstructed pixels. Since the complex amplitude modulation is based on the principle of the phase-only hologram, it does not need stringent alignment of pixels. This method can be used for high resolution imaging or measurement in various optical areas.
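    One widely used way to place a complex value on a phase-only SLM is the double-phase decomposition sketched below; it is given here only as a generic example, and whether it matches the paper's specific encoding is not stated:

```latex
% Double-phase decomposition of a complex amplitude into two pure phase terms
A e^{i\varphi} = \tfrac{1}{2}\left( e^{i\theta_1} + e^{i\theta_2} \right),
\qquad
\theta_{1,2} = \varphi \pm \arccos(A), \quad A \in [0, 1]
```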

  7. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  8. User benefits of visualization with 3-D stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Wichansky, Anna M.

    1991-08-01

    The power of today's supercomputers promises tremendous benefits to users in terms of productivity, creativity, and excitement in computing. A study of a stereoscopic display system for computer workstations was conducted with 20 users and third-party software developers, to determine whether 3-D stereo displays were perceived as better than flat, 2-1/2D displays. Users perceived more benefits of 3-D stereo in applications such as molecular modeling and cell biology, which involved viewing of complex, abstract, amorphous objects. Users typically mentioned clearer visualization and better understanding of data, easier recognition of form and pattern, and more fun and excitement at work as the chief benefits of stereo displays. Human factors issues affecting the usefulness of stereo included use of 3-D glasses over regular eyeglasses, difficulties in group viewing, lack of portability, and need for better input devices. The future marketability of 3-D stereo displays would be improved by eliminating the need for users to wear equipment, reducing cost, and identifying markets where the abstract display value can be maximized.

  9. 3D head mount display with single panel

    NASA Astrophysics Data System (ADS)

    Wang, Yuchang; Huang, Junejei

    2014-09-01

    A head-mounted display for entertainment usually requires light weight, but professional applications have more requirements: image quality, field of view (FOV), color gamut, response, and lifetime are also considered. A head-mounted display based on a 1-chip TI DMD spatial light modulator is proposed. The multiple light sources and the image-splitting relay system are the major design tasks. The relay system images the object (DMD) onto two image planes to create binocular vision. The 0.65 inch 1080P DMD is adopted. The relay has good performance and includes a doublet to reduce chromatic aberration. Some space is reserved for placing the mirror and the adjustable mechanism. The mirror splits the rays to the left and right image planes. These planes serve as the objects of the eyepieces and are imaged to the eyes. An adjustable mechanism provides a variable interpupillary distance (IPD). The folded optical path keeps the HMD center of gravity close to the head and prevents an uncomfortable downward force from being applied to the head or orbit. Two RGB LED assemblies illuminate the DMD at different angles. The light is highly collimated; the divergence angle is small enough that rays from one LED enter only the correct eyepiece. This switching is electronically controlled. There are no moving parts to produce vibration, and fast switching is possible. The two LEDs are synchronized with the 3D video sync by a driving board which also controls the DMD. When the left-eye image is displayed on the DMD, the LED for the left optical path turns on, and vice versa for the right image, and the 3D scene is accomplished.

  10. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution and full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64 projectors), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, multiple views for the 3D display are generated in a time-multiplexed fashion by the single high speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. Therefore, the single projector is able to generate the equivalent number of multiview images from multiple viewing directions, thus fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
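    A back-of-envelope constraint for any time-multiplexed single-projector design: the per-view refresh rate is the projector frame rate divided by the number of views. The numbers below are illustrative, not from the paper:

```python
# Illustrative timing budget for a time-multiplexed single-projector multiview display.
projector_fps = 2880      # assumed high-speed DMD frame rate (hypothetical)
num_views = 32            # assumed number of viewing zones (hypothetical)
per_view_fps = projector_fps / num_views
print(f"{per_view_fps:.0f} Hz per view")   # 90 Hz per view in this example
```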

  11. Implementation of active-type Lamina 3D display system.

    PubMed

    Yoon, Sangcheol; Baek, Hogil; Min, Sung-Wook; Park, Soon-Gi; Park, Min-Kyu; Yoo, Seong-Hyeon; Kim, Hak-Rin; Lee, Byoungho

    2015-06-15

    Lamina 3D display is a new type of multi-layer 3D display that utilizes the polarization state as an additional dimension carrying depth information. The Lamina 3D display system has several attractive properties: it reduces the amount of data needed to represent a 3D image, it can easily be built from conventional projectors, and it has the potential to be applied in many applications. However, the system may suffer from a limited depth range and viewing angle due to the properties of the volume components. In this paper, we propose an imaging volume composed of layers of switchable diffusers to implement an active-type Lamina 3D display system. Because the diffusing rate of the layers is independent of the polarization state, a polarizer wheel is added to the proposed system so that each sectioned image is synchronized with the diffusing layer at the designated location. The imaging volume of the proposed system consists of five layers of polymer-dispersed liquid crystal, and the total size of the implemented volume is 24 × 18 × 12 mm³. The proposed system achieves improved viewing quality, such as enhanced depth expression and a widened viewing angle.

  12. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  13. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views with slightly different perspectives in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges for such optical devices is crosstalk between the two channels. Crosstalk arises because the optical devices do not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye sees a little of the left image; this results in eyestrain and headaches. A pair of interference filters worn as eyewear can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated; the term "conjugated" means that the passband regions of one filter do not overlap with those of the other but are interdigitated with them. To be used with the glasses, a 3D display produces its colors from primary colors (the basis for producing colors) whose spectral bands coincide with the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus the primary colors of each viewpoint are seen only by the eye wearing the matching multiband filter. The inherent characteristics of the interference filters allow little or no transmission of the wrong side of the stereoscopic image pair.

  14. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses, which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. In the case where two pinholes equipped with color filters are used per eye, the technique can be applied on a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, drivers, or frame rate. Apart from a tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye caused by the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially for enabling the display of close objects that cannot be displayed and comfortably viewed on regular 3DTV and cinema screens.

  15. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Owing to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  16. True 3D displays for avionics and mission crewstations

    NASA Astrophysics Data System (ADS)

    Sholler, Elizabeth A.; Meyer, Frederick M.; Lucente, Mark E.; Hopper, Darrel G.

    1997-07-01

    3D threat projection has been shown to decrease the human recognition time for events, especially for a jet fighter pilot or C4I sensor operator, for whom early realization that a hostile threat condition exists is the basis of survival. Decreased threat recognition time improves the survival rate and results from more effective presentation techniques, including the visual cue of a true 3D (T3D) display. The concept of a 'font' describes the approach adopted here, but whereas a 2D font comprises pixel bitmaps, a T3D font herein comprises a set of hologram bitmaps. The T3D font bitmaps are pre-computed, stored, and retrieved as needed to build images comprising symbols and/or characters. Human performance improvement, hologram generation for a T3D symbol font, projection requirements, and potential hardware implementation schemes are described. The goal is to employ computer-generated holography to create T3D depictions of dynamic threat environments using fieldable hardware.
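
    The "font of pre-computed hologram bitmaps" can be pictured as a simple lookup structure. The Python sketch below is illustrative only (all names and sizes are assumptions; the paper gives no code): it stores one pre-computed bitmap per symbol and concatenates the retrieved bitmaps to build the composite image for a symbol string, which is the retrieval step the abstract describes.

        import numpy as np

        # Hypothetical T3D font: each symbol maps to a pre-computed hologram bitmap
        # (random placeholders here; real bitmaps would come from computer-generated
        # holography).
        SYMBOL_SHAPE = (64, 64)
        rng = np.random.default_rng(4)
        t3d_font = {ch: rng.random(SYMBOL_SHAPE) for ch in "0123456789NSEW^"}

        def build_hologram_row(text):
            """Retrieve the stored bitmap of each symbol and place the bitmaps side
            by side, producing the composite bitmap for one row of symbols."""
            tiles = [t3d_font[ch] for ch in text if ch in t3d_font]
            return np.hstack(tiles) if tiles else np.zeros((SYMBOL_SHAPE[0], 0))

        composite = build_hologram_row("N45E")
        print(composite.shape)   # (64, 64 * number of known symbols in the string)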

  17. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  18. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without using different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem: they provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays; our paper includes an overview from 1912 up to today. During several years of investigations on swept volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX-Team also started investigations in the area of static volume displays. During three years of research on our static volume 3D display, carried out at an ordinary high school in Germany, we achieved considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). However, such crystals are limited to a very small size, which is why we later investigated heavy-metal fluoride glasses that are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to this group, which makes it possible to increase both the display volume and the brightness of the images significantly. Although our display is currently

  19. Adipose- and bone marrow-derived mesenchymal stem cells display different osteogenic differentiation patterns in 3D bioactive glass-based scaffolds.

    PubMed

    Rath, Subha N; Nooeaid, Patcharakamon; Arkudas, Andreas; Beier, Justus P; Strobel, Leonie A; Brandl, Andreas; Roether, Judith A; Horch, Raymund E; Boccaccini, Aldo R; Kneser, Ulrich

    2016-10-01

    Mesenchymal stem cells can be isolated from a variety of different sources, each having their own peculiar merits and drawbacks. Although a number of studies have been conducted comparing these stem cells for their osteo-differentiation ability, these are mostly done in culture plastics. We have selected stem cells from either adipose tissue (ADSCs) or bone marrow (BMSCs) and studied their differentiation ability in highly porous three-dimensional (3D) 45S5 Bioglass®-based scaffolds. Equal numbers of cells were seeded onto 5 × 5 × 4 mm(3) scaffolds and cultured in vitro, with or without osteo-induction medium. After 2 and 4 weeks, the cell-scaffold constructs were analysed for cell number, cell spreading, viability, alkaline phosphatase activity and osteogenic gene expression. The scaffolds with ADSCs displayed osteo-differentiation even without osteo-induction medium; however, with osteo-induction medium osteogenic differentiation was further increased. In contrast, the scaffolds with BMSCs showed no osteo-differentiation without osteo-induction medium; after application of osteo-induction medium, osteo-differentiation was confirmed, although lower than in scaffolds with ADSCs. In general, stem cells in 3D bioactive glass scaffolds differentiated better than cells in culture plastics with respect to their ALP content and osteogenic gene expression. In summary, 45S5 Bioglass-based scaffolds seeded with ADSCs are well-suited for possible bone tissue-engineering applications. Induction of osteogenic differentiation appears unnecessary prior to implantation in this specific setting. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Data acquirement and remodeling on volumetric 3D emissive display system

    NASA Astrophysics Data System (ADS)

    Yao, Yi; Liu, Xu; Lin, Yuanfang; Zhang, Huangzhu; Zhang, Xiaojie; Liu, Xiangdong

    2005-01-01

    Since present display technology projects 3D scenes onto 2D screens, viewers' eyes are deceived by the loss of spatial data, so developing a real 3D display device would be a revolution for human vision. The monitor described here is based on an emissive pad with a 64 × 256 LED array. When rotated at a frequency of 10 Hz, it shows real 3D images with pixels at their exact spatial positions. The article presents a procedure by which software processes a 3D object and converts it to volumetric 3D formatted data for this system. To simulate the phenomenon on a PC, a program that remodels the object based on OpenGL is also presented, together with an algorithm for faster processing and optimized rendering speed. The monitor provides real 3D scenes with a free viewing angle. It can be expected that this development will have a strong impact on modern monitors and open a new world for display technology.
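
    A minimal Python sketch of the kind of conversion described above, i.e. turning a 3D point set into per-angle frames for a rotating 64 × 256 LED panel; the slice count, the panel axis mapping and the function names are assumptions, not values from the paper.

        import numpy as np

        # Hypothetical parameters for a rotating-panel volumetric display like the
        # one described above: a 64 x 256 LED pad swept at 10 revolutions per second.
        N_RADIUS, N_HEIGHT = 64, 256      # assumed panel layout (radius x height)
        N_SLICES = 360                    # assumed number of angular refresh positions

        def voxelize(points, r_max, z_min, z_max):
            """Convert Cartesian points (x, y, z) into per-angle on/off frames for
            the rotating LED panel. Purely illustrative, not the authors' code."""
            frames = np.zeros((N_SLICES, N_RADIUS, N_HEIGHT), dtype=bool)
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            theta = np.mod(np.arctan2(y, x), 2 * np.pi)      # azimuth of each point
            r = np.hypot(x, y)
            s = (theta / (2 * np.pi) * N_SLICES).astype(int) % N_SLICES
            ri = np.clip((r / r_max * N_RADIUS).astype(int), 0, N_RADIUS - 1)
            zi = np.clip(((z - z_min) / (z_max - z_min) * N_HEIGHT).astype(int),
                         0, N_HEIGHT - 1)
            frames[s, ri, zi] = True                          # light the nearest LED
            return frames

        # Example: a helix rendered as volumetric data.
        t = np.linspace(0, 4 * np.pi, 2000)
        pts = np.column_stack([np.cos(t), np.sin(t), t / (4 * np.pi)])
        frames = voxelize(pts, r_max=1.2, z_min=0.0, z_max=1.0)
        print(frames.shape, frames.sum(), "lit voxels")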

  1. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  2. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE- an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  3. New approach on calculating multiview 3D crosstalk for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Jung, Sung-Min; Lee, Kyeong-Jin; Kang, Ji-Na; Lee, Seung-Chul; Lim, Kyoung-Moon

    2012-03-01

    In this study, we suggest a new concept of 3D crosstalk for auto-stereoscopic displays and obtain 3D crosstalk values of several multi-view systems based on the suggested definition. First, we measure the angular dependence of the luminance of auto-stereoscopic displays under various test patterns corresponding to each view of a multi-view system, and we then calculate the 3D crosstalk from the measured luminance profiles according to our new definition. Our new approach gives a single, unambiguous 3D crosstalk value per device and yields values of a similar order to those of conventional stereoscopic displays. These results are compared with the conventional 3D crosstalk values of selected auto-stereoscopic displays such as 4-view and 9-view systems. From these results, we believe that the new approach is very useful for controlling 3D crosstalk in 3D display manufacturing and for benchmarking 3D performance among various auto-stereoscopic displays.
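
    The abstract does not reproduce the authors' new crosstalk formula, so the following Python sketch only illustrates the kind of data involved: a conventional-style per-view crosstalk computed from angular luminance profiles measured with per-view test patterns (the function name and data layout are assumptions).

        import numpy as np

        def view_crosstalk(luminance, view_angles, view_index):
            """Conventional-style crosstalk for one view of a multi-view display.

            luminance   : array of shape (n_views, n_angles); luminance[v, a] is the
                          luminance measured at angle a when only view v shows white.
            view_angles : for each view, the index of its nominal (optimal) angle.
            view_index  : the view being evaluated.

            Crosstalk = leakage from all other views at this view's optimal angle,
            divided by the luminance of the intended view at that angle.
            """
            a = view_angles[view_index]
            wanted = luminance[view_index, a]
            leakage = luminance[:, a].sum() - wanted
            return leakage / wanted

        # Example with synthetic data for a 4-view system measured at 90 angles.
        rng = np.random.default_rng(0)
        lum = rng.uniform(0.5, 2.0, size=(4, 90))        # stray light everywhere
        optimal = [10, 32, 55, 78]
        for v, a in enumerate(optimal):
            lum[v, a] = 100.0                             # bright peak at the view's angle
        print([round(view_crosstalk(lum, optimal, v), 3) for v in range(4)])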

  4. A novel time-multiplexed autostereoscopic multiview full resolution 3D display

    NASA Astrophysics Data System (ADS)

    Liou, Jian-Chiun; Chen, Fu-Hao

    2012-03-01

    Many people believe that in the future, autostereoscopic 3D displays will become a mainstream display type. Achieving higher quality 3D images requires both higher panel resolution and more viewing zones; consequently, 3D display systems involve enormous amounts of data transfer. We propose and experimentally demonstrate a novel time-multiplexed autostereoscopic multi-view full-resolution 3D display based on a lenticular lens array combined with the control of an active dynamic LED backlight. The lenticular lenses of the lens array receive the light and deflect it into each viewing zone in a time sequence. The crosstalk under different observation scanning angles is shown, including the case of 4-view field scanning. The crosstalk of each viewing zone is about 5%, which is better than that of other 3D display types.

  5. Controllable liquid crystal gratings for an adaptive 2D/3D auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Zhang, Y. A.; Jin, T.; He, L. C.; Chu, Z. H.; Guo, T. L.; Zhou, X. T.; Lin, Z. X.

    2017-02-01

    A 2D/3D-switchable, viewpoint-controllable and 2D/3D-localizable auto-stereoscopic display based on controllable liquid crystal gratings is proposed in this work. Using dual-layer staggered structures on the top and bottom substrates as drive electrodes within a liquid crystal cell, the ratio between the transmitting and shielding regions can be selectively controlled by the corresponding driving circuit, so that 2D/3D switching and 3D video sources with different disparity images can be realized in the same auto-stereoscopic display system. Furthermore, the driving circuit can put selected regions of the liquid crystal gratings into 3D mode while the other regions remain in 2D mode on the same display. This work demonstrates that controllable liquid crystal gratings have potential applications in the field of auto-stereoscopic displays.

  6. Stereoscopic 3D display with color interlacing improves perceived depth.

    PubMed

    Kim, Joohwan; Johnson, Paul V; Banks, Martin S

    2014-12-29

    Temporal interlacing is a method for presenting stereoscopic 3D content whereby the two eyes' views are presented at different times and optical filtering selectively delivers the appropriate view to each eye. This approach is prone to distortions in perceived depth because the visual system can interpret the temporal delay between binocular views as spatial disparity. We propose a novel color-interlacing display protocol that reverses the order of binocular presentation for the green primary but maintains the order for the red and blue primaries: During the first sub-frame, the left eye sees the green component of the left-eye view and the right eye sees the red and blue components of the right-eye view, and vice versa during the second sub-frame. The proposed method distributes the luminance of each eye's view more evenly over time. Because disparity estimation is based primarily on luminance information, a more even distribution of luminance over time should reduce depth distortion. We conducted a psychophysical experiment to test these expectations and indeed found that less depth distortion occurs with color interlacing than temporal interlacing.
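
    The channel-swapping rule described above can be written down directly. Below is a minimal NumPy sketch (hypothetical function name) that composes the two sub-frames from left- and right-eye RGB images; the optical routing of the green versus red/blue channels to each eye is assumed to be handled by the display and eyewear.

        import numpy as np

        def color_interlaced_subframes(left, right):
            """Compose the two sub-frames of the color-interlacing protocol described
            above. Images are HxWx3 arrays in RGB order. In sub-frame 1 the green
            channel carries the left-eye view while red and blue carry the right-eye
            view; in sub-frame 2 the assignment is reversed."""
            sub1 = np.stack([right[..., 0], left[..., 1], right[..., 2]], axis=-1)
            sub2 = np.stack([left[..., 0], right[..., 1], left[..., 2]], axis=-1)
            return sub1, sub2

        # Example: two synthetic 4x4 views. Averaged over the two sub-frames, each
        # eye still receives its full RGB view (assuming ideal channel separation).
        rng = np.random.default_rng(1)
        L = rng.random((4, 4, 3))
        R = rng.random((4, 4, 3))
        s1, s2 = color_interlaced_subframes(L, R)
        print(s1.shape, s2.shape)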

  7. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct the submarine pipeline and its surrounding seabed terrain in the computer using the Horde3D graphics rendering engine, on top of the foundation database "submarine pipeline and relative landforms landscape synthesis database", in order to display a virtual-reality scene of the submarine pipeline and to show the relevant data collected from pipeline monitoring.

  8. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types of presentation method: 3D display systems using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area as large as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. A conventional display can thus show only one screen, and it is impossible to enlarge the screen, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method that uses a mirror. This extension method shows the observers a virtual image plane and doubles the screen area. In the developed display unit we used an image-separating technique based on polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging; the mirror generates the virtual image plane and doubles the screen area. A 3D display system using special glasses can likewise display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  9. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  10. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  11. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    PubMed

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

    This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor than a rectangular lens array; however, each pixel of an elemental image must be assigned to a single hexagonal lens, so generating the entire EIA requires a very large amount of computation. The proposed method reduces the processing time of the EIAs for a given hexagonal lens array. The proposed image space parallel processing (ISPP) method enhances the generation speed enough to realize real-time interactive integral imaging 3D display with a hexagonal lens array. In our experiment, we generated the EIAs for hexagonal lens arrays in real time and obtained good processing times for large volume data over multiple lens array configurations.
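
    The costly step the abstract refers to is deciding, for every display pixel, which hexagonal lens it belongs to. The paper does this per pixel in an OpenCL kernel; the Python sketch below (assumed lattice geometry and function name) only illustrates that assignment for a single pixel.

        import numpy as np

        def nearest_hex_lens(px, py, pitch):
            """Map a display-pixel position (px, py) to the centre of the nearest
            lens of a hexagonal lens array with horizontal pitch `pitch`.
            Illustrative only; the paper performs this assignment in parallel."""
            row_h = pitch * np.sqrt(3) / 2.0          # vertical spacing of lens rows
            row = int(round(py / row_h))
            best, best_d2 = None, np.inf
            for r in (row - 1, row, row + 1):         # check neighbouring rows
                offset = (pitch / 2.0) if (r % 2) else 0.0
                c = int(round((px - offset) / pitch))
                for ci in (c - 1, c, c + 1):
                    cx, cy = ci * pitch + offset, r * row_h
                    d2 = (px - cx) ** 2 + (py - cy) ** 2
                    if d2 < best_d2:
                        best, best_d2 = (cx, cy), d2
            return best

        # Example: assign a few pixels of a hypothetical panel to lenses of pitch 10.
        for p in [(3.0, 4.0), (12.5, 9.1), (27.0, 17.3)]:
            print(p, "->", nearest_hex_lens(*p, pitch=10.0))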

  12. Optimal 3D Viewing with Adaptive Stereo Displays for Advanced Telemanipulation

    NASA Technical Reports Server (NTRS)

    Lee, S.; Lakshmanan, S.; Ro, S.; Park, J.; Lee, C.

    1996-01-01

    A method of optimal 3D viewing based on adaptive displays of stereo images is presented for advanced telemanipulation. The method provides the viewer with the capability of accurately observing a virtual 3D object or local scene of his/her choice with minimum distortion.

  13. Real-time hardware for a new 3D display

    NASA Astrophysics Data System (ADS)

    Kaufmann, B.; Akil, M.

    2006-02-01

    We describe in this article a new multi-view auto-stereoscopic display system with a real-time architecture that generates images of n different points of view of a 3D scene. This architecture produces all the different points of view in a single generation process: the pictures are not generated independently but all at the same time. The architecture builds a frame buffer that contains all the voxels with their three dimensions and regenerates the different pictures on demand from this buffer. The memory requirement is decreased because the buffer contains no redundant information.

  14. Stereopsis has the edge in 3-D displays

    NASA Astrophysics Data System (ADS)

    Piantanida, T. P.

    The results of studies conducted at SRI International to explore differences in image requirements for depth and form perception with 3-D displays are presented. Monocular and binocular stabilization of retinal images was used to separate form and depth perception and to eliminate the retinal disparity input to stereopsis. Results suggest that depth perception is dependent upon illumination edges in the retinal image that may be invisible to form perception, and that the perception of motion-in-depth may be inhibited by form perception, and may be influenced by subjective factors such as ocular dominance and learning.

  15. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  16. Visual discomfort caused by color asymmetry in 3D displays

    NASA Astrophysics Data System (ADS)

    Chen, Zaiqing; Huang, Xiaoqiao; Tai, Yonghan; Shi, Junsheng; Yun, Lijun

    2016-10-01

    Color asymmetry is a common phenomenon in 3D displays, which can cause serious visual discomfort. To ensure safe and comfortable stereo viewing, the color difference between the left and right eyes should not exceed a threshold value, named the comfortable color difference limit (CCDL). In this paper, we have experimentally measured the CCDL for five sample color points selected from the 1976 CIE u'v' chromaticity diagram. A psychophysical experiment was conducted in which human observers viewed brief presentations of color-asymmetric image pairs. As the color-asymmetric image pairs, left and right circular patches were shifted horizontally on the image pixels with five levels of disparity (0, ±60 and ±120 arc minutes) along six color directions. The experimental results showed that the CCDL for each sample point varied with the level of disparity and the color direction. The minimum CCDL was 0.019 Δu'v' and the maximum was 0.133 Δu'v'. The database collected in this study may help 3D system design and 3D content creation.
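
    The Δu'v' metric used for the CCDL is the standard CIE 1976 UCS color difference. Below is a minimal Python sketch with a hypothetical comfort check against the most conservative reported CCDL value; the example colors and threshold choice are assumptions.

        import math

        def xyz_to_uv_prime(X, Y, Z):
            """CIE 1976 UCS chromaticity coordinates u', v' from tristimulus XYZ."""
            d = X + 15.0 * Y + 3.0 * Z
            return 4.0 * X / d, 9.0 * Y / d

        def delta_uv_prime(left_xyz, right_xyz):
            """Euclidean color difference Δu'v' between left- and right-eye colors."""
            ul, vl = xyz_to_uv_prime(*left_xyz)
            ur, vr = xyz_to_uv_prime(*right_xyz)
            return math.hypot(ul - ur, vl - vr)

        # Example: compare against the most conservative value of the reported
        # CCDL range (0.019 ... 0.133 Δu'v').
        CCDL = 0.019
        d = delta_uv_prime((41.0, 43.0, 52.0), (43.0, 43.5, 49.0))
        print(f"Δu'v' = {d:.4f}",
              "-> may cause discomfort" if d > CCDL else "-> comfortable")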

  17. Fast-response liquid-crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Liu, Yifan; Ren, Hongwen; Xu, Su; Li, Yan; Wu, Shin-Tson

    2014-02-01

    Three-dimensional (3D) display has become an increasingly important technology trend for information display applications. Dozens of different 3D display solutions have been proposed. The autostereoscopic 3D display based on a lenticular microlens array is a promising approach, and a fast-switching microlens array enables such a system to display both 3D and conventional 2D images. Here we report two different fast-response microlens array designs. The first one is a blue-phase liquid crystal lens driven by PEDOT:PSS resistive film electrodes. This BPLC lens exhibits several attractive features, such as polarization insensitivity, fast response time, simple driving scheme, and relatively low driving voltage, as compared to other BPLC lens designs. The second lens design has a double-layered structure: the first layer is a polarization-dependent polymer microlens array, and the second layer is a thin twisted-nematic (TN) liquid crystal cell. When the TN cell is switched on/off, the light traversing the polymeric lens array is either focused or defocused, so that 2D/3D images are displayed correspondingly. This lens design has low driving voltage, fast response time, and a simple driving scheme. Simulation and experiment demonstrate that the performance of both switchable lenses meets the requirements of 3D display system design.

  18. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high-quality homographies between pairs of imagers to develop a globally optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically-diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means that vertical epipolar disparities of the captured signal are minimized, and the projector calibration means that the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
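
    The pairwise building block of such a calibration, a homography between two imagers, can be sketched with OpenCV. This is not the authors' global optimization over epipoles and fundamental matrices, only the per-pair step, and the matched points are assumed to be given.

        import numpy as np
        import cv2

        def pairwise_homography(pts_a, pts_b):
            """Estimate the homography mapping image points of camera A onto camera B.
            pts_a, pts_b: Nx2 arrays of matched feature locations (assumed given)."""
            H, inliers = cv2.findHomography(pts_a.astype(np.float32),
                                            pts_b.astype(np.float32),
                                            cv2.RANSAC, 3.0)
            return H, inliers.ravel().astype(bool)

        # Example with synthetic correspondences related by a known homography.
        H_true = np.array([[1.02, 0.01, 5.0],
                           [0.00, 0.98, -3.0],
                           [1e-5, 0.0, 1.0]])
        pts = np.random.default_rng(2).uniform(0, 640, size=(50, 2))
        proj = (H_true @ np.hstack([pts, np.ones((50, 1))]).T).T
        pts_b = proj[:, :2] / proj[:, 2:3]
        H_est, mask = pairwise_homography(pts, pts_b)
        print(np.round(H_est / H_est[2, 2], 3))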

  19. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied, by various DGPS sources including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  20. Air-touch interaction system for integral imaging 3D display

    NASA Astrophysics Data System (ADS)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for the tabletop type integral imaging 3D display. This system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. In this system, we use multi-layer B-spline surface approximation on the input hand image to detect the fingertip and gestures easily at heights of less than 10 cm above the screen. The proposed system can serve as an effective human-computer interaction method for the tabletop type 3D display.

  1. A diffuser-based three-dimensional measurement of polarization-dependent scattering characteristics of optical films for 3D-display applications.

    PubMed

    Kim, Dae-Yeon; Seo, Jong-Wook

    2015-01-26

    We propose an accurate and easy-to-use three-dimensional measurement method using a diffuser plate to analyze the scattering characteristics of optical films. The far-field radiation pattern of light scattered by the optical film is obtained from the illuminance pattern created on the diffuser plate by the light. A mathematical model and calibration methods were described, and the results were compared with those obtained by a direct measurement using a luminance meter. The new method gave very precise three-dimensional polarization-dependent scattering characteristics of scattering polarizer films, and it can play an effective role in developing high performance polarization-selective screens for 3D display applications.
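
    The paper's own mathematical model and calibration are not reproduced in the abstract; the Python sketch below only shows the basic geometric conversion from an illuminance map on a flat diffuser plate to an approximate far-field intensity pattern, assuming an ideal, point-like sample at distance d and an ideal diffuser.

        import numpy as np

        def farfield_intensity(E, pixel_pitch, d):
            """Convert an illuminance map E(x, y) recorded on a flat diffuser plate
            at distance d from a small scattering sample into an approximate
            far-field radiant intensity I(theta, phi). Idealized geometry only."""
            h, w = E.shape
            ys, xs = np.meshgrid((np.arange(h) - h / 2) * pixel_pitch,
                                 (np.arange(w) - w / 2) * pixel_pitch, indexing="ij")
            r = np.sqrt(xs**2 + ys**2 + d**2)
            cos_t = d / r                    # obliquity on the diffuser plane
            I = E * r**2 / cos_t             # inverse-square law + obliquity factor
            theta = np.degrees(np.arccos(cos_t))
            phi = np.degrees(np.arctan2(ys, xs))
            return I, theta, phi

        # Example with a synthetic illuminance pattern on a 101 x 101 pixel plate.
        E = np.fromfunction(
            lambda i, j: 1.0 / (1 + ((i - 50)**2 + (j - 50)**2) / 400.0), (101, 101))
        I, theta, phi = farfield_intensity(E, pixel_pitch=1.0, d=50.0)
        print(I.shape, round(float(theta.max()), 1))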

  2. Recent research results in stereo 3-D pictorial displays at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.

    1990-01-01

    Recent results from a NASA-Langley program which addressed stereo 3D pictorial displays from a comprehensive standpoint are reviewed. The program dealt with human factors issues and display technology aspects, as well as flight display applications. The human factors findings include addressing a fundamental issue challenging the application of stereoscopic displays in head-down flight applications, with the determination that stereoacuity is unaffected by the short-term use of stereo 3D displays. While stereoacuity has been a traditional measurement of depth perception abilities, it is a measure of relative depth, rather than actual depth (absolute depth). Therefore, depth perception effects based on size and distance judgments and long-term stereo exposure remain issues to be investigated. The applications of stereo 3D to pictorial flight displays within the program have repeatedly demonstrated increases in pilot situational awareness and task performance improvements. Moreover, these improvements have been obtained within the constraints of the limited viewing volume available with conventional stereo displays. A number of stereo 3D pictorial display applications are described, including recovery from flight-path offset, helicopter hover, and emulated helmet-mounted display.

  3. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to display to the pilot a comprehensive image of the surrounding world without misleading or cluttering information. 3D data which can be attributed, i.e. classified, to terrain or to predefined obstacle classes is depicted differently from data belonging to elevated objects which could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, which on the one hand allows a cohesive structure to be displayed and on the other hand ensures that moving objects are displayed correctly. Furthermore, color coding or texturing can be applied based on known terrain features such as land use.
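
    The rendering policy described above can be restated as a small decision table. The Python sketch below is illustrative only; the class names, the accumulation window and the head-up simplification are assumptions, not values from the paper.

        from dataclasses import dataclass
        from enum import Enum, auto

        class PointClass(Enum):
            TERRAIN = auto()
            OBSTACLE = auto()        # matched a predefined obstacle class
            UNCLASSIFIED = auto()    # elevated object that could not be classified

        @dataclass
        class LidarPoint:
            x: float
            y: float
            z: float
            cls: PointClass
            age_s: float             # time since the point was measured

        ACCUMULATION_WINDOW_S = 2.0  # assumed accumulation time frame

        def render_mode(p: LidarPoint, head_up: bool):
            """Terrain -> shaded surface / grid, classified obstacles -> symbol,
            unclassified elevated points -> shaded 3D points kept for a while."""
            if p.cls is PointClass.TERRAIN:
                return "grid" if head_up else "shaded_surface_with_grid"
            if p.cls is PointClass.OBSTACLE:
                return "obstacle_symbol"
            return "shaded_point" if p.age_s <= ACCUMULATION_WINDOW_S else None

        pts = [LidarPoint(0, 0, 0, PointClass.TERRAIN, 0.1),
               LidarPoint(5, 2, 8, PointClass.OBSTACLE, 0.4),
               LidarPoint(9, 1, 3, PointClass.UNCLASSIFIED, 3.5)]
        print([render_mode(p, head_up=True) for p in pts])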

  4. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  5. Evaluation of stereoscopic 3D displays for image analysis tasks

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, which results in a higher performance. Changing image acquisition from analog to digital techniques entailed the change of stereoscopic visualisation techniques. Recently different kinds of digital stereoscopic display techniques with affordable prices have appeared on the market. At Fraunhofer IITB usability tests were carried out to find out (1) with which kind of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve a high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with a higher performance using stereoscopic display techniques. Next, observer experiments were carried out whereby image analysts had to solve defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the used display techniques) two of the examined stereoscopic display technologies were found to be very good and appropriate.

  6. Scalable 3D GIS environment managed by 3D-XML-based modeling

    NASA Astrophysics Data System (ADS)

    Shi, Beiqi; Rui, Jianxun; Chen, Neng

    2008-10-01

    Nowadays, so-called 3D GIS technologies have become a key factor in establishing and maintaining large-scale 3D geoinformation services. However, with the rapidly increasing size and complexity of the 3D models being acquired, a pressing need for suitable data management solutions has become apparent. This paper argues that the storage and exchange of geospatial data between databases and different front ends such as 3D models, GIS or internet browsers require a standardized format that is capable of representing instances of 3D GIS models, minimizing the loss of information during data transfer, and reducing interface development effort. After a review of previous methods for spatial 3D data management, a universal lightweight XML-based format for quick and easy sharing of 3D GIS data is presented. XML-based 3D data management meets these requirements and provides an efficient, standard way to create arbitrary data structures and share them over the Internet. To manage reality-based 3D models, this paper uses 3DXML produced by Dassault Systemes. 3DXML uses open XML schemas to communicate product geometry, structure and graphical display properties. It can be read, written and enriched by standard tools, and it allows users to add extensions based on their own specific requirements. The paper concludes with the presentation of projects from application areas which will benefit from the functionality presented above.

  7. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.

  8. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge parts of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system, many users can simultaneously view natural 3D objects with consistent position and posture. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3D images without contradiction between binocular convergence and focal accommodation.

  9. Dual side transparent OLED 3D display using Gabor super-lens

    NASA Astrophysics Data System (ADS)

    Chestak, Sergey; Kim, Dae-Sik; Cho, Sung-Woo

    2015-03-01

    We devised a dual-side transparent 3D display using a transparent OLED panel and two lenticular arrays. The OLED panel is sandwiched between two parallel confocal lenticular arrays, forming a Gabor super-lens. The display provides dual-side stereoscopic 3D imaging as well as a floating image of an object placed behind it. The floating image can be superimposed on the displayed 3D image. The displayed autostereoscopic 3D images are composed of 4 views, each with a resolution of 64 × 90 pixels.

  10. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  11. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. The steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. Advanced interactive platforms integrated with liquid crystal displays are leading a trend towards providing an alternative option not only for the handicapped but also for the general public, making our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature about SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good presentation quality, varied stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned retarder 3D display. The results show that there is a significant difference (p-value < 0.05) between large and small disparity angles, and that the signal-to-noise ratios (SNRs) for small disparity angles are higher than those for large disparity angles. Based on the 3D perception results and the SSVEP responses (SNR), 3D stimuli with smaller disparity and lower crosstalk are more suitable for such applications. Furthermore, the 3D perception of users can be inferred from their SSVEP responses, allowing the disparity of 3D images to be adjusted automatically in the future.
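
    The SNR figure of merit for SSVEP responses is commonly computed as the spectral power at the stimulation frequency relative to neighbouring bins; the exact metric used in this study is not given in the abstract. A minimal Python sketch under that common definition:

        import numpy as np

        def ssvep_snr(eeg, fs, stim_freq, n_neighbors=5):
            """Ratio of spectral power at the stimulation frequency to the mean
            power of neighbouring frequency bins (a common SSVEP SNR definition,
            not necessarily the one used in the study above)."""
            spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
            freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
            k = int(np.argmin(np.abs(freqs - stim_freq)))        # target bin
            neighbors = np.r_[k - n_neighbors:k, k + 1:k + n_neighbors + 1]
            return spectrum[k] / spectrum[neighbors].mean()

        # Example: synthetic 10 Hz SSVEP buried in noise, 4 s at 250 Hz sampling.
        fs, f0 = 250, 10.0
        t = np.arange(0, 4, 1 / fs)
        eeg = (0.5 * np.sin(2 * np.pi * f0 * t)
               + np.random.default_rng(3).normal(0, 1, t.size))
        print(f"SNR at {f0} Hz: {ssvep_snr(eeg, fs, f0):.1f}")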

  12. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, i.e. the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness in the viewing zone. The eye tracking, which monitors the positions of the viewer's eyes, enables the pixel data control software to turn on only the pixels of the view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The software combined with eye tracking provides the correct images to the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display without eye tracking. Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly mitigate the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
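
    The eye-tracked pixel switching can be pictured as masking view indices. The Python sketch below is a toy restatement of turning on only the pixels of views near the tracked eyes; the view-to-subpixel map, the 'spread' parameter and all names are assumptions, not taken from the paper.

        import numpy as np

        def active_view_mask(n_views, eye_views, spread=1):
            """Boolean mask over view indices: only views within `spread` of the
            views currently seen by the tracked left/right eyes stay on."""
            mask = np.zeros(n_views, dtype=bool)
            for v in eye_views:
                lo, hi = max(0, v - spread), min(n_views, v + spread + 1)
                mask[lo:hi] = True
            return mask

        def enable_pixels(view_of_pixel, mask):
            """view_of_pixel: HxW array giving, for every subpixel, the view index
            it belongs to behind the parallax barrier. Inactive views go dark."""
            return mask[view_of_pixel]

        # Example: 9-view panel, tracked eyes currently sitting in views 3 and 6.
        n_views = 9
        view_of_pixel = np.tile(np.arange(n_views), (4, 3))   # toy interleaving
        mask = active_view_mask(n_views, eye_views=(3, 6))
        print(mask)
        print(enable_pixels(view_of_pixel, mask).astype(int))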

  13. Synthesis and display of dynamic holographic 3D scenes with real-world objects.

    PubMed

    Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea; Näsänen, Risto; Naughton, Thomas J; Ferraro, Pietro

    2010-04-26

    A 3D scene is synthesized combining multiple optically recorded digital holograms of different objects. The novel idea consists of compositing moving 3D objects in a dynamic 3D scene using a process that is analogous to stop-motion video. However in this case the movie has the exciting attribute that it can be displayed and observed in 3D. We show that 3D dynamic scenes can be projected as an alternative to complicated and heavy computations needed to generate realistic-looking computer generated holograms. The key tool for creating the dynamic action is based on a new concept that consists of a spatial, adaptive transformation of digital holograms of real-world objects allowing full control in the manipulation of the object's position and size in a 3D volume with very high depth-of-focus. A pilot experiment to evaluate how viewers perceive depth in a conventional single-view display of the dynamic 3D scene has been performed.

  14. Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2016-10-20

    A method for a continuous optical rotation compensation in a time-division-based holographic three-dimensional (3D) display with a rotating mirror is presented. Since the coordinate system of wavefronts after the mirror reflection rotates about the optical axis along with the rotation angle, compensation or cancellation is absolutely necessary to fix the reconstructed 3D object. In this study, we address this problem by introducing an optical image rotator based on a right-angle prism that rotates synchronously with the rotating mirror. The optical and continuous compensation reduces the occurrence of duplicate images, which leads to the improvement of the quality of reconstructed images. The effect of the optical rotation compensation is experimentally verified and a demonstration of holographic 3D display with the optical rotation compensation is presented.

  15. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and period of each nano-grating pixel. However, such a 3D display screen has been restricted to a limited size due to the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm2/min, a 500-fold increase in efficiency compared with E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence along the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for the 9-view 3D images with horizontal parallax. In the other prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for the 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
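
    The view-zone layout of such a phase plate follows from the grating equation. A minimal Python sketch, assuming first-order diffraction at normal incidence (the pixel period sets the deflection angle, the grating orientation sets the azimuth of the view zone); parameter names and values are illustrative only, not taken from the paper.

        import math

        def grating_pixel(wavelength_nm, theta_deg, phi_deg):
            # Grating equation at normal incidence: sin(theta) = wavelength / period.
            period_nm = wavelength_nm / math.sin(math.radians(theta_deg))
            # The grating vector is oriented toward the azimuth of the target view.
            orientation_deg = phi_deg
            return period_nm, orientation_deg

        # A 532 nm beam steered 20 degrees off-axis toward a view at 45 degrees azimuth.
        print(grating_pixel(532.0, theta_deg=20.0, phi_deg=45.0))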

  16. Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.

    PubMed

    Saveljev, Vladimir; Kim, Sung-Kyu

    2015-10-05

    The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. The two functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used in the minimization of the moiré effect in visual displays, especially in autostereoscopic 3D displays.
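
    For reference, Thomae's function mentioned above takes the value 1/q at a rational p/q (in lowest terms) and 0 at irrationals. A minimal Python sketch that evaluates it on a rational approximation of a grid-angle slope; reading larger values as flags for the "simple" slopes where strong moiré is more likely is an illustration of the idea, not the authors' exact estimator.

        from fractions import Fraction

        def thomae(x, max_denominator=1000, tol=1e-12):
            # 1/q if x is (numerically) the rational p/q in lowest terms with
            # q <= max_denominator, otherwise 0.
            frac = Fraction(x).limit_denominator(max_denominator)
            return 1.0 / frac.denominator if abs(float(frac) - x) < tol else 0.0

        for slope in (0.5, 1.0 / 3.0, 0.123456):
            print(slope, thomae(slope))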

  17. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigue induced by active shutter stereoscopic 3D (S3D) displays has been reported using event-related brain potentials (ERP). An important question, namely whether such effects (visual and mental fatigue) can also be found with passive polarized S3D displays, is answered here. Sixty-eight healthy participants are divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos on a passive polarized display or to a 2D display. The age and fluid intelligence ability of the participants are controlled between the groups. The ERP results do not show any significant differences between the S3D and 2D groups with respect to the aftereffects of S3D in terms of visual and mental fatigue. Hence, we conclude that passive polarized S3D display technology may not induce the visual and/or mental fatigue that would increase cognitive load and suppress the ERP components.

  18. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    A computer-generated hologram (CGH) should be obtained with high accuracy and high speed for 3D holographic display, and most research focuses on the high speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look-up table method and the compressed look-up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where a huge amount of data is required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.
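
    As background on the look-up-table idea (a sketch of the generic approach, not the authors' AC-LUT), the Fresnel fringe pattern of a point source at each depth can be precomputed once and then shifted and accumulated for every object point at that depth; wavelength, pixel pitch, and hologram size below are illustrative assumptions.

        import numpy as np

        def precompute_fringe(depth_m, wavelength_m, pitch_m, size):
            n = np.arange(size) - size // 2
            x, y = np.meshgrid(n * pitch_m, n * pitch_m)
            r = np.sqrt(x**2 + y**2 + depth_m**2)
            return np.exp(1j * 2 * np.pi * r / wavelength_m)

        def cgh_from_points(points, wavelength_m=532e-9, pitch_m=8e-6, size=512):
            # points: (x_px, y_px, depth_m, amplitude); one table entry per depth.
            table, holo = {}, np.zeros((size, size), dtype=complex)
            for x_px, y_px, depth_m, amp in points:
                if depth_m not in table:
                    table[depth_m] = precompute_fringe(depth_m, wavelength_m, pitch_m, size)
                holo += amp * np.roll(table[depth_m], (int(y_px), int(x_px)), axis=(0, 1))
            return np.angle(holo)   # phase-only hologram for a phase SLM

        phase = cgh_from_points([(0, 0, 0.20, 1.0), (40, -25, 0.25, 0.8)])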

  19. Characteristics measurement methodology of the large-size autostereoscopic 3D LED display

    NASA Astrophysics Data System (ADS)

    An, Pengli; Su, Ping; Zhang, Changjie; Cao, Cong; Ma, Jianshe; Cao, Liangcai; Jin, Guofan

    2014-11-01

    Large-size autostereoscopic 3D LED displays are commonly used outdoors or in large indoor spaces, and have the properties of long viewing distance and relatively low light intensity at the viewing distance. The instruments used to measure the characteristics (crosstalk, inconsistency, chromatic dispersion, etc.) of such displays should have a long working distance and high sensitivity. In this paper, we propose a measurement methodology based on a distribution photometer with a working distance of 5.76 m and an illumination sensitivity of 0.001 mlx. A display panel holder is fabricated and attached to the turning stage of the distribution photometer. Specific test images are loaded on the display separately, and the luminance data at a distance of 5.76 m from the panel are measured. The data are then transformed into the light intensity at the optimum viewing distance. According to the definitions of the characteristics of 3D displays, the crosstalk, inconsistency, and chromatic dispersion can be calculated. The test results and an analysis of the characteristics of an autostereoscopic 3D LED display are presented.
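
    A minimal Python sketch of the crosstalk calculation step, assuming the common definition (luminance leaking from all unintended views divided by the luminance of the intended view, black level subtracted) and per-view luminance values already converted to the optimum viewing positions; this is illustrative, not the exact formula used in the paper.

        import numpy as np

        def view_crosstalk(luminance, black_level):
            # luminance[i, j]: value at viewing position i when only the test
            # image of view j is driven white and all other views are black.
            L = np.asarray(luminance, dtype=float) - black_level
            crosstalk = np.empty(L.shape[0])
            for i in range(L.shape[0]):
                leakage = L[i].sum() - L[i, i]
                crosstalk[i] = leakage / L[i, i]
            return crosstalk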

  20. A new way to characterize autostereoscopic 3D displays using Fourier optics instrument

    NASA Astrophysics Data System (ADS)

    Boher, P.; Leroux, T.; Bignon, T.; Collomb-Patton, V.

    2009-02-01

    Autostereoscopic 3D displays presently offer the most attractive solution for entertainment and media consumption. Despite many studies devoted to this type of technology, efficient characterization methods are still missing. We present here an innovative optical method based on high-angular-resolution viewing angle measurements with a Fourier optics instrument. This type of instrument allows the full viewing angle aperture of the display to be measured very rapidly and accurately. The system used in the study has a very high angular resolution, below 0.04 degree, which is mandatory for this type of characterization. From the luminance or color viewing angle measurements of the different views of the 3D display, we can predict what will be seen by an observer at any position in front of the display. Quality criteria are derived both for 3D and standard properties at any observer position, and the Qualified Stereo Viewing Space (QSVS) is determined. The use of viewing angle measurements at different locations on the display surface during the observer computation gives a more realistic estimation of the QSVS and ensures its validity for the entire display surface. Optimum viewing position, viewing freedom, color shifts, and standard parameters are also quantified. Simulation of moiré issues can be performed, leading to a better understanding of their origin.

  1. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directional, continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample placed at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  2. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
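
    The monitor-based mode described above reduces to slicing the volume along the three axes through the selected point. A minimal Python sketch, assuming the volume is indexed as (slice, row, column); the array dimensions simply mirror the figures quoted in the abstract.

        import numpy as np

        def orthogonal_sections(volume, point):
            z, y, x = point
            axial    = volume[z, :, :]
            coronal  = volume[:, y, :]
            sagittal = volume[:, :, x]
            return axial, coronal, sagittal

        vol = np.random.rand(26, 256, 256)         # 26 slices of 256 x 256, as above
        axial, coronal, sagittal = orthogonal_sections(vol, (13, 128, 128))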

  3. Long-range 3D display using a collimated multi-layer display.

    PubMed

    Park, Soon-Gi; Yamaguchi, Yuta; Nakamura, Junya; Lee, Byoungho; Takaki, Yasuhiro

    2016-10-03

    We propose a long-range three-dimensional (3D) display using collimated optics with a multi-plane configuration. By using a spherical screen and a collimating lens, users observe the collimated image on the spherical screen, which simulates an image plane located at optical infinity. By combining and modulating the overlapped multi-plane images, the observed image is located at the desired depth position within the volume spanned by the multiple planes. The feasibility of the system is demonstrated by an experimental setup composed of a planar and a spherical screen with a collimating lens. In addition, the accommodation properties of the proposed system are demonstrated according to the depth modulation method.
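
    Depth modulation between two overlapped planes is commonly done with a depth-fused luminance split. A minimal Python sketch, assuming a simple linear weighting between the front and rear image planes; this is an assumption for illustration, not necessarily the exact modulation used in this system.

        def depth_fused_weights(target_depth, front_depth, rear_depth):
            # Luminance fraction sent to the front plane so the fused image is
            # perceived at target_depth between the two planes.
            w_front = (rear_depth - target_depth) / (rear_depth - front_depth)
            w_front = min(max(w_front, 0.0), 1.0)
            return w_front, 1.0 - w_front

        print(depth_fused_weights(target_depth=1.5, front_depth=1.0, rear_depth=3.0))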

  4. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations.

  5. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays.

    PubMed

    Kim, Chi Jung; Park, Sangin; Won, Myeung Ju; Whang, Mincheol; Lee, Eui Chul

    2013-09-26

    Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations with heart rate, which indicated little difference between groups. In contrast, galvanic skin response and skin temperature showed weak positive correlations, with an average difference between viewing 2D and 3D displays. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.

  6. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  7. Autostereoscopic 3D flat panel display using an LCD-pixel-associated parallax barrier

    NASA Astrophysics Data System (ADS)

    Chen, En-guo; Guo, Tai-liang

    2014-05-01

    This letter reports an autostereoscopic three-dimensional (3D) flat panel display system employing a newly designed LCD-pixel-associated parallax barrier (LPB). The barrier's parameters can be conveniently determined by the LCD pixels and can help to greatly simplify the conventional design. The optical system of the proposed 3D display is built and simulated to verify the design. For further experimental demonstration, a 508-mm autostereoscopic 3D display prototype is developed and it presents good stereoscopic images. Experimental results agree well with the simulation, which reveals a strong potential for 3D display applications.

  8. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  9. Volumetric 3D display with multi-layered active screens for enhanced depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various psychological factors such as accommodation, binocular parallax, convergence, and motion parallax are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this approach causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all of the depth perception factors, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays can represent images using voxels, which are physical volume elements; however, a large amount of data is required to represent the depth information with voxels. In order to encode 3D information simply, a compact type of depth-fused 3D (DFD) display, which can create a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image, is introduced. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. In order to introduce the PDDM image, the polarization state of the light passing through the spatial light modulator (SLM) was analyzed in terms of Stokes parameters as a function of gray level. Based on this analysis, the polarization controller was properly designed to convert PDDM images into sectioned depth images. After synchronizing the PDDM images with the active screens, we can realize a reconstructed 3D image. Acknowledgment: This work was supported by `The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.

  10. Polymeric-lens-embedded 2D/3D switchable display with dramatically reduced crosstalk.

    PubMed

    Zhu, Ruidong; Xu, Su; Hong, Qi; Wu, Shin-Tson; Lee, Chiayu; Yang, Chih-Ming; Lo, Chang-Cheng; Lien, Alan

    2014-03-01

    A two-dimensional/three-dimensional (2D/3D) display system is presented based on a twisted-nematic cell integrated polymeric microlens array. This device structure has the advantages of fast response time and low operation voltage. The crosstalk of the system is analyzed in detail and two approaches are proposed to reduce the crosstalk: a double lens system and the prism approach. Illuminance distribution analysis proves these two approaches can dramatically reduce crosstalk, thus improving image quality.

  11. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  12. Multiplexing encoding method for full-color dynamic 3D holographic display.

    PubMed

    Xue, Gaolei; Liu, Juan; Li, Xin; Jia, Jia; Zhang, Zhao; Hu, Bin; Wang, Yongtian

    2014-07-28

    A multiplexing encoding method is proposed and demonstrated for reconstructing color images accurately by using a single phase-only spatial light modulator (SLM). It encodes the light waves at different wavelengths into one pure-phase hologram simultaneously, based on analytic formulas. The three-dimensional (3D) images can be reconstructed clearly when the light waves at the different wavelengths are incident on the encoded hologram. Numerical simulations and optical experiments for 2D and 3D color images are performed. The results show that color reconstructed images with high quality are achieved successfully. The proposed multiplexing method is a simple and fast encoding approach, and the system is small and compact. It is expected to be used for realizing full-color 3D holographic display in the future.

  13. Compact multi-projection 3D display system with light-guide projection.

    PubMed

    Lee, Chang-Kun; Park, Soon-gi; Moon, Seokil; Hong, Jong-Young; Lee, Byoungho

    2015-11-02

    We propose a compact multi-projection based multi-view 3D display system using an optical light-guide, and perform an analysis of the characteristics of the image for distortion compensation via an optically equivalent model of the light-guide. The projected image traveling through the light-guide experiences multiple total internal reflections at the interface. As a result, the projection distance in the horizontal direction is effectively reduced to the thickness of the light-guide, and the projection part of the multi-projection based multi-view 3D display system is minimized. In addition, we deduce an equivalent model of such a light-guide to simplify the analysis of the image distortion in the light-guide. From the equivalent model, the focus of the image is adjusted, and pre-distorted images for each projection unit are calculated by two-step image rectification in air and the material. The distortion-compensated view images are represented on the exit surface of the light-guide when the light-guide is located in the intended position. Viewing zones are generated by combining the light-guide projection system, a vertical diffuser, and a Fresnel lens. The feasibility of the proposed method is experimentally verified and a ten-view 3D display system with a minimized structure is implemented.

  14. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics, and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
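
    For the stereo-vision route, depth follows from the standard pinhole relation Z = f*B/d once a disparity map is available. A minimal Python sketch with illustrative parameter values, assuming a rectified camera pair:

        def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
            # Z = f * B / d for a rectified stereo pair; zero disparity means the
            # point is at infinity (or the match is invalid).
            if disparity_px <= 0:
                return float("inf")
            return focal_length_px * baseline_m / disparity_px

        print(depth_from_disparity(disparity_px=32.0, focal_length_px=1200.0, baseline_m=0.10))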

  15. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  16. Application of a 3D volumetric display for radiation therapy treatment planning I: quality assurance procedures.

    PubMed

    Gong, Xing; Kirk, Michael Collins; Napoli, Josh; Stutsman, Sandy; Zusag, Tom; Khelashvili, Gocha; Chu, James

    2009-07-17

    The aim of this work was to design and implement a set of quality assurance tests for an innovative 3D volumetric display for radiation treatment planning applications. A genuine 3D display (Perspecta Spatial 3D, Actuality-Systems Inc., Bedford, MA) has been integrated with the Pinnacle TPS (Philips Medical Systems, Madison, WI) for treatment planning. The Perspecta 3D display renders a 25 cm diameter volume that is viewable from any side, floating within a translucent dome. In addition to displaying all 3D data exported from Pinnacle, the system provides a 3D mouse to define beam angles and apertures and to measure distance. The focus of this work is the design and implementation of a quality assurance program for 3D displays and specific 3D planning issues as guided by AAPM Task Group Report 53. A series of acceptance and quality assurance tests have been designed to evaluate the accuracy of CT images, contours, beams, and dose distributions as displayed on Perspecta. Three-dimensional matrices, rulers, and phantoms with known spatial dimensions were used to check Perspecta's absolute spatial accuracy. In addition, a system of tests was designed to confirm Perspecta's ability to import and display Pinnacle data consistently. CT scans of phantoms were used to confirm beam field size, divergence, and gantry and couch angular accuracy as displayed on Perspecta. Beam angles were verified through Cartesian coordinate system measurements and by CT scans of phantoms rotated at known angles. Beams designed on Perspecta were exported to Pinnacle and checked for accuracy. Doses at sampled points were checked for consistency with Pinnacle and agreed within 1% or 1 mm. All data exported from Pinnacle to Perspecta were displayed consistently. The 3D spatial display of images, contours, and dose distributions was consistent with the Pinnacle display. When measured by the 3D ruler, the distances between any two points calculated using Perspecta agreed with Pinnacle within the measurement error.

  17. Design and Perception Testing of a Novel 3-D Autostereoscopic Holographic Display System

    DTIC Science & Technology

    1999-01-01

    U.S. Army Tank-Automotive Command (TACOM) researchers are in the early stages of developing an autostereoscopic 3D holographic visual display system. The current holographic system is being used to conduct 3D visual perception studies.

  18. Monocular accommodation condition in 3D display types through geometrical optics

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    The eye fatigue or strain phenomenon in a 3D display environment is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, autostereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are considered for detailed calculation of the satisfaction level of monocular accommodation by means of geometrical optics. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high level of satisfaction of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (3D effect) with a monocular MF display is discussed.

  19. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of the real-time 3D holographic display developed last year, we developed a new concept of an autostereoscopic, multiview (64 views), wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (Acousto-Optic Deflector) driven by a piezo-electric transducer, which generates a variable standing acoustic wave on the crystal that acts as a phase grating. The DMD projects 64 points of view of the image onto the crystal cube in fast sequence. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected at a different angle of view. A holographic screen at a proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the proper size. VHDL firmware to render in real time (16 ms) 64 views (16 bit 4:2:2) of a CAD model (obj, dxf, or 3ds) and depth-map-encoded video images was developed for the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  20. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at near range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on the optical-sensor-based system, we proposed four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator enables interaction with autostereoscopic images using a bare finger. Furthermore, the proposed methods were verified on a 4-inch panel with embedded optical sensors.

  1. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    PubMed

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

    The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into its real and imaginary distributions, and then two amplitude-only spatial light modulators (A-SLMs) are employed to reconstruct the 3D images in real time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. The optical experiments further demonstrated that the proposed system provides continuous and wide depth cues, which frees the observer from the eye fatigue problem. The dynamic display capability was also tested in the experiments, and the results showed the possibility of true 3D interactive display.
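
    A minimal Python sketch of the real/imaginary decomposition step, as an illustration only and not necessarily the authors' exact encoding: a complex object wavefront U is split into its real and imaginary parts, each offset by a bias so the two patterns are non-negative and displayable on amplitude-only SLMs; recombining them with a 90-degree relative phase shift recovers U up to the bias term.

        import numpy as np

        def split_complex_field(U):
            real_part, imag_part = U.real, U.imag
            bias = max(-real_part.min(), -imag_part.min(), 0.0)
            return real_part + bias, imag_part + bias, bias

        U = np.exp(1j * np.random.uniform(0, 2 * np.pi, (256, 256)))   # toy wavefront
        h_re, h_im, bias = split_complex_field(U)
        recombined = (h_re - bias) + 1j * (h_im - bias)
        assert np.allclose(recombined, U)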

  2. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization-based stereoscopic 3D displays using multispectral Fourier optics viewing angle measurements. Full polarization analysis of the light emitted by the display over the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase shift filter in the display structure. In addition, a strong spectral dependence of the ellipticity and polarization degree is observed. These features come from the strong spectral dependence of the phase shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two glasses filters, the resulting luminance across each filter is computed for the left and right eye views. Monocular contrast for each eye and binocular contrast are computed in the observer space, and the Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as for autostereoscopic 3D displays, allowing direct comparison of performance.

  3. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

    We examined the ability for pilots to estimate traffic location in an Integrated Hazard Display, and how such estimations should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  4. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D autostereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate a sequence of images that simulates a camera travelling through the scene from a single integral image. The application of this method makes it possible to improve the quality of 3D display images and videos.

  5. Crosstalk minimization in autostereoscopic multiveiw 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eyeglasses, but can induce a reduced 3D effect and dizziness due to the crosstalk effect. Crosstalk-related problems degrade the 3D effect, clearness, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and the real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  6. Color Flat Panel Displays: 3D Autostereoscopic Brassboard and Field Sequential Illumination Technology.

    DTIC Science & Technology

    1997-06-01

    DTI has advanced autostereoscopic and field sequential color (FSC) illumination technologies for flat panel displays. Using a patented backlight...technology, DTI has developed prototype 3D flat panel color display that provides stereoscopic viewing without the need for special glasses or other... autostereoscopic viewing. Discussions of system architecture, critical component specifications and resultant display characteristics are provided. Also

  7. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormalities during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures based on the detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  8. Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array.

    PubMed

    Choi, Heejin; Park, Jae-Hyeung; Kim, Joohwan; Cho, Seong-Woo; Lee, Byoungho

    2005-10-17

    A wide-viewing-angle 3D/2D convertible display system with a thin structure is proposed that is able to display three-dimensional and two-dimensional images. With the use of a transparent display device in front of a conventional integral imaging system, it is possible to display planar images using the conventional system as a backlight source. The proposed method is verified experimentally and compared with the conventional one.

  9. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by employing double refraction in a uniaxial crystal. When the linearly polarized image from a projector passes through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone is shifted in the lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device from a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  10. Model-based 3D SAR reconstruction

    NASA Astrophysics Data System (ADS)

    Knight, Chad; Gunther, Jake; Moon, Todd

    2014-06-01

    Three-dimensional scene reconstruction with synthetic aperture radar (SAR) is desirable for target recognition and improved scene interpretability. The vertical aperture, which is critical for reconstructing 3D SAR scenes, is almost always sparsely sampled due to practical limitations, which creates an underdetermined problem. This paper explores 3D scene reconstruction using a convex model-based approach. The approach developed is demonstrated on 3D scenes, but can be extended to SAR reconstruction of sparsely sampled signals in the spatial and/or frequency domains. The model-based approach enables knowledge-aided image formation (KAIF) by incorporating spatial, aspect, and sparsity magnitude terms into the image reconstruction. The incorporation of these terms, which are based on prior scene knowledge, demonstrates improved results compared to traditional image formation algorithms. The SAR image formation problem is formulated as a second-order cone program (SOCP), and the results are demonstrated on 3D scenes using simulated data and data from the GOTCHA data collection. The model-based results are contrasted against traditional backprojected images.
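
    As an illustration of sparsity-regularized image formation only, and not the authors' SOCP/KAIF formulation, the following minimal Python sketch solves an l1-regularized least-squares problem with ISTA, where A stands in for the SAR projection operator and y for the sparsely sampled phase-history data.

        import numpy as np

        def ista(A, y, lam=0.1, n_iter=200):
            # Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
            x = np.zeros(A.shape[1], dtype=complex)
            step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
            for _ in range(n_iter):
                grad = A.conj().T @ (A @ x - y)
                z = x - step * grad
                mag = np.abs(z)
                # Complex soft-thresholding: keep the phase, shrink the magnitude.
                x = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * np.maximum(mag - step * lam, 0.0)
            return x

        A = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)   # toy operator
        truth = np.zeros(256, dtype=complex)
        truth[[10, 99, 200]] = [3, -2j, 1 + 1j]                          # sparse scene
        y = A @ truth
        x_hat = ista(A, y)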

  11. Generation of flat viewing zone in DFVZ autostereoscopic multiview 3D display by weighting factor

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ky-Hyuk

    2013-05-01

    A new method is introduced to reduce three crosstalk problems and the brightness variation in the 3D image by means of the dynamic fusion of viewing zones (DFVZ) using a weighting factor. The new method effectively generates a flat viewing zone at the center of the viewing zone. The new type of autostereoscopic 3D display gives less brightness variation in the 3D image when the observer moves.

  12. 3-D display and transmission technologies for telemedicine applications: a review.

    PubMed

    Liu, Qiang; Sclabassi, Robert J; Favalora, Gregg E; Sun, Mingui

    2008-03-01

    Three-dimensional (3-D) visualization technologies have been widely commercialized. These technologies have great potential in a number of telemedicine applications, such as teleconsultation, telesurgery, and remote patient monitoring. This work presents an overview of the state-of-the-art 3-D display devices and related 3-D image/video transmission technologies with the goal of enhancing their utilization in medical applications.

  13. Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher.

    PubMed

    Wang, Qiong-Hua; Ji, Chao-Chao; Li, Lei; Deng, Huan

    2016-01-11

    In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented by the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view integral imaging 3D display is developed, and it shows good performance.

  14. Comparison of 2D and 3D Displays and Sensor Fusion for Threat Detection, Surveillance, and Telepresence

    DTIC Science & Technology

    2003-05-19

    Camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D display, and technologies that take advantage of 3D and sensor fusion are discussed.

  15. Lamina 3D display: projection-type depth-fused display using polarization-encoded depth information.

    PubMed

    Park, Soon-gi; Yoon, Sangcheol; Yeom, Jiwoon; Baek, Hogil; Min, Sung-Wook; Lee, Byoungho

    2014-10-20

    In order to realize three-dimensional (3D) displays, various multiplexing methods have been proposed to add the depth dimension to two-dimensional scenes. However, most of these methods have faced challenges such as the degradation of viewing qualities, the requirement of complicated equipment, and large amounts of data. In this paper, we further developed our previous concept, polarization distributed depth map, to propose the Lamina 3D display as a method for encoding and reconstructing depth information using the polarization status. By adopting projection optics to the depth encoding system, reconstructed 3D images can be scaled like images of 2D projection displays. 3D reconstruction characteristics of the polarization-encoded images are analyzed with simulation and experiment. The experimental system is also demonstrated to show feasibility of the proposed method.

  16. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; one of them displays 3D images by ray reproduction, and we focused on it. This method needs many viewpoint images to achieve full parallax because it displays a different viewpoint image depending on the viewpoint. We propose to reduce wasted rays by limiting the projector's rays to the area around the viewer using a spinning mirror, and to increase the effectiveness of the display device, in order to achieve a full-parallax 3D display. The proposed method uses tracking of the viewer's eyes, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation of the proposed method, we confirmed the scanning range and the locus of ray movement in the horizontal direction. In addition, we confirmed the switching of viewpoints and the convergence performance of the rays in the vertical direction. Therefore, we confirmed that it is possible to realize full parallax.

  17. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  18. Development of high-frame-rate LED panel and its applications for stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Tsutsumi, M.; Yamamoto, R.; Kajimoto, K.; Suyama, S.

    2011-03-01

    In this paper, we report the development of a high-frame-rate (HFR) LED display. Full-color images are refreshed at 480 frames per second. In order to transmit such a high-frame-rate signal via a conventional 120-Hz DVI link, we have introduced a spatiotemporal mapping of the image signal. The LED image signal processor and the FPGAs in the LED modules have been reprogrammed so that four adjacent pixels in the input image are converted into four successive fields. The pitch of the LED panel is 20 mm. The developed 480-fps LED display is utilized for stereoscopic 3D display by use of a parallax barrier. The horizontal resolution of the viewed image decreases to one-half because of the parallax barrier. This degradation is critical for LED displays because their pitch is as much as tens of times that of other flat panel displays. We have conducted experiments to improve the quality of the image viewed through the parallax barrier. The improvement is based on interpolation by afterimages. It is shown that the HFR LED display provides detailed afterimages. Furthermore, the HFR LED display has been utilized for unconscious imaging, which provides a sensation of discovering conscious visual information from unconscious images.
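
    A minimal Python sketch of the module-side unpacking step, assuming the four adjacent pixels form a 2x2 block of the 120-Hz input frame that becomes four successive quarter-resolution fields on the 480-fps panel; the 2x2 grouping pattern and frame size are assumptions for illustration.

        import numpy as np

        def unpack_fields(input_frame):
            # input_frame: (H, W, 3) with even H and W; returns four fields,
            # one per position inside each 2x2 pixel block.
            return [input_frame[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

        frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
        fields = unpack_fields(frame)     # four (240, 320, 3) fields shown in sequence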

  19. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthodontic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, a physical (entity) tooth model that is manipulated to determine the optimum occlusal position, with the 3D-CT skeletal images (the 3D image display portion) that are simultaneously displayed in real time, it is possible to determine a mandibular position and posture that takes into account the improvement of both skeletal morphology and occlusal condition. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  20. Multivalent 3D Display of Glycopolymer Chains for Enhanced Lectin Interaction.

    PubMed

    Lin, Kenneth; Kasko, Andrea M

    2015-08-19

    Synthetic glycoprotein conjugates were synthesized through the polymerization of glycomonomers (mannose and/or galactose acrylate) directly from a protein macroinitiator. This design combines the multivalency of polymer structures with 3D display of saccharides randomly arranged around a central protein structure. The conjugates were tested for their interaction with mannose binding lectin (MBL), a key protein of immune complement. Increasing mannose number (controlled through polymer chain length) and density (controlled through comonomer feed ratio of mannose versus galactose) result in greater interaction with MBL. Most significantly, mannose glycopolymers displayed in a multivalent and 3D configuration from the protein exhibit dramatically enhanced interaction with MBL compared to linear glycopolymer chains with similar total valency but lacking 3D display. These findings demonstrate the importance of the 3D presentation of ligand structures for designing biomimetic materials.

  1. Compact multi-projection 3D display using a wedge prism

    NASA Astrophysics Data System (ADS)

    Park, Soon-gi; Lee, Chang-Kun; Lee, Byoungho

    2015-03-01

    We propose a compact multi-projection system based on the integral floating method with waveguide projection. Waveguide projection can reduce the projection distance by multiple foldings of the optical path inside the waveguide. The proposed system is composed of a wedge prism, which is used as the waveguide, multiple projection units, and an anisotropic screen made of a floating lens combined with a vertical diffuser. As the projected image propagates through the wedge prism, it is reflected at the surfaces of the prism by total internal reflection, and the final view image is created by the floating lens at the viewpoints. The position of the viewpoints is determined by the lens equation, and the interval between viewpoints is calculated from the magnification of the collimating lens and the interval between projection units. We believe that the proposed method can be useful for implementing a large-scale autostereoscopic 3D system with high-quality 3D images using projection optics. In addition, the reduced volume of the system will alleviate installation restrictions and widen the applications of multi-projection 3D displays.
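
    A minimal Python sketch of the viewpoint geometry described above, under a thin-lens assumption with illustrative values: the viewpoint distance follows from the lens equation, and the viewpoint interval is the projection-unit interval scaled by the magnification.

        def viewpoint_geometry(focal_length_mm, source_distance_mm, projector_interval_mm):
            # Thin-lens equation 1/f = 1/do + 1/di gives the viewpoint (image) distance.
            image_distance_mm = 1.0 / (1.0 / focal_length_mm - 1.0 / source_distance_mm)
            magnification = image_distance_mm / source_distance_mm
            viewpoint_interval_mm = magnification * projector_interval_mm
            return image_distance_mm, viewpoint_interval_mm

        print(viewpoint_geometry(focal_length_mm=200.0, source_distance_mm=250.0,
                                 projector_interval_mm=30.0))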

  2. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
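
    A minimal Python sketch of the 2D point-to-line-segment distance that underlies the similarity metric named above (matching image edge points against projected model segments); the function and its arguments are illustrative, not the authors' implementation.

        import numpy as np

        def point_to_segment_distance(p, a, b):
            # Distance from point p to the segment with endpoints a and b.
            p, a, b = map(np.asarray, (p, a, b))
            ab = b - a
            t = np.dot(p - a, ab) / np.dot(ab, ab)
            t = np.clip(t, 0.0, 1.0)            # clamp the projection to the segment
            return np.linalg.norm(p - (a + t * ab))

        print(point_to_segment_distance((3.0, 4.0), (0.0, 0.0), (5.0, 0.0)))  # -> 4.0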

  3. Analysis of multiple recording methods for full resolution multi-view autostereoscopic 3D display system incorporating VHOE

    NASA Astrophysics Data System (ADS)

    Hwang, Yong Seok; Cho, Kyu Ha; Kim, Eun Soo

    2014-03-01

    In this paper, we propose a multiple recording process of photopolymer for a full-color, multi-view autostereoscopic 3D display system based on a VHOE (Volume Holographic Optical Element). To overcome problems such as the low resolution and limited viewing zone of conventional glasses-free 3D displays, we designed the multiple recording conditions of the VHOE for multi-view display. It is verified that the VHOE can be made optically by angle-multiplexed recording of pre-designed multiple viewing zones, recorded uniformly through an optimized exposure-time scheduling scheme. Here, a VHOE-based backlight system for a 4-view stereoscopic display is implemented, in which the output beams from the light guide plate (LGP), acting as reference beams, are sequentially synchronized with the respective stereo images displayed on the LCD panel.

  4. The impact of computer display height and desk design on 3D posture during information technology work by young adults.

    PubMed

    Straker, L; Burgess-Limerick, R; Pollock, C; Murray, K; Netto, K; Coleman, J; Skoss, R

    2008-04-01

    Computer display height and desk design that allows forearm support are two critical design features of workstations for information technology tasks. However, there is currently no 3D description of head and neck posture at different computer display heights and no direct comparison with paper-based information technology tasks. There is also inconsistent evidence on the effect of forearm support on posture and no evidence on whether these features interact. This study compared the 3D head, neck and upper limb postures of 18 male and 18 female young adults whilst working under different display and desk design conditions. There was no substantial interaction between display height and desk design. Lower display heights increased head and neck flexion, with more spinal asymmetry when working with paper. The curved desk, designed to provide forearm support, increased scapula elevation/protraction and shoulder flexion/abduction.

  5. Quantitative measurement of eyestrain on 3D stereoscopic display considering the eye foveation model and edge information.

    PubMed

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-05-15

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye-tracking device. Our study is novel in the following four ways: first, the circular area in which a user's gaze position lies is defined based on the calculated gaze position and the gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with the higher probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), the frame cancellation effect (FCE), and the edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and an experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.
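
    As a rough illustration of the first step (choosing, inside the gaze-error circle, the pixel of maximum edge strength as the refined gaze point), a sketch using a simple gradient-magnitude edge map is shown below; the image array, gaze estimate, error radius, and edge measure are placeholders rather than the authors' exact choices.

```python
import numpy as np

def refine_gaze(gray_image, gaze_xy, radius_px):
    """Pick the pixel of maximum edge strength inside the circle around the estimated gaze point."""
    gy, gx = np.gradient(gray_image.astype(float))       # simple gradient-based edge-strength map
    edge = np.hypot(gx, gy)

    h, w = gray_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    edge[~inside] = -np.inf                               # ignore pixels outside the gaze-error circle

    y, x = np.unravel_index(np.argmax(edge), edge.shape)
    return x, y                                           # refined gaze position (pixel coordinates)

# Placeholder usage: a random image, an estimated gaze point, and a 20-pixel error radius.
frame = np.random.rand(480, 640)
print(refine_gaze(frame, gaze_xy=(320, 240), radius_px=20))
```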

  6. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance.

    PubMed

    Qiu, Jimmy; Hope, Andrew J; Cho, B C John; Sharpe, Michael B; Dickie, Colleen I; DaCosta, Ralph S; Jaffray, David A; Weersink, Robert A

    2012-10-21

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope, allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ∼2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue response.

  7. 3D measurement system based on computer-generated gratings

    NASA Astrophysics Data System (ADS)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to acquire the 3D profile of complex objects. The measurement principle is based on triangulation with digital fringe projection, and the fringes are generated entirely by computer. The four computer-generated fringes thus form the data source for phase-shifting 3D profilometry. The hardware includes a computer, video camera, projector, image grabber, and a VGA board with two ports (one linked to the screen, the other to the projector). The software consists of a grating projection module, an image grabbing module, a phase reconstruction module and a 3D display module. A software-based method for synchronizing grating projection and image capture is proposed. To handle the nonlinear error of the captured fringes, a compensation method based on pixel-to-pixel gray correction is introduced. In addition, least-squares phase unwrapping is used for phase reconstruction, with the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as the weight. The system adopts an algorithm from a Matlab toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm, and the execution time is 3-5 s per measurement.
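
    The method relies on four computer-generated phase-shifted fringes. The standard four-step phase-shifting relation, shown here as a common formulation assumed for illustration rather than quoted from the paper, recovers the wrapped phase as follows.

```latex
% Standard four-step phase-shifting relation (assumed shifts of 0, \pi/2, \pi, 3\pi/2):
% I_n(x,y) = A(x,y) + B(x,y)\cos[\phi(x,y) + n\pi/2], \quad n = 0,1,2,3
\[
  \phi(x,y) \;=\; \arctan\!\left(\frac{I_3(x,y) - I_1(x,y)}{I_0(x,y) - I_2(x,y)}\right)
\]
% The wrapped phase \phi is then unwrapped (here by weighted least squares with the
% LMAPDV weight) and converted to height through the triangulation geometry.
```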

  8. Crosstalk reduction in auto-stereoscopic projection 3D display system.

    PubMed

    Lee, Kwang-Hoon; Park, Youngsik; Lee, Hyoung; Yoon, Seon Kyu; Kim, Sung-Kyu

    2012-08-27

    In auto-stereoscopic multi-view 3D display systems, crosstalk and low resolution make it difficult to obtain a clear depth image with sufficient motion parallax. To solve these problems, we propose a projection-type auto-stereoscopic multi-view 3D display system that combines a hybrid optical system, consisting of a lenticular lens and a parallax barrier, with multiple projectors. Condensing the width of the projected unit-pixel image within each lenslet by means of the hybrid optics is the core concept of this proposal. As a result, the point crosstalk is improved by 53% and the resolution is increased up to five times.

  9. Virtual reality 3D headset based on DMD light modulators

    SciTech Connect

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  10. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier-optics viewing-angle system and an imaging video luminance meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, viewing-angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and the crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Simulating the display appearance from the viewing-angle measurements gives a better understanding of the origin of these crosstalk variations. Local imperfections such as scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.

  11. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  12. Multiview holographic 3D dynamic display by combining a nano-grating patterned phase plate and LCD.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Ye, Yan; Chen, Xiangyu; Chen, Linsen

    2017-01-23

    Limited by the refreshable data volume of commercial spatial light modulators (SLMs), electronic holography can hardly provide satisfactory 3D live video. Here we propose a holography-based multiview 3D display that separates the phase information of a light field from the amplitude information. In this paper, the phase information was recorded by a 5.5-inch 4-view phase plate with full coverage of pixelated nano-grating arrays. Because only the amplitude information needs to be updated, the refreshed data volume in a 3D video display is significantly reduced. A 5.5-inch TFT-LCD with a pixel size of 95 μm was used to modulate the amplitude information of the light field at a rate of 20 frames per second. To avoid crosstalk between viewing points, the spatial frequency and orientation of each nano-grating in the phase plate were fine-tuned, so that the transmitted light converged to the viewing points. The angular divergence was measured to be 1.02 degrees (FWHM) on average, slightly larger than the diffraction limit of 0.94 degrees. By refreshing the LCD, a series of animated sequential 3D images was dynamically presented at the 4 viewing points. The resolution of each view was 640 × 360. Images for the viewing points were well separated and no ghost images were observed. The resolution of the image and the refresh rate of the 3D dynamic display can easily be improved by employing another SLM. The recorded 3D videos show the great potential of the proposed holographic 3D display for use in mobile electronics.
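
    The abstract describes tuning the spatial frequency and orientation of each nano-grating so that its diffracted light converges on an assigned viewing point. A common way to express this, stated here as a generic first-order grating relation rather than a formula taken from the paper, is:

```latex
% Generic first-order grating relation, assuming normal incidence at wavelength \lambda:
% a nano-grating with period \Lambda and in-plane orientation \psi deflects light into
% the direction (\sin\theta_d\cos\psi,\ \sin\theta_d\sin\psi), where
\[
  \sin\theta_d \;=\; \frac{\lambda}{\Lambda}
\]
% so the per-pixel choice of (\Lambda, \psi) steers that pixel's light toward its
% assigned viewing point at the design viewing distance.
```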

  13. Tunable nonuniform sampling method for fast calculation and intensity modulation in 3D dynamic holographic display.

    PubMed

    Zhang, Zhao; Liu, Juan; Jia, Jia; Li, Xin; Han, Jian; Hu, Bin; Wang, Yongtian

    2013-08-01

    The heavy computational load of computer-generated holograms (CGHs) and the imprecise intensity modulation of 3D images are crucial problems in dynamic holographic display. A nonuniform sampling method is proposed to speed up CGH generation and to precisely modulate the reconstructed intensities of phase-only CGHs. The proposed method can properly eliminate redundant information; a 70% reduction in storage can be reached when it is combined with the novel look-up table method. Multi-grayscale modulation of the reconstructed 3D images is achieved successfully. Numerical simulations and optical experiments are performed, and both are in good agreement. It is believed that the proposed method can be used in 3D dynamic holographic display.

  14. Full-color autostereoscopic 3D display system using color-dispersion-compensated synthetic phase holograms.

    PubMed

    Choi, Kyongsik; Kim, Hwi; Lee, Byoungho

    2004-10-18

    A novel full-color autostereoscopic three-dimensional (3D) display system has been developed using color-dispersion-compensated (CDC) synthetic phase holograms (SPHs) on a phase-type spatial light modulator. To design the CDC phase holograms, we used a modified iterative Fourier transform algorithm with scaling constants and phase quantization level constraints. We obtained a high diffraction efficiency (~90.04%), a large signal-to-noise ratio (~9.57dB), and a low reconstruction error (~0.0011) from our simulation results. Each optimized phase hologram was synthesized with each CDC directional hologram for red, green, and blue wavelengths for full-color autostereoscopic 3D display. The CDC SPHs were composed and modulated by only one phase-type spatial light modulator. We have demonstrated experimentally that the designed CDC SPHs are able to generate full-color autostereoscopic 3D images and video frames very well, without any use of glasses.
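
    The design method named here is a modified iterative Fourier transform algorithm with scaling constants and phase-quantization constraints. A plain Gerchberg-Saxton-style IFTA with phase quantization, given below as a generic sketch (the authors' scaling constants and color-dispersion compensation are not reproduced), conveys the basic loop.

```python
import numpy as np

def quantize_phase(phase, levels):
    """Snap a phase map (radians) to a fixed number of quantization levels over [0, 2*pi)."""
    step = 2 * np.pi / levels
    return np.round(np.mod(phase, 2 * np.pi) / step) * step

def design_phase_hologram(target_amplitude, levels=8, iterations=50, seed=0):
    """Generic IFTA: find a phase-only hologram whose far field approximates |target_amplitude|."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        far_field = np.fft.fft2(np.exp(1j * phase))                      # propagate to the far field
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))  # impose the target amplitude
        hologram = np.fft.ifft2(far_field)                               # back-propagate
        phase = quantize_phase(np.angle(hologram), levels)               # phase-only + quantization constraint
    return phase

# Placeholder target: a bright square in the reconstruction plane.
target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0
hologram_phase = design_phase_hologram(target)
```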

  15. Optimal projector configuration design for 300-Mpixel multi-projection 3D display.

    PubMed

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Choi, Seo Young; Park, Du-Sik; Kim, Chang Yeong

    2013-11-04

    To achieve an immersive, natural 3D experience on a large screen, a 300-Mpixel multi-projection 3D display with a 100-inch screen and a 40° viewing angle has been developed. To increase the number of rays emanating from each pixel to 300 in the horizontal direction, three hundred projectors were used. Because the projector configuration is an important issue in generating a high-quality 3D image, the luminance characteristics were analyzed and the design was optimized to minimize the variation in the brightness of the projected images. The rows of the projector arrays were changed repeatedly according to a predetermined row interval, and the projectors were arranged at an equi-angular pitch toward a constant central point. As a result, we acquired very smooth motion-parallax images without discontinuity. There is no limit on the viewing distance, so natural 3D images can be viewed from 2 m to over 20 m.

  16. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning: the laser point cloud data serve as the basis, a digital ortho-photo map serves as an auxiliary source, and 3ds Max software is used as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  17. See-through multi-view 3D display with parallax barrier

    NASA Astrophysics Data System (ADS)

    Hong, Jong-Young; Lee, Chang-Kun; Park, Soon-gi; Kim, Jonghyun; Cha, Kyung-Hoon; Kang, Ki Hyung; Lee, Byoungho

    2016-03-01

    In this paper, we propose a see-through, parallax-barrier-type multi-view display using a transparent liquid crystal display (LCD). The transparency of the LCD is realized by detaching the backlight unit. The number of views in the proposed system is minimized to enlarge the aperture size of the parallax barrier, which determines the transparency. To compensate for the small number of viewpoints, an eye-tracking method is applied to provide a large number of views and vertical parallax. Through experiments, a prototype of the see-through autostereoscopic 3D display with a parallax barrier is implemented, and the system parameters of transmittance, crosstalk, and barrier-structure perception are analyzed.

  18. Depth cues in human visual perception and their realization in 3D displays

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Häussler, Ralf; Fütterer, Gerald; Leister, Norbert

    2010-04-01

    Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been demonstrated and refined, among them stereoscopic, multi-view, integral imaging, volumetric, and holographic types. Most of the current approaches utilize the conventional stereoscopic principle. But they all suffer from the inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eye. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception for both image quality and visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare the visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.

  19. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  20. 3D brain MR angiography displayed by a multi-autostereoscopic screen

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Ribeiro, Fádua H.; Lima, Fabrício O.; Serra, Rolando L.; Moreno, Alfredo B.; Li, Li M.

    2012-02-01

    Magnetic resonance angiography (MRA) can be used to examine blood vessels in key areas of the body, including the brain. In MRA, a powerful magnetic field, radio waves and a computer produce the detailed images. Physicians use the procedure on brain images mainly to detect atherosclerotic disease in the carotid artery of the neck, which may limit blood flow to the brain and cause a stroke, and to identify a small aneurysm or arteriovenous malformation inside the brain. Multi-autostereoscopic displays provide multiple views of the same scene, rather than just two as in autostereoscopic systems. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The use of 3D imaging in the medical field has proven to be a benefit to doctors when diagnosing patients. For different medical domains a stereoscopic display could be advantageous in terms of a better spatial understanding of anatomical structures, better perception of ambiguous anatomical structures, better performance of tasks that require a high level of dexterity, increased learning performance, and improved communication with patients or between doctors. In this work we describe a multi-autostereoscopic system and how to produce 3D MRA images to be displayed with it. We show results for brain MR angiography images and discuss how 3D visualization can help physicians reach a better diagnosis.

  1. A 3D integral imaging optical see-through head-mounted display.

    PubMed

    Hua, Hong; Javidi, Bahram

    2014-06-02

    An optical see-through head-mounted display (OST-HMD), which enables optical superposition of digital information onto the direct view of the physical world and maintains see-through vision to the real world, is a vital component in an augmented reality (AR) system. A key limitation of the state-of-the-art OST-HMD technology is the well-known accommodation-convergence mismatch problem caused by the fact that the image source in most of the existing AR displays is a 2D flat surface located at a fixed distance from the eye. In this paper, we present an innovative approach to OST-HMD designs by combining the recent advancement of freeform optical technology and microscopic integral imaging (micro-InI) method. A micro-InI unit creates a 3D image source for HMD viewing optics, instead of a typical 2D display surface, by reconstructing a miniature 3D scene from a large number of perspective images of the scene. By taking advantage of the emerging freeform optical technology, our approach will result in compact, lightweight, goggle-style AR display that is potentially less vulnerable to the accommodation-convergence discrepancy problem and visual fatigue. A proof-of-concept prototype system is demonstrated, which offers a goggle-like compact form factor, non-obstructive see-through field of view, and true 3D virtual display.

  2. Focus-tunable multi-view holographic 3D display using a 4k LCD panel

    NASA Astrophysics Data System (ADS)

    Lin, Qiaojuan; Sang, Xinzhu; Chen, Zhidong; Yan, Binbin; Yu, Chongxiu; Wang, Peng; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    A focus-tunable multi-view holographic three-dimensional (3D) display system with a 10.1-inch 4K liquid crystal display (LCD) panel is presented. In the proposed synthesizing method, the computer-generated hologram (CGH) does not require calculation of light diffraction. When multiple rays pass through one point of a 3D image and enter the pupil simultaneously, the eyes can focus on that point according to the depth cue. Benefiting from the holograms, dense multiple perspective viewpoints of the 3D object are recorded and combined into the CGH in a dense-super-view way, which makes two or more rays emitted from the same point of the reconstructed light field enter the pupil simultaneously. In general, a wavefront is converged to a viewpoint with the amplitude distribution of the multi-view image on the hologram plane and the phase distribution of a spherical wave converging to that viewpoint. Here, the wavefronts are calculated for all the multi-view images and then summed up to obtain the object wave on the hologram plane. Moreover, the reference light (converging light) is adopted to converge the central diffraction wave from the LCD into a common area at a short viewing distance. Experimental results show that the proposed holographic display can reproduce 3D objects with focus cues: accommodation and retinal blur.
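
    The synthesis described (the amplitude from each view image multiplied by a spherical phase converging to that viewpoint, then summed over views) can be written, under a paraxial assumption and with symbols chosen here for illustration, as:

```latex
% Illustrative paraxial form of the synthesis: the k-th view image supplies the amplitude
% A_k(x, y) on the hologram plane, multiplied by a spherical phase converging to the
% viewpoint at (x_k, y_k) a distance z_v away; the object wave is the sum over views.
\[
  H(x,y) \;=\; \sum_{k} A_k(x,y)\,
  \exp\!\left[-\,\frac{i\pi}{\lambda z_v}\Bigl((x-x_k)^2 + (y-y_k)^2\Bigr)\right]
\]
```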

  3. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies prove enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has taken place concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS) data, weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. Especially given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises whether and how the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Furthermore, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, in a synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  4. Practical resolution requirements of measurement instruments for precise characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Collomb-Patton, Véronique; Bignon, Thibault

    2014-03-01

    Different ways to evaluate the optical performance of auto-stereoscopic 3D displays are reviewed. Special attention is paid to crosstalk measurements, which can be performed by measuring either the precise angular emission at one or a few locations on the display surface, or the emission of the full display surface from very specific locations in front of the display. Using measurements made in both ways with different instruments on different auto-stereoscopic displays, we show that measurement instruments need to match the resolution of the human eye to obtain reliable results in both cases. Practical requirements in terms of angular resolution for viewing-angle measurement instruments and in terms of spatial resolution for imaging instruments are derived and verified on practical examples.

  5. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For the last few years, a renaissance of three-dimensional cinema has been observable. Even though stereoscopy has been quite popular over the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success; the same happened again in the 1950s and 1980s of the 20th century. At the latest with the commercial success of the 3D blockbuster "Avatar" in 2009, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "hype cycle". The everyday experience of evolving technology has conditioned consumers: the expectation that any technical improvement will preserve all previous properties cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: the presentation of an additional dimension forces concessions in relevant characteristics (i.e. resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (i.e. subjective discomfort, eye strain, spatial disorientation, a feeling of nausea). It will be verified that the 3D apparatus (3D glasses or 3D display) is also the source of these restrictions and a reason for the decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  6. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    NASA Astrophysics Data System (ADS)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

    This paper describes the design and construction of a low-cost, practical autostereoscopic display that does not require the viewer to wear special glasses, and that uses eye tracking to give a large degree of freedom to viewer (or viewers') movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an auto-stereoscopic display. The stereo image pair is shown on an ordinary liquid crystal display simultaneously, but in different columns of pixels. Controlling the display at the level of red-green-blue subpixels increases the accuracy of the light projection direction to less than 2 degrees without losing too much of the LCD's resolution; an eye-tracking system determines the correct angle at which to project the images toward the viewer's pupils, and an image processing system places the 3D image data in the correct R-G-B subpixels. A light-direction control accuracy of 1.6 degrees was achieved in practice. The 3D monitor is made simply by applying some simple optical materials to an ordinary LCD with normal resolution.

  7. Key factors in the design of a LED volumetric 3D display system

    NASA Astrophysics Data System (ADS)

    Lin, Yuanfang; Liu, Xu; Yao, Yi; Zhang, Xiaojie; Liu, Xiangdong; Lin, Fengchun

    2005-01-01

    Through careful consideration of the key factors that affect voxel attributes and image quality, a volumetric three-dimensional (3D) display system employing the rotation of a two-dimensional (2D) thin active panel was developed. It was designed as a lower-cost 3D visualization platform for experimentation and demonstration. Light-emitting diodes (LEDs) were arranged in a 256×64 dot matrix on a single surface of the panel, which was positioned symmetrically about the axis of rotation. The motor and the necessary supporting structures were located below the panel. LEDs with a response time of 500 ns, external dimensions of 1.6 mm×0.8 mm×0.6 mm, and horizontal and vertical spacings of 0.38 mm and 0.43 mm were adopted. The system is functional, providing 512×256×64, i.e. over 8 million, addressable voxels within a 292 mm×165 mm cylindrical volume at a refresh frequency in excess of 16 Hz. Due to persistence of vision, momentarily addressed voxels are perceived and fused into a 3D image. Many static and dynamic 3D scenes were displayed, which can be viewed directly from any position, with few occlusion zones and dead zones. Important depth cues such as binocular disparity and motion parallax are satisfied naturally.
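
    A back-of-envelope check of the stated numbers is shown below; treating the 512 in 512×256×64 as the number of angular slices per revolution is an assumption made for this sketch, not a figure taken from the record.

```python
# Consistency check of the reported voxel count and timing (assumed 512 angular slices per turn).
angular_slices = 512
panel_columns, panel_rows = 256, 64      # LED dot matrix on the rotating panel
voxels = angular_slices * panel_columns * panel_rows
print(voxels)                            # 8_388_608 -> "over 8 million addressable voxels"

refresh_hz = 16                          # stated minimum refresh frequency
slice_time_us = 1e6 / (refresh_hz * angular_slices)
print(round(slice_time_us, 1))           # ~122.1 microseconds per angular slice,
                                         # comfortably above the 500 ns LED response time
```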

  8. Full-color 3D display using binary phase modulation and speckle reduction

    NASA Astrophysics Data System (ADS)

    Matoba, Osamu; Masuda, Kazunobu; Harada, Syo; Nitta, Kouichi

    2016-06-01

    A 3D display system for full-color reconstruction using binary phase modulation is presented. The quality of the reconstructed objects is improved by optimizing the binary phase modulation and by accumulating speckle patterns obtained with different random phase distributions. The binary phase pattern is optimized by a modified Fresnel ping-pong algorithm. Numerical and experimental demonstrations of full-color reconstruction are presented.

  9. 3D display for enhanced tele-operation and other applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-04-01

    In this paper, we report on the use of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  10. 3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC).

    PubMed

    Navarro, H; Martínez-Cuenca, R; Saavedra, G; Martínez-Corral, M; Javidi, B

    2010-12-06

    Previously, we reported a digital technique for formation of real, non-distorted, orthoscopic integral images by direct pickup. However the technique was constrained to the case of symmetric image capture and display systems. Here, we report a more general algorithm which allows the pseudoscopic to orthoscopic transformation with full control over the display parameters so that one can generate a set of synthetic elemental images that suits the characteristics of the Integral-Imaging monitor and permits control over the depth and size of the reconstructed 3D scene.

  11. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  12. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a certain seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons `seismoglyphs' to suggest visual objects built from three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and ~real-time earthquake detection. Future work will involve adding phase detections, network triggers and near real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth

  13. Depth-of-Focus Affects 3D Perception in Stereoscopic Displays.

    PubMed

    Vienne, Cyril; Blondé, Laurent; Mamassian, Pascal

    2015-01-01

    Stereoscopic systems present binocular images on a planar surface at a fixed distance. They induce cues to flatness, indicating that the images are presented on a single surface and specifying the relative depth of that surface. The focus of this study is a second problem, which arises when the distance of a 3D object differs from the display distance. As binocular disparity must be scaled using an estimate of viewing distance, object depth can be affected through disparity scaling. Two previous experiments revealed that stereoscopic displays can affect depth perception due to conflicting accommodation and vergence cues at near distances. In this study, depth perception is evaluated for farther accommodation and vergence distances using a commercially available 3D TV. In Experiment 1, we evaluated depth perception of 3D stimuli at different vergence distances for a large pool of participants. We observed a strong effect of vergence distance that was bigger for younger than for older participants, suggesting that the effect of accommodation was reduced in participants with emerging presbyopia. In Experiment 2, we extended the 3D estimations by varying both the accommodation and vergence distances. We also tested the hypothesis that setting accommodation open-loop by constricting pupil size could decrease the contribution of focus cues to perceived distance. We found that depth constancy was affected by the accommodation and vergence distances and that the accommodation-distance effect was reduced with a larger depth of focus. We discuss these results with regard to the effectiveness of focus cues as a distance signal. Overall, these results highlight the importance of appropriate focus cues in stereoscopic displays at intermediate viewing distances.

  14. Characterizing the effects of droplines on target acquisition performance on a 3-D perspective display

    NASA Technical Reports Server (NTRS)

    Liao, Min-Ju; Johnson, Walter W.

    2004-01-01

    The present study investigated the effects of droplines on target acquisition performance on a 3-D perspective display in which participants were required to move a cursor into a target cube as quickly as possible. Participants' performance and coordination strategies were characterized using both Fitts' law and acquisition patterns of the 3 viewer-centered target display dimensions (azimuth, elevation, and range). Participants' movement trajectories were recorded and used to determine movement times for acquisitions of the entire target and of each of its display dimensions. The goodness of fit of the data to a modified Fitts function varied widely among participants, and the presence of droplines did not have observable impacts on the goodness of fit. However, droplines helped participants navigate via straighter paths and particularly benefited range dimension acquisition. A general preference for visually overlapping the target with the cursor prior to capturing the target was found. Potential applications of this research include the design of interactive 3-D perspective displays in which fast and accurate selection and manipulation of content residing at multiple ranges may be a challenge.
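
    For reference, the modified Fitts function mentioned builds on the usual Fitts' law form; the classic Shannon formulation, given here as background rather than as the authors' exact model, is:

```latex
% Fitts' law (Shannon formulation): movement time MT to a target at distance D with
% width W, with empirically fitted intercept a and slope b.
\[
  MT \;=\; a + b \,\log_2\!\left(\frac{D}{W} + 1\right)
\]
```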

  15. Technical solutions for a full-resolution autostereoscopic 2D/3D display technology

    NASA Astrophysics Data System (ADS)

    Stolle, Hagen; Olaya, Jean-Christophe; Buschbeck, Steffen; Sahm, Hagen; Schwerdtner, Armin

    2008-02-01

    Auto-stereoscopic 3D displays capable of high quality, full-resolution images for multiple users can only be created with time-sequential systems incorporating eye tracking and a dedicated optical design. The availability of high speed displays with 120Hz and faster eliminated one of the major hurdles for commercial solutions. Results of alternative display solutions from SeeReal show the impact of optical design on system performance and product features. Depending on the manufacturer's capabilities, system complexity can be shifted from optics to SLM with an impact on viewing angle, number of users and energy efficiency, but also on manufacturing processes. A proprietary solution for eye tracking from SeeReal demonstrates that the required key features can be achieved and implemented in commercial systems in a reasonably short time.

  16. The azimuth projection for the display of 3-D EEG data.

    PubMed

    Wu, Dan; Yao, Dezhong

    2007-12-01

    The electroencephalogram (EEG) is a scalp record of the neural electrical activity of the brain. There are many ways to display EEG data, such as on a projective plane or on a realistic head surface. In this work, one of the atlas projection methods, the azimuth conformal projection, was tested and recommended as a new way of producing a planar EEG display. The details of the method are given and numerically compared with the normal projective-plane display. The results indicate that the azimuth projection has many advantages: the transform is simple and convenient, and it preserves all the information. It shows all the information in the 3-D data within a projective plane without distinct shape change. Therefore, it can help to analyze the data effectively.
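
    For orientation, the conformal member of the azimuthal projection family is the stereographic projection; its standard form for a scalp point on a unit sphere is given below. This is the textbook formulation, assumed here for illustration, and not necessarily the exact variant used in the paper.

```latex
% Stereographic projection (the conformal azimuthal projection) of a unit-sphere point
% (x, y, z), projected from the pole (0, 0, -1) onto the tangent plane at the vertex (0, 0, 1):
\[
  X \;=\; \frac{2x}{1+z}, \qquad Y \;=\; \frac{2y}{1+z}
\]
% Angles between curves on the scalp sphere are preserved in the plane (conformality),
% at the cost of area distortion away from the vertex.
```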

  17. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information at the positions of the newly added triangle vertices. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any change in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is also compared with other existing 3D steganography algorithms.

  18. A guide for human factors research with stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Pinkus, Alan R.

    2015-05-01

    In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
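
    As an example of the disparity calculations such a guide covers, the on-screen image separation and the angular disparity for an object at distance z_o viewed on a screen at distance z_s with inter-pupillary distance I follow from standard geometry. The sketch below uses these symbols and illustrative numbers chosen here, not values from the paper.

```python
import math

def screen_separation_cm(ipd_cm, screen_cm, object_cm):
    """On-screen image separation; positive = uncrossed disparity (object behind the screen)."""
    return ipd_cm * (object_cm - screen_cm) / object_cm

def angular_disparity_arcmin(ipd_cm, screen_cm, object_cm):
    """Angular disparity relative to the screen plane, in arcminutes."""
    theta_screen = 2 * math.atan(ipd_cm / (2 * screen_cm))   # vergence angle at the screen
    theta_object = 2 * math.atan(ipd_cm / (2 * object_cm))   # vergence angle at the object
    return math.degrees(theta_screen - theta_object) * 60

# Illustrative values: 6.5 cm IPD, screen at 100 cm, object rendered 20 cm behind the screen.
print(round(screen_separation_cm(6.5, 100, 120), 2))         # ~1.08 cm uncrossed separation
print(round(angular_disparity_arcmin(6.5, 100, 120), 1))     # ~37.2 arcmin of uncrossed disparity
```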

  19. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat, tabletop surface and enables multiple viewers to observe raised 3D images from any angle at 360° Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents a particular ray that passes a corresponding point on a virtual object's surface and orients toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their perspectives because the images include binocular disparity. The entire principle is installed beneath the table, so the tabletop area remains clear. No ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  20. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties.

    PubMed

    Johnson, Paul V; Kim, Joohwan; Banks, Martin S

    2015-04-06

    Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks.

  1. High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display.

    PubMed

    Chang, Yu-Cheng; Jen, Tai-Hsiang; Ting, Chih-Hung; Huang, Yi-Pai

    2014-02-10

    A 2D/3D switchable and rotatable autostereoscopic display using a high-resistance liquid-crystal (Hi-R LC) lens array is investigated in this paper. Using high-resistance layers in an LC cell, a gradient electric-field distribution can be formed, which can provide a better lens-like shape of the refractive-index distribution. The advantages of the Hi-R LC lens array are its 2D/3D switchability, rotatability (in the horizontal and vertical directions), low driving voltage (~2 volts) and fast response (~0.6 second). In addition, the Hi-R LC lens array requires only a very simple fabrication process.

  2. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties

    PubMed Central

    Johnson, Paul V.; Kim, Joohwan; Banks, Martin S.

    2015-01-01

    Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks. PMID:25968758

  3. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are carried out independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The obtained coarse mesh is marked using a robust mesh-watermarking scheme. Inserting the signature into the coarse mesh yields high robustness against several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach yields efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.

  4. Sound localization with head movement: implications for 3-d audio displays

    PubMed Central

    McAnally, Ken I.; Martin, Russell L.

    2014-01-01

    Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows ranging in width of 2, 4, 8, 16, 32, or 64° of azimuth. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy. PMID:25161605

  5. Multi-user 3D display using a head tracker and RGB laser illumination source

    NASA Astrophysics Data System (ADS)

    Surman, Phil; Sexton, Ian; Hopf, Klaus; Bates, Richard; Lee, Wing Kai; Buckley, Edward

    2007-05-01

    A glasses-free (auto-stereoscopic) 3D display that serves several viewers who have freedom of movement over a large viewing region is described. It operates on the principle of employing head-position tracking to provide regions, referred to as exit pupils, that follow the positions of the viewers' eyes so that the appropriate left and right images are seen. A non-intrusive multi-user head tracker controls the light sources of a specially designed backlight that illuminates a direct-view LCD.

  6. Investigation of a 3D head-mounted projection display using retro-reflective screen.

    PubMed

    Héricz, Dalma; Sarkadi, Tamás; Lucza, Viktor; Kovács, Viktor; Koppa, Pál

    2014-07-28

    We propose a compact head-worn 3D display which provides glasses-free full motion parallax. Two picoprojectors placed on the viewer's head project images on a retro-reflective screen that reflects left and right images to the appropriate eyes of the viewer. The properties of different retro-reflective screen materials have been investigated, and the key parameters of the projection - brightness and cross-talk - have been calculated. A demonstration system comprising two projectors, a screen tracking system and a commercial retro-reflective screen has been developed to test the visual quality of the proposed approach.

  7. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  8. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  9. Spatial orientation in 3-D desktop displays: using rooms for organizing information.

    PubMed

    Colle, Herbert A; Reid, Gary B

    2003-01-01

    Understanding how spatial knowledge is acquired is important for spatial navigation and for improving the design of 3-D perspective interfaces. Configural spatial knowledge of object locations inside rooms is learned rapidly and easily (Colle & Reid, 1998), possibly because rooms afford local viewing in which objects are directly viewed or, alternatively, because of their structural features. The local viewing hypothesis predicts that the layout of objects outside of rooms also should be rapidly acquired when walls are removed and rooms are sufficiently close that participants can directly view and identify objects. It was evaluated using pointing and sketch map measures of configural knowledge with and without walls by varying distance, lighting levels, and observation instructions. Although within-room spatial knowledge was uniformly good, local viewing was not sufficient for improving spatial knowledge of objects in different rooms. Implications for navigation and 3-D interface design are discussed. Actual or potential applications of this research include the design of user interfaces, especially interfaces with 3-D displays.

  10. Mitral valve analysis using a novel 3D holographic display: a feasibility study of 3D ultrasound data converted to a holographic screen.

    PubMed

    Beitnes, Jan Otto; Klæboe, Lars Gunnar; Karlsen, Jørn Skaarud; Urheim, Stig

    2015-02-01

    The aim of the present study was to test the feasibility of analyzing 3D ultrasound data on a novel holographic display. An increasing number of mini-invasive procedures for mitral valve repair require more effective visualization to improve patient safety and speed of procedures. A novel 3D holographic display has been developed and may have the potential to guide interventional cardiac procedures in the near future. Forty patients with degenerative mitral valve disease were analyzed. All had complete 2D transthoracic (TTE) and transoesophageal (TEE) echocardiographic examinations. In addition, 3D TTE of the mitral valve was obtained and recordings were converted from the echo machine to the holographic screen. Visual inspection of the mitral valve during surgery or TEE served as the gold standard. A total of 240 segments were analyzed by two independent observers; 53 segments were prolapsing. The majority included P2 (31), with the remaining located at A2 (8), A3 (6), P3 (5), P1 (2) and A1 (1). The sensitivity and specificity of the 3D display were 87 and 99%, respectively (observer I), and 85 and 97%, respectively, for observer II. The accuracies and precisions were 96.7 and 97.9%, respectively (observer I), and 94.3 and 88.2% (observer II), and inter-observer agreement was 0.954 with Cohen's kappa 0.86. We were able to convert 3D ultrasound data to the holographic display. Very high accuracy and precision were shown, demonstrating the feasibility of analyzing 3D echo of the mitral valve on the holographic screen.
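
    For reference, the segment-level metrics quoted above are conventionally computed from true/false positive/negative counts as follows; the counts in the example call are placeholders, not the study's data.

```python
# Standard definitions of sensitivity, specificity, precision, accuracy and
# Cohen's kappa for a 2x2 agreement table.  Placeholder counts only.

def diagnostic_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / n
    # Chance-corrected agreement (Cohen's kappa).
    p_obs = accuracy
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, precision, accuracy, kappa

print(diagnostic_metrics(tp=40, fp=5, fn=10, tn=145))   # illustrative counts, not the study's
```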

  11. Subsampling models and anti-alias filters for 3-D automultiscopic displays.

    PubMed

    Konrad, Janusz; Agniel, Philippe

    2006-01-01

    A new type of three-dimensional (3-D) display recently introduced on the market holds great promise for the future of 3-D visualization, communication, and entertainment. This so-called automultiscopic display can deliver multiple views without glasses, thus allowing a limited "look-around" (correct motion parallax). Central to this technology is the process of multiplexing several views into a single viewable image. This multiplexing is a complex process involving irregular subsampling of the original views. If not preceded by low-pass filtering, it results in aliasing that leads to texture as well as depth distortions. In order to eliminate this aliasing, we propose to model the multiplexing process with lattices, find their parameters and then design optimal anti-alias filters. To this effect, we use multidimensional sampling theory and basic optimization tools. We derive optimal anti-alias filters for a specific automultiscopic monitor using three models: the orthogonal lattice, the nonorthogonal lattice, and the union of shifted lattices. In the first case, the resulting separable low-pass filter offers significant aliasing reduction, which is further improved by a hexagonal-passband low-pass filter for the nonorthogonal lattice model. A more accurate model is obtained using a union of shifted lattices, but due to the complex nature of the repeated spectra, practical filters designed in this case offer no additional improvement. We also describe a practical method to design finite-precision, low-complexity filters that can be implemented using modern graphics cards.
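
    The general idea, independent of the specific lattice-derived filters designed in the paper, is to low-pass filter each view before the subsampling implied by multiplexing. The sketch below uses a plain separable Gaussian as a stand-in anti-alias filter and a toy column-interleaved multiplexing pattern; both are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Low-pass filter each view, then interleave pixel columns by view index.
# The Gaussian and the column-interleaving pattern are simplifications; the
# paper designs optimal filters from the actual (slanted, irregular) lattice.

def multiplex(views, sigma=1.5):
    """views: list of HxWx3 arrays, one per display view."""
    out = np.zeros_like(views[0])
    for k, view in enumerate(views):
        view_lp = gaussian_filter(view, sigma=(sigma, sigma, 0))  # anti-alias filtering
        out[:, k::len(views), :] = view_lp[:, k::len(views), :]   # toy multiplexing
    return out

views = [np.random.rand(480, 640, 3) for _ in range(8)]   # stand-in view images
panel_image = multiplex(views)
```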

  12. Holographic display system for dynamic synthesis of 3D light fields with increased space bandwidth product.

    PubMed

    Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B

    2016-06-27

    We present a new method for the generation of a dynamic wave field with high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs) each having a comparably low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration which is composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLM and the gap between the active areas of multiple SLMs. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated.

  13. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction

    PubMed Central

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-01-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When the emitter density is low in each frame, emitters can be localized with nanometer resolution. However, when the emitter density rises and causes significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three-dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate emitters at high density causes poor temporal resolution of localization-based superresolution techniques and significantly limits their application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three-dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10 times compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image from 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
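
    Of the steps listed above, only the final 3D weighted-centroid refinement is simple enough to sketch here; the ℓ1-homotopy recovery itself is not shown, and the variable names are illustrative.

```python
import numpy as np

# Weighted-centroid refinement of one detected emitter: `grid` holds candidate
# positions on the reconstruction grid around the emitter, `weights` the
# intensities recovered by the (not shown) compressed-sensing step.

def weighted_centroid(grid, weights):
    """grid: (N, 3) candidate positions; weights: (N,) recovered intensities."""
    w = np.clip(np.asarray(weights, dtype=float), 0, None)
    return (grid * w[:, None]).sum(axis=0) / w.sum()

grid = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
weights = np.array([0.7, 0.2, 0.1])
print(weighted_centroid(grid, weights))   # sub-grid emitter position estimate
```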

  14. Anatomy-based 3D skeleton extraction from femur model.

    PubMed

    Gharenazifam, Mina; Arbabi, Ehsan

    2014-11-01

    Using 3D models of bones can highly improve accuracy and reliability of orthopaedic evaluation. However, it may impose excessive computational load. This article proposes a fully automatic method for extracting a compact model of the femur from its 3D model. The proposed method works by extracting a 3D skeleton based on the clinical parameters of the femur. Therefore, in addition to summarizing a 3D model of the bone, the extracted skeleton would preserve important clinical and anatomical information. The proposed method has been applied on 3D models of 10 femurs and the results have been evaluated for different resolutions of data.

  15. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users great enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be simultaneously displayed without degrading resolution or increasing display cost unacceptably. An alternative to multiple-view presentation is to measure the position of the observer with a viewer-tracking sensor. The viewer-tracking component is essential for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to optically project the content onto the left and right eyes of the user accurately, a real-time viewer-tracking technique that allows the user to move around freely when watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces of various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. The Edge Orientation Histogram (EOH) on Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from the OpenCV library developed by Intel is used to detect the human face, and detection accuracy is enhanced by rotating the image. The frame rate of the viewer tracking process can reach 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are evaluated in this study.
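
    A minimal version of the Haar-cascade face-detection step mentioned above, using the OpenCV Python bindings (assumed available as the opencv-python package), might look as follows; the EOH/Real-AdaBoost extension and the block-matching tracker from the study are not reproduced.

```python
import cv2

# Haar-cascade face detection per camera frame; the detected face centre
# would drive the view-rendering module of the autostereoscopic display.

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                 # tracking camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        # (x + w/2, y + h/2) is the tracked viewer position in image coordinates.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("viewer tracking", frame)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
```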

  16. Three-dimensional display modes for CT colonography: conventional 3D virtual colonoscopy versus unfolded cube projection.

    PubMed

    Vos, Frans M; van Gelder, Rogier E; Serlie, Iwo W O; Florie, Jasper; Nio, C Yung; Glas, Afina S; Post, Frits H; Truyen, Roel; Gerritsen, Frans A; Stoker, Jaap

    2003-09-01

    The authors compared a conventional two-directional three-dimensional (3D) display for computed tomography (CT) colonography with an alternative method they developed on the basis of time efficiency and surface visibility. With the conventional technique, 3D ante- and retrograde cine loops were obtained (hereafter, conventional 3D). With the alternative method, six projections were obtained at 90 degrees viewing angles (unfolded cube display). Mean evaluation time per patient with the conventional 3D display was significantly longer than that with the unfolded cube display. With the conventional 3D method, 93.8% of the colon surface came into view; with the unfolded cube method, 99.5% of the colon surface came into view. Sensitivity and specificity were not significantly different between the two methods. Agreements between observers were kappa = 0.605 for conventional 3D display and kappa = 0.692 for unfolded cube display. Consequently, the latter method enhances the 3D endoluminal display with improved time efficiency and higher surface visibility.

  17. Cloud Based Web 3d GIS Taiwan Platform

    NASA Astrophysics Data System (ADS)

    Tsai, W.-F.; Chang, J.-Y.; Yan, S. Y.; Chen, B.

    2011-09-01

    This article presents the status of the web 3D GIS platform, which has been developed in the National Applied Research Laboratories. The purpose is to develop a global earth observation 3D GIS platform for applications to disaster monitoring and assessment in Taiwan. For quick response to preliminary and detailed assessment after a natural disaster occurs, the web 3D GIS platform is useful to access, transfer, integrate, display and analyze the multi-scale huge data following the international OGC standard. The framework of cloud service for data warehousing management and efficiency enhancement using VMWare is illustrated in this article.

  18. Frames-Based Denoising in 3D Confocal Microscopy Imaging.

    PubMed

    Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis

    2005-01-01

    In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it with denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.
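
    A simplified sketch of the underlying operations (directional Sobel responses of a 3D stack, a magnitude threshold, and edge-preserving smoothing) is given below using SciPy; the paper's non-separable frame construction and its ensemble thresholding rule are not reproduced.

```python
import numpy as np
from scipy.ndimage import sobel, median_filter

# Directional Sobel responses of a 3D stack and a simple magnitude threshold;
# non-edge voxels are then smoothed with a median filter.  This is a toy
# edge-preserving denoiser, not the frame-based method of the paper.

def sobel_edge_mask(volume, k=2.0):
    """volume: 3D numpy array (z, y, x)."""
    gx = sobel(volume, axis=2)
    gy = sobel(volume, axis=1)
    gz = sobel(volume, axis=0)
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    return magnitude > (magnitude.mean() + k * magnitude.std())

volume = np.random.rand(32, 64, 64)                 # stand-in for a confocal stack
edges = sobel_edge_mask(volume)
denoised = np.where(edges, volume, median_filter(volume, size=3))
```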

  19. Cylindrical liquid crystal lenses system for autostereoscopic 2D/3D display

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Wei; Huang, Yi-Pai; Chang, Yu-Cheng; Wang, Po-Hao; Chen, Po-Chuan; Tsai, Chao-Hsu

    2012-06-01

    A liquid crystal lens system that can be easily controlled electrically for autostereoscopic 2D/3D switchable displays is proposed. The high-resistance liquid crystal (HR-LC) lens, which uses fewer control electrodes and a high-resistance layer coated between the control electrodes, is adopted in this paper. Compared with a traditional LC lens, the HR-LC lens provides a smooth electric-potential distribution within the LC layer under driving conditions. Hence, the proposed HR-LC lens has lower circuit complexity and a low driving voltage, and good optical performance can also be obtained. In addition, when combined with the proposed dual-directional overdriving method, which applies a large voltage to the cell, the switching time can be further reduced to around 2 seconds. It is believed that the LC lens system has high potential in the future.

  20. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    An industrial welded metal bellows is a flexible pipeline element. The most common form of bellows is built from pairs of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc welding operation may cause dangerous accidents and bad smells. Furthermore, during welding, workers have to observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding rod tip and of the bellows fixed on the jig, respectively. Welding while looking through a microscope makes workers tired. To improve a working environment in which workers must sit in an uncomfortable position, and to raise productivity, we introduced a 3D display and image processing. The main purpose of the system is not only to maximize the efficiency and accuracy of industrial production but also to meet safety standards through full automation of the work by remote control.

  1. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience.

  2. Full parallax viewing-angle enhanced computer-generated holographic 3D display system using integral lens array.

    PubMed

    Choi, Kyongsik; Kim, Joohwan; Lim, Yongjun; Lee, Byoungho

    2005-12-26

    A novel full parallax and viewing-angle enhanced computer-generated holographic (CGH) three-dimensional (3D) display system is proposed and implemented by combining an integral lens array and colorized synthetic phase holograms displayed on a phase-type spatial light modulator. For analyzing the viewing-angle limitations of our CGH 3D display system, we provide some theoretical background and introduce a simple ray-tracing method for 3D image reconstruction. From our method we can get continuously varying full parallax 3D images with a viewing angle of about ±6 degrees. To design the colorized phase holograms, we used a modified iterative Fourier transform algorithm, and we could obtain a high diffraction efficiency (~92.5%) and a large signal-to-noise ratio (~11 dB) from our simulation results. Finally we show some experimental results that verify our concept and demonstrate the full parallax viewing-angle enhanced color CGH display system.
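
    The abstract refers to a modified iterative Fourier transform algorithm; as background, a plain Gerchberg-Saxton-style IFTA for a phase-only hologram can be sketched as follows (this is the textbook baseline, not the authors' modified algorithm).

```python
import numpy as np

# Gerchberg-Saxton-style iterative Fourier transform algorithm: alternate
# between enforcing the target amplitude in the far field and the phase-only
# constraint in the hologram plane.

def ifta_phase_hologram(target_amplitude, iterations=50, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        far_field = target_amplitude * np.exp(1j * phase)   # impose target amplitude
        hologram = np.fft.ifft2(far_field)
        hologram = np.exp(1j * np.angle(hologram))           # phase-only constraint
        far_field = np.fft.fft2(hologram)
        phase = np.angle(far_field)                          # keep far-field phase
    return np.angle(hologram)

target = np.zeros((256, 256)); target[96:160, 96:160] = 1.0   # simple test pattern
phase_slm = ifta_phase_hologram(target)                        # phase to load on the SLM
```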

  3. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  4. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies scan larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  5. Review: Polymeric-Based 3D Printing for Tissue Engineering.

    PubMed

    Wu, Geng-Hsi; Hsu, Shan-Hui

    Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.

  6. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  7. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural grammar-based modeling, close-range photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature review shows that, to date, no complete comparative study of creating a full 3D city model from images is available. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparison is mainly based on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each package. The study concludes that every package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  8. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.

  9. Surgeon-Based 3D Printing for Microvascular Bone Flaps.

    PubMed

    Taylor, Erin M; Iorio, Matthew L

    2017-03-04

    Background Three-dimensional (3D) printing has developed as a revolutionary technology with the capacity to design accurate physical models in preoperative planning. We present our experience in surgeon-based design of 3D models, using home 3D software and printing technology for use as an adjunct in vascularized bone transfer. Methods Home 3D printing techniques were used in the design and execution of vascularized bone flap transfers to the upper extremity. Open source imaging software was used to convert preoperative computed tomography scans and create 3D models. These were printed in the surgeon's office as 3D models for the planned reconstruction. Vascularized bone flaps were designed intraoperatively based on the 3D printed models. Results Three-dimensional models were created for intraoperative use in vascularized bone flaps, including (1) medial femoral trochlea (MFT) flap for scaphoid avascular necrosis and nonunion, (2) MFT flap for lunate avascular necrosis and nonunion, (3) medial femoral condyle (MFC) flap for wrist arthrodesis, and (4) free fibula osteocutaneous flap for distal radius septic nonunion. Templates based on the 3D models allowed for the precise and rapid contouring of well-vascularized bone flaps in situ, prior to ligating the donor pedicle. Conclusions Surgeon-based 3D printing is a feasible, innovative technology that allows for the precise and rapid contouring of models that can be created in various configurations for pre- and intraoperative planning. The technology is easy to use, convenient, and highly economical as compared with traditional send-out manufacturing. Surgeon-based 3D printing is a useful adjunct in vascularized bone transfer. Level of Evidence Level IV.

  10. 3D Printing of Carbon Nanotubes-Based Microsupercapacitors.

    PubMed

    Yu, Wei; Zhou, Han; Li, Ben Q; Ding, Shujiang

    2017-02-08

    A novel 3D printing procedure is presented for fabricating carbon-nanotubes (CNTs)-based microsupercapacitors. The 3D printer uses a CNTs ink slurry with a moderate solid content and prints a stream of continuous droplets. Appropriate control of a heated base is applied to facilitate the solvent removal and adhesion between printed layers and to improve the structure integrity without structure delamination or distortion upon drying. The 3D-printed electrodes for microsupercapacitors are characterized by SEM, laser scanning confocal microscope, and step profiler. Effect of process parameters on 3D printing is also studied. The final solid-state microsupercapacitors are assembled with the printed multilayer CNTs structures and poly(vinyl alcohol)-H3PO4 gel as the interdigitated microelectrodes and electrolyte. The electrochemical performance of 3D printed microsupercapacitors is also tested, showing a significant areal capacitance and excellent cycle stability.

  11. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
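
    For context, the conventional (symmetric) mutual information between two images can be computed from their joint histogram as below; the asymmetric variant introduced in the paper differs and is not reproduced here.

```python
import numpy as np

# Mutual information of two images from a joint intensity histogram.

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

    In a 3D/2D registration loop, such a measure would be evaluated between the image reconstructed from the X-ray views and the resampled preoperative image, and maximized over the rigid-body transformation parameters.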

  12. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  13. 3D face recognition by projection-based methods

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Sankur, Bülent; Yemez, Yücel

    2006-02-01

    In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data driven, such as ICA-based or NNMF-based features. Other features are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely 3D point clouds, 2D depth images and 3D voxel representations. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.
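
    As an illustration of one of the projection-based global features mentioned (a DCT of a registered depth image with only the low-frequency block retained), a minimal sketch follows; the block size and image size are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.fft import dctn

# Global DCT feature: 2D DCT of a registered depth image, keeping the
# low-frequency 16x16 block as the feature vector.

def dct_features(depth_image, keep=16):
    coeffs = dctn(depth_image, norm="ortho")
    return coeffs[:keep, :keep].ravel()

depth = np.random.rand(128, 128)          # stand-in for a registered depth image
feature_vector = dct_features(depth)      # would be fed to the LDA matcher
```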

  14. 3D Printed Graphene Based Energy Storage Devices.

    PubMed

    Foster, Christopher W; Down, Michael P; Zhang, Yan; Ji, Xiaobo; Rowley-Neale, Samuel J; Smith, Graham C; Kelly, Peter J; Banks, Craig E

    2017-03-03

    3D printing technology provides a unique platform for rapid prototyping of numerous applications due to its ability to produce low cost 3D printed platforms. Herein, a graphene-based polylactic acid filament (graphene/PLA) has been 3D printed to fabricate a range of 3D disc electrode (3DE) configurations using a conventional RepRap fused deposition moulding (FDM) 3D printer, which requires no further modification/ex-situ curing step. To provide proof-of-concept, these 3D printed electrode architectures are characterised both electrochemically and physicochemically and are advantageously applied as freestanding anodes within Li-ion batteries and as solid-state supercapacitors. These freestanding anodes neglect the requirement for a current collector, thus offering a simplistic and cheaper alternative to traditional Li-ion based setups. Additionally, the ability of these devices to electrochemically produce hydrogen via the hydrogen evolution reaction (HER), as an alternative to the platinum based electrodes currently utilised within electrolysers, is also demonstrated. The 3DE demonstrates an unexpectedly high catalytic activity towards the HER (-0.46 V vs. SCE) upon the 1000th cycle; this potential is the closest observed to the desired value of platinum (-0.25 V vs. SCE). We subsequently suggest that 3D printing of graphene-based conductive filaments allows for the simple fabrication of energy storage devices with bespoke and conceptual designs to be realised.

  15. 3D Printed Graphene Based Energy Storage Devices

    PubMed Central

    Foster, Christopher W.; Down, Michael P.; Zhang, Yan; Ji, Xiaobo; Rowley-Neale, Samuel J.; Smith, Graham C.; Kelly, Peter J.; Banks, Craig E.

    2017-01-01

    3D printing technology provides a unique platform for rapid prototyping of numerous applications due to its ability to produce low cost 3D printed platforms. Herein, a graphene-based polylactic acid filament (graphene/PLA) has been 3D printed to fabricate a range of 3D disc electrode (3DE) configurations using a conventional RepRap fused deposition moulding (FDM) 3D printer, which requires no further modification/ex-situ curing step. To provide proof-of-concept, these 3D printed electrode architectures are characterised both electrochemically and physicochemically and are advantageously applied as freestanding anodes within Li-ion batteries and as solid-state supercapacitors. These freestanding anodes neglect the requirement for a current collector, thus offering a simplistic and cheaper alternative to traditional Li-ion based setups. Additionally, the ability of these devices to electrochemically produce hydrogen via the hydrogen evolution reaction (HER), as an alternative to the platinum based electrodes currently utilised within electrolysers, is also demonstrated. The 3DE demonstrates an unexpectedly high catalytic activity towards the HER (−0.46 V vs. SCE) upon the 1000th cycle; this potential is the closest observed to the desired value of platinum (−0.25 V vs. SCE). We subsequently suggest that 3D printing of graphene-based conductive filaments allows for the simple fabrication of energy storage devices with bespoke and conceptual designs to be realised. PMID:28256602

  16. 3D Printed Graphene Based Energy Storage Devices

    NASA Astrophysics Data System (ADS)

    Foster, Christopher W.; Down, Michael P.; Zhang, Yan; Ji, Xiaobo; Rowley-Neale, Samuel J.; Smith, Graham C.; Kelly, Peter J.; Banks, Craig E.

    2017-03-01

    3D printing technology provides a unique platform for rapid prototyping of numerous applications due to its ability to produce low cost 3D printed platforms. Herein, a graphene-based polylactic acid filament (graphene/PLA) has been 3D printed to fabricate a range of 3D disc electrode (3DE) configurations using a conventional RepRap fused deposition moulding (FDM) 3D printer, which requires no further modification/ex-situ curing step. To provide proof-of-concept, these 3D printed electrode architectures are characterised both electrochemically and physicochemically and are advantageously applied as freestanding anodes within Li-ion batteries and as solid-state supercapacitors. These freestanding anodes neglect the requirement for a current collector, thus offering a simplistic and cheaper alternative to traditional Li-ion based setups. Additionally, the ability of these devices to electrochemically produce hydrogen via the hydrogen evolution reaction (HER), as an alternative to the platinum based electrodes currently utilised within electrolysers, is also demonstrated. The 3DE demonstrates an unexpectedly high catalytic activity towards the HER (−0.46 V vs. SCE) upon the 1000th cycle; this potential is the closest observed to the desired value of platinum (−0.25 V vs. SCE). We subsequently suggest that 3D printing of graphene-based conductive filaments allows for the simple fabrication of energy storage devices with bespoke and conceptual designs to be realised.

  17. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D displays, the time difference between the left and right images makes a person perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images no longer match, and viewers become more prone to visual fatigue. This mismatch cannot be eliminated simply by precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate this vertical parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
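
    The core relation implied by the model is that an object moving vertically on the screen at speed v is seen with a vertical offset of roughly v·Δt between the two eyes' images, where Δt is the left/right field time difference. The toy calculation below uses assumed numbers purely for illustration.

```python
# Illustrative numbers only: a 120 Hz field-sequential display alternates eyes
# every 1/120 s, so an object moving vertically on the screen at v pixels/s is
# seen with a vertical offset of about v * dt between the two eyes' images.

field_rate_hz = 120.0                 # assumed left/right alternation rate
dt = 1.0 / field_rate_hz              # time difference between eye images (s)
v_screen = 600.0                      # assumed vertical on-screen speed (px/s)

vertical_parallax_px = v_screen * dt  # ~5 px of motion-induced vertical parallax
print(vertical_parallax_px)

# A motion-compensation step in the spirit of the paper would shift one eye's
# image by this amount before display so the perceived images match again.
```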

  18. 3D object recognition based on local descriptors

    NASA Astrophysics Data System (ADS)

    Jakab, Marek; Benesova, Wanda; Racev, Marek

    2015-01-01

    In this paper, we propose an enhanced method of 3D object description and recognition based on local descriptors using the RGB image and depth information (D) acquired by a Kinect sensor. Our main contribution is an extension of the SIFT feature vector with 3D information derived from the depth map (SIFT-D). We also propose a novel local depth descriptor (DD) that includes a 3D description of the keypoint neighborhood. The 3D descriptor thus defined can then enter the decision-making process. Two different approaches have been proposed, tested and evaluated in this paper. The first approach deals with an object recognition system using the original SIFT descriptor in combination with our novel 3D descriptor, where the proposed 3D descriptor is responsible for the pre-selection of objects. The second approach demonstrates object recognition using an extension of the SIFT feature vector with the local depth description. We present the results of two experiments for the evaluation of the proposed depth descriptors. The results show an improvement in the accuracy of the recognition system that includes the 3D local description compared with the same system without it. Our experimental object recognition system works in near real time.
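
    In the spirit of SIFT-D, a minimal sketch that appends simple depth statistics from each keypoint's neighbourhood to the 128-dimensional SIFT vector is shown below; the paper's exact depth descriptor (DD) is not reproduced, and OpenCV (>= 4.4, with SIFT available) plus a registered depth map are assumed.

```python
import cv2
import numpy as np

# Append mean/std of the local depth patch to each SIFT descriptor.  The
# statistics used here are a simplification of the paper's depth descriptor.

def sift_d(rgb, depth, radius=8):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    extended = []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        patch = depth[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
        stats = np.array([patch.mean(), patch.std()], dtype=np.float32)
        extended.append(np.concatenate([desc, stats]))
    return keypoints, np.array(extended)
```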

  19. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    NASA Astrophysics Data System (ADS)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. However, in pathless terrain, driving becomes challenging without stereoscopic perception. Stereoscopic imaging has been a well-known technique for a long time, with well-understood physical and physiological parameters. Recently, considerable commercial interest has been observed, especially in display techniques, and the commercial market is already flooded with systems based on goggle-aided 3D viewing techniques. However, their use is limited for military applications since goggles are not accepted by military users for several reasons. The proposed uncooled thermal imaging stereoscopic camera, with a geometrical resolution of 640x480 pixels, perfectly fits the autostereoscopic display with 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and the right eye of the observer. This allows stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  20. ePlant and the 3D Data Display Initiative: Integrative Systems Biology on the World Wide Web

    PubMed Central

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J.

    2011-01-01

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed “ePlant” (http://bar.utoronto.ca/eplant) – a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the “3D Data Display Initiative” (http://3ddi.org). PMID:21249219

  1. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    PubMed

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  2. Three-dimensional display based on dual parallax barriers with uniform resolution.

    PubMed

    Lv, Guo-Jiao; Wang, Jun; Zhao, Wu-Xiang; Wang, Qiong-Hua

    2013-08-20

    The 3D display based on a parallax barrier is a low-cost autostereoscopic display. However, the vertical and horizontal resolutions of the 3D images displayed on it become seriously nonuniform when the display has a large number of views, which worsens display quality. Therefore, a 3D display that consists of a 2D display panel and dual parallax barriers is proposed. With the 2D display panel, the proposed 3D display provides synthetic images with square pixel units in which the arrangement of pixels gives the 3D image uniform resolution. With the dual parallax barriers, the proposed 3D display shows the pixels in the square pixel units to different horizontal views. Therefore, this display has uniform 3D image resolution. A four-view prototype of the proposed 3D display is developed, and it provides uniform 3D resolution in the vertical and horizontal directions.

  3. Transpost: a novel approach to the display and transmission of 360 degrees-viewable 3D solid images.

    PubMed

    Otsuka, Rieko; Hoshino, Takeshi; Horry, Youichi

    2006-01-01

    Three-dimensional displays are drawing attention as next-generation devices. Some techniques which can reproduce three-dimensional images prepared in advance have already been developed. However, technology for the transmission of 3D moving pictures in real-time is yet to be achieved. In this paper, we present a novel method for 360-degrees viewable 3D displays and the Transpost system in which we implement the method. The basic concept of our system is to project multiple images of the object, taken from different angles, onto a spinning screen. The key to the method is projection of the images onto a directionally reflective screen with a limited viewing angle. The images are reconstructed to give the viewer a three-dimensional image of the object displayed on the screen. The display system can present images of computer-graphics pictures, live pictures, and movies. Furthermore, the reverse optical process of that in the display system can be used to record images of the subject from multiple directions. The images can then be transmitted to the display in real-time. We have developed prototypes of a 3D display and a 3D human-image transmission system. Our preliminary working prototypes demonstrate new possibilities of expression and forms of communication.

  4. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light propagation from display panel pixels through the PB slits to the viewing zone is simulated numerically. The simulation results are then compared with the corresponding experimental measurements and discussed. We demonstrate that the Fresnel number can be used as the main parameter for view-image quality evaluation to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 maximizes the brightness of the view images, while a set corresponding to a Fresnel number of 0.4∼0.5 minimizes image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitude and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
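
    For orientation, the standard Fresnel-number relation N = a²/(λL), with a the slit half-width, λ the wavelength and L the relevant propagation distance, can be inverted to pick a slit aperture that lands in the 0.4-0.7 window discussed above. Both the half-width convention and the numbers below (a barrier-to-panel gap of a few millimetres) are assumptions for illustration; the prototype's actual geometry is not given in the abstract.

```python
# Invert N = a**2 / (wavelength * L) for the slit half-width a.
# All numbers are illustrative assumptions, not the prototype's parameters.

wavelength = 550e-9          # green light, m
L = 4e-3                     # assumed barrier-to-panel gap, m
target_fresnel = 0.55        # inside the 0.4-0.7 window discussed above

a = (target_fresnel * wavelength * L) ** 0.5   # slit half-width, m
print(f"slit width ~ {2 * a * 1e6:.0f} micrometres")
```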

  5. Arctic Research Mapping Application 3D Geobrowser: Accessing and Displaying Arctic Information From the Desktop to the Web

    NASA Astrophysics Data System (ADS)

    Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.

    2009-12-01

    ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database providing a very valuable visual tool for science management and logistical planning, ascertaining who is doing what type of research and where. Users can “fly to” study sites, view receding glaciers in 3D and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place names and year and display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) new build 900 providing an updated application from build 500. AGX applications provide users the ability to integrate their own spatial data on various data layers provided by ArcOnline (http://resources.esri.com/arcgisonlineservices). Users can add many types of data including OGC web services without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that support Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.

  6. Powder-based 3D printing for bone tissue engineering.

    PubMed

    Brunello, G; Sivolella, S; Meneghello, R; Ferroni, L; Gardin, C; Piattelli, A; Zavan, B; Bressan, E

    2016-01-01

    Bone tissue engineered 3-D constructs customized to patient-specific needs are emerging as attractive biomimetic scaffolds to enhance bone cell and tissue growth and differentiation. The article outlines the features of the most common additive manufacturing technologies (3D printing, stereolithography, fused deposition modeling, and selective laser sintering) used to fabricate bone tissue engineering scaffolds. It concentrates, in particular, on the current state of knowledge concerning powder-based 3D printing, including a description of the properties of powders and binder solutions, the critical phases of scaffold manufacturing, and its applications in bone tissue engineering. Clinical aspects and future applications are also discussed.

  7. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper presents a new frame of reference for visual semantic based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification and retrieval, and do not deal exclusively with video retrieval. In this context, we explore the 3D-CBVR (Content Based Video Retrieval) concept for the first time, building on BOVW and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook and produce a histogram. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, retrieved results are ranked according to their index value and returned to the user as feedback. In order to handle a prodigious amount of data and achieve efficient retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system; the results show that the proposed work gives accurate results and also reduces time complexity. PMID:28003793

  8. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new framework for visual-semantic-based 3D video search and retrieval. Most existing 3D retrieval applications focus on shape analysis, such as object matching, classification and retrieval, rather than on video retrieval as a whole. In this context, we investigate the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine the bag-of-visual-words (BOVW) model and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extracting the local descriptors, the TB-PCT (Threshold-Based Predictive Clustering Tree) algorithm is used to generate the visual codebook and a histogram is produced. Matching is then performed with a soft weighting scheme and the L2 distance function. As a final step, the retrieved results are ranked according to their index values and returned to the user as feedback. To handle large amounts of data and to enable efficient retrieval, HDFS is incorporated into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it yields accurate results while reducing time complexity.
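
    The matching stage described above lends itself to a short illustration. The sketch below is a minimal, self-contained example of a soft-weighted bag-of-visual-words histogram compared with the L2 distance; the codebook is assumed to be precomputed (the paper's TB-PCT construction and the HDFS/MapReduce layer are not reproduced), and all names and parameter values are illustrative.

```python
# Minimal sketch of the matching stage: local descriptors are soft-assigned to
# a visual codebook, histograms are compared with the L2 distance, and results
# are ranked. `codebook` is assumed to be a (K, D) array of visual words.
import numpy as np

def soft_weighted_histogram(descriptors, codebook, n_neighbors=4, sigma=0.5):
    """Build a soft-weighted bag-of-visual-words histogram.

    Each local descriptor contributes to its n nearest visual words with a
    Gaussian weight on the descriptor-to-word distance (one common form of a
    'soft weighting scheme').
    """
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.zeros(len(codebook))
    for row in dists:
        nearest = np.argsort(row)[:n_neighbors]
        weights = np.exp(-row[nearest] ** 2 / (2.0 * sigma ** 2))
        hist[nearest] += weights / (weights.sum() + 1e-12)
    return hist / (hist.sum() + 1e-12)

def rank_videos(query_descriptors, database_histograms, codebook):
    """Rank database entries by L2 distance to the query histogram."""
    q = soft_weighted_histogram(query_descriptors, codebook)
    scores = [np.linalg.norm(q - h) for h in database_histograms]
    return np.argsort(scores)  # ascending distance = best matches first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(64, 32))  # hypothetical 64-word codebook
    db = [soft_weighted_histogram(rng.normal(size=(200, 32)), codebook) for _ in range(5)]
    ranking = rank_videos(rng.normal(size=(150, 32)), db, codebook)
    print("ranked indices:", ranking)
```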

  9. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in web pages, such as circles and rectangles. However, it can only depict 2D shapes, so web pages using classical SVG can only display 2D shapes on a screen. With the increasing development of stereoscopic 3D (S3D) technology, binocular 3D devices have come into wide use. Under this circumstance, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D web pages, which requires an extension of SVG. In this paper, we describe how to design and implement SVG shapes in stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, an important process in this project, is described as well. The modification of WebKit, made to support the generation of both the left view and the right view at the same time, is also discussed. As the results show, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the perception of depth and thickness, the shapes appear to be real 3D objects standing off the screen, rather than simple curves and lines as before.
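
    As a rough illustration of the stereoscopic extension described above, the sketch below generates a left and a right SVG view of a rectangle from hypothetical depth and thickness attributes by applying a horizontal parallax offset. The attribute names, the disparity formula, and the two-file output are assumptions for illustration only and do not reflect the paper's WebKit implementation.

```python
# A minimal sketch: given hypothetical `depth` and `thickness` attributes on an
# SVG shape, a left and a right view are generated by shifting the shape
# horizontally by a disparity derived from its depth (front and back faces for
# the thickness). All names and the disparity formula are illustrative.
SVG_TEMPLATE = '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">{}</svg>'

def disparity(depth, gain=0.1):
    """Positive depth pushes the shape behind the screen plane (assumed convention)."""
    return gain * depth

def stereo_rect(x, y, w, h, depth, thickness, fill="steelblue"):
    views = {}
    for eye, sign in (("left", +1), ("right", -1)):
        front = sign * disparity(depth)
        back = sign * disparity(depth + thickness)
        shapes = (
            f'<rect x="{x + back}" y="{y}" width="{w}" height="{h}" '
            f'fill="{fill}" fill-opacity="0.4"/>'   # back face
            f'<rect x="{x + front}" y="{y}" width="{w}" height="{h}" '
            f'fill="{fill}"/>'                      # front face
        )
        views[eye] = SVG_TEMPLATE.format(shapes)
    return views

if __name__ == "__main__":
    pair = stereo_rect(x=100, y=80, w=120, h=90, depth=30, thickness=15)
    for eye, doc in pair.items():
        with open(f"rect_{eye}.svg", "w") as f:
            f.write(doc)
    print("wrote rect_left.svg and rect_right.svg")
```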

  10. Does visual fatigue from 3D displays affect autonomic regulation and heart rhythm?

    PubMed

    Park, S; Won, M J; Mun, S; Lee, E C; Whang, M

    2014-02-15

    Most investigations into the negative effects of viewing stereoscopic 3D content on human health have addressed 3D visual fatigue and visually induced motion sickness (VIMS). Very few, however, have looked into changes in autonomic balance and heart rhythm, which are homeostatic factors that ought to be taken into consideration when assessing the overall impact of 3D video viewing on human health. In this study, 30 participants were randomly assigned to two groups: one group watching a 2D video (2D-group) and the other watching a 3D video (3D-group). The subjects in the 3D-group showed significantly increased heart rates (HR), indicating arousal, and an increased VLF/HF (Very Low Frequency/High Frequency) ratio (a measure of autonomic balance), compared to those in the 2D-group, indicating that autonomic balance was not stable in the 3D-group. Additionally, a more disordered heart rhythm pattern and an increasing heart rate (as determined by the R-peak to R-peak (RR) interval) were observed among subjects in the 3D-group compared to subjects in the 2D-group, further indicating that 3D viewing induces lasting activation of the sympathetic nervous system and interrupts autonomic balance.
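
    For readers unfamiliar with the reported indices, the sketch below shows one common way to compute the mean heart rate and a VLF/HF ratio from an R-peak time series (RR tachogram resampling followed by Welch's power spectral density). The band limits and the synthetic data are assumptions; this is not the study's analysis pipeline.

```python
# Minimal sketch of HR and VLF/HF computation from R-peak times. Band limits
# follow common HRV conventions (VLF 0.0033-0.04 Hz, HF 0.15-0.4 Hz); the RR
# series here is synthetic and only illustrates the pipeline.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hr_and_vlf_hf_ratio(r_peak_times_s, fs_resample=4.0):
    rr = np.diff(r_peak_times_s)            # RR intervals in seconds
    t_rr = r_peak_times_s[1:]
    mean_hr = 60.0 / rr.mean()              # beats per minute

    # Evenly resample the RR tachogram so Welch's method can be applied.
    t_uniform = np.arange(t_rr[0], t_rr[-1], 1.0 / fs_resample)
    rr_uniform = interp1d(t_rr, rr, kind="cubic")(t_uniform)
    rr_uniform -= rr_uniform.mean()

    f, psd = welch(rr_uniform, fs=fs_resample, nperseg=min(256, len(rr_uniform)))

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(psd[mask], f[mask])

    vlf = band_power(0.0033, 0.04)
    hf = band_power(0.15, 0.40)
    return mean_hr, vlf / hf

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rr = 0.8 + 0.05 * rng.standard_normal(600)   # ~75 bpm synthetic series
    peaks = np.cumsum(rr)
    hr, ratio = hr_and_vlf_hf_ratio(peaks)
    print(f"HR = {hr:.1f} bpm, VLF/HF = {ratio:.2f}")
```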

  11. Open-GL-based stereo system for 3D measurements

    NASA Astrophysics Data System (ADS)

    Boochs, Frank; Gehrhoff, Anja; Neifer, Markus

    2000-05-01

    A stereo system designed and used for the measurement of 3D coordinates within metric stereo image pairs is presented. First, the motivation for the development is given: the evaluation of stereo images. As the use and availability of digital metric images rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either very specialized ones, based on high-end graphics workstations with corresponding prices, or simple ones with restricted measuring functionality. A new concept is presented that avoids special high-end graphics hardware while still providing the required measuring functionality. The presented stereo system is based on PC hardware equipped with a graphics board and uses an object-oriented programming technique. The specific needs of a measuring system are described, along with the corresponding requirements the system has to meet. The key role of OpenGL is described: it supplies elementary graphics functions that are directly supported by graphics boards and thus provides the performance needed. Further important aspects such as modularity and hardware independence, and their value for the solution, are discussed. Finally, some sample functions concerned with image display and handling are presented in more detail.

  12. A time-sequential autostereoscopic 3D display using a vertical line dithering for utilizing the side lobes

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jin; Park, Minyoung

    2014-11-01

    In spite of the development of various autostereoscopic three-dimensional (3D) technologies, the inferior resolution of the realized 3D image remains a severe problem. To address it, time-sequential 3D displays have been developed to provide 3D images with higher resolution and have attracted much attention. Among them, a method using a directional backlight unit (DBLU) is an effective approach for liquid crystal displays (LCD) with higher frame rates such as 120 Hz. However, in the conventional time-sequential system, the insufficient frame rate results in a flicker problem, i.e., a recognizable fluctuation of image brightness. A dot dithering method can reduce that problem, but it was impossible to observe the 3D image in the side lobes because the image data and the directivity of the light rays from the DBLU do not match there. In this paper, we propose a new vertical line dithering method to expand the area for 3D image observation by utilizing the side lobes. Since the side lobes are located to the left and right of the center lobe, the image data on the LCD panel and the directivity of the light rays from the DBLU need to be arranged to have continuity in the horizontal direction. Although the 3D images observed in the side lobes are flipped, the utilization of the side lobes can increase the number of observers in the horizontal direction.

  13. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features. For this reason, they perform less effectively on models with similar global shape but different local shape. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structure feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system realizing this approach was built and tested on a database of 200 objects and achieved the expected results. The results show that the proposed algorithm improves the precision and the recall rate effectively.

  14. 3D microstructure modeling of compressed fiber-based materials

    NASA Astrophysics Data System (ADS)

    Gaiselmann, Gerd; Tötzke, Christian; Manke, Ingo; Lehnert, Werner; Schmidt, Volker

    2014-07-01

    A novel parametrized model that describes the 3D microstructure of compressed fiber-based materials is introduced. It allows the microstructure of realistically compressed gas-diffusion layers (GDL) to be generated virtually. Given a 3D microstructure of some fiber-based material as input, the model compresses the system of fibers in a uniaxial direction for arbitrary compression rates. The basic idea is to translate the fibers in the direction of compression according to a vector field which depends on the rate of compression and on the locations of fibers within the material. In order to apply the model to experimental 3D image data of fiber-based materials given for several compression states, an optimal vector field is estimated by simulated annealing. The model is applied to 3D image data of non-woven GDL in PEMFC gained by synchrotron tomography for different compression rates. The compression model is validated by comparing structural characteristics computed for experimentally compressed and virtually compressed microstructures, where two kinds of compression - using a flat stamp and a stamp with a flow-field profile - are applied. For both stamp types, a good agreement is found. Furthermore, the compression model is combined with a stochastic 3D microstructure model for uncompressed fiber-based materials. This allows compressed fiber-based microstructures to be generated efficiently in arbitrary volumes.
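
    The translation-by-vector-field idea can be illustrated with a very small sketch: fibres, represented as polylines of 3D points, are shifted along the compression axis by a displacement that depends on the compression rate and on each point's height. The particular displacement field below is an illustrative assumption, not the annealing-optimised field estimated in the paper.

```python
# Minimal sketch: fibres (polylines of 3D points) are translated along the
# compression axis by a simple height-dependent displacement field, so that the
# top surface ends up at (1 - compression_rate) * thickness.
import numpy as np

def compress_fibres(fibres, compression_rate, thickness):
    """fibres: list of (N_i, 3) arrays with z in [0, thickness];
    compression_rate: fraction of the total thickness removed (0..1)."""
    compressed = []
    for pts in fibres:
        pts = pts.copy()
        z = pts[:, 2]
        # Points higher up move further down (illustrative quadratic field);
        # a linear alternative would be displacement = -compression_rate * z.
        displacement = -compression_rate * z * (z / thickness)
        pts[:, 2] = np.clip(z + displacement, 0.0, (1.0 - compression_rate) * thickness)
        compressed.append(pts)
    return compressed

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fibres = [np.column_stack([np.linspace(0, 1, 50),
                               rng.uniform(0, 1, 50),
                               rng.uniform(0, 0.2, 50)]) for _ in range(10)]
    out = compress_fibres(fibres, compression_rate=0.3, thickness=0.2)
    print("max z before:", max(f[:, 2].max() for f in fibres))
    print("max z after :", max(f[:, 2].max() for f in out))
```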

  15. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way to automatically recognize, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. With this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.
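
    The classification step can be sketched compactly: a probe feature vector is coded over a dictionary of training vectors by an l1-regularised least-squares problem, and the class with the smallest reconstruction residual is selected. The sketch below uses scikit-learn's Lasso as a generic l1 solver and random feature vectors; it is a minimal illustration of sparse-representation-based classification, not the authors' Matlab implementation.

```python
# Minimal sketch of sparse-representation-based classification (SRC): code the
# probe over a dictionary of training feature vectors with an l1 penalty, then
# pick the class with the smallest class-wise reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_identify(probe, dictionary, labels, alpha=0.01):
    """dictionary: (D, N) matrix whose columns are training feature vectors."""
    # Solve min_x 0.5*||probe - D x||^2 + alpha*||x||_1 (standard l1 relaxation).
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(dictionary, probe)
    x = coder.coef_

    residuals = {}
    for label in np.unique(labels):
        x_class = np.where(labels == label, x, 0.0)  # keep only this class's coefficients
        residuals[label] = np.linalg.norm(probe - dictionary @ x_class)
    return min(residuals, key=residuals.get), residuals

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_subjects, per_subject, dim = 5, 4, 60
    centers = rng.normal(size=(n_subjects, dim))
    dictionary = np.column_stack([centers[s] + 0.1 * rng.normal(size=dim)
                                  for s in range(n_subjects) for _ in range(per_subject)])
    labels = np.repeat(np.arange(n_subjects), per_subject)
    probe = centers[2] + 0.1 * rng.normal(size=dim)
    pred, _ = src_identify(probe, dictionary, labels)
    print("predicted subject:", pred)
```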

  16. DCT and DST Based Image Compression for 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output of the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST) applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, with perceptual quality equivalent to JPEG2000.
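
    The transform front end lends itself to a compact sketch: a 1D DCT along the rows followed by a 1D DST along the columns, then a uniform quantisation of the coefficients. The arithmetic coding, minimisation encoding, and binary-search recovery stages are not reproduced, and the quantisation step size below is an arbitrary illustrative choice.

```python
# Minimal sketch of the two-transform front end: row-wise DCT, then column-wise
# DST, then simple uniform quantisation; the inverse pair reconstructs the image.
import numpy as np
from scipy.fft import dct, dst, idct, idst

def forward_transform(image):
    row_dct = dct(image.astype(float), type=2, axis=1, norm="ortho")  # DCT on each row
    return dst(row_dct, type=2, axis=0, norm="ortho")                 # DST on each column

def inverse_transform(coeffs):
    col = idst(coeffs, type=2, axis=0, norm="ortho")
    return idct(col, type=2, axis=1, norm="ortho")

def quantise(coeffs, step=10.0):
    return np.round(coeffs / step) * step

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = rng.uniform(0, 255, size=(64, 64))
    c = forward_transform(img)
    rec = inverse_transform(quantise(c, step=20.0))
    rmse = np.sqrt(np.mean((img - rec) ** 2))
    print(f"RMSE after quantisation: {rmse:.2f}")
```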

  17. Full-parallax 3D display from single-shot Kinect capture

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Dorado, Adrián.; Saavedra, Genaro; Martínez-Corral, Manuel; Shin, Donghak; Lee, Byung-Gook

    2015-05-01

    We propose the fusion of two concepts that are very successful in the area of 3D imaging and sensing. Kinect technology permits the registration, in real time but with low resolution, of accurate depth maps of large, opaque, diffusing 3D scenes. Our proposal consists of transforming the sampled depth map provided by the Kinect technology into an array of microimages whose position, pitch and resolution are in good accordance with the characteristics of an integral-imaging monitor. By projecting this information onto such a monitor we are able to produce 3D images with continuous perspective and full parallax.

  18. Expanding the degree of freedom of observation on depth-direction by the triple-separated slanted parallax barrier in autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Kwang-Hoon; Choe, Yeong-Seon; Lee, Dong-Kil; Kim, Yang-Gyu; Park, Youngsik; Park, Min-Chul

    2013-05-01

    An autostereoscopic multi-view 3D display system has narrower degrees of freedom in the observation directions, both horizontal and perpendicular to the display plane, than glasses-based types. In this paper, we propose a method that expands the width of the formed viewing zone in the depth direction while keeping the number of views in the horizontal direction, by using a triple segmented-slanted parallax barrier (TS-SPB) in a glasses-free 3D display. The validity of the proposal is verified by optical simulation based on an environment similar to an actual case. As a result, the maximum number of views to display in the horizontal direction becomes 2n, and the width of the viewing zone in the depth direction is increased by up to 3.36 times compared to the existing one-layered parallax barrier system.

  19. Voice and gesture-based 3D multimedia presentation tool

    NASA Astrophysics Data System (ADS)

    Fukutake, Hiromichi; Akazawa, Yoshiaki; Okada, Yoshihiro

    2007-09-01

    This paper proposes a 3D multimedia presentation tool that the user can manipulate intuitively through voice and gesture input alone, without using a standard keyboard or a mouse. The authors developed this system as a presentation tool to be used in a presentation room equipped with a large screen, such as an exhibition room in a museum, because in such a presentation environment it is better to use voice commands and gesture pointing input than a keyboard or a mouse. The system was developed using IntelligentBox, a component-based 3D graphics software development system. IntelligentBox already provides various types of 3D visible, reactive functional components called boxes, e.g., a voice input component and various multimedia handling components. IntelligentBox also provides a dynamic data linkage mechanism called slot-connection that allows the user to develop 3D graphics applications by combining existing boxes through direct manipulations on a computer screen. Using IntelligentBox, the 3D multimedia presentation tool proposed in this paper was likewise developed as combined components purely through direct manipulations on a computer screen. The authors had already proposed a 3D multimedia presentation tool using a stage metaphor and its voice input interface. This time, we extended the system to accept gesture input from the user in addition to voice commands. This paper explains the details of the proposed 3D multimedia presentation tool and especially describes its component-based voice and gesture input interfaces.

  20. Influence of limited random-phase of objects on the image quality of 3D holographic display

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images are performed, showing that objects with a limited phase range can effectively suppress the speckle noise in the reconstructed images. The method is expected to achieve high-quality reconstructed 2D or 3D images in the future because of its effectiveness and simplicity.
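
    A minimal numerical sketch of the underlying idea is given below: a band-limited reconstruction of a flat object is simulated for random phases restricted to a limited range, and the intensities of several realisations are averaged; the speckle contrast drops with both a smaller phase range and more averaged frames. The generic low-pass aperture model and all parameter values are assumptions, not the paper's holographic setup.

```python
# Minimal sketch: simulate a band-limited "reconstruction" of a flat object for
# several limited-range random-phase realisations and average the intensities;
# speckle contrast (std/mean) quantifies the residual speckle.
import numpy as np

def speckle_contrast(phase_range, n_frames, size=128, aperture_radius=20, seed=5):
    rng = np.random.default_rng(seed)
    amplitude = np.ones((size, size))                       # flat test object
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    aperture = (np.hypot(fx, fy) * size) < aperture_radius  # band limit = speckle source

    avg_intensity = np.zeros((size, size))
    for _ in range(n_frames):
        phase = rng.uniform(0.0, phase_range, size=(size, size))
        field = amplitude * np.exp(1j * phase)
        filtered = np.fft.ifft2(np.fft.fft2(field) * aperture)
        avg_intensity += np.abs(filtered) ** 2
    avg_intensity /= n_frames
    return avg_intensity.std() / avg_intensity.mean()

if __name__ == "__main__":
    for phase_range in (2 * np.pi, np.pi / 2):
        for frames in (1, 16):
            c = speckle_contrast(phase_range, frames)
            print(f"phase range {phase_range:.2f} rad, {frames:2d} frame(s): contrast {c:.3f}")
```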

  1. Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Park, Inkyu; Kim, Sung-Kyu

    2014-09-22

    We present methodologies for determining the optimum viewing distance (OVD) for a multi-view auto-stereoscopic 3D display system with a parallax barrier. The OVD can be efficiently determined as the viewing distance where statistical deviation of centers of quasi-linear distributions of illuminance at central viewing zones is minimized using local areas of a display panel. This method can offer reduced computation time because it does not use the entire area of the display panel during a simulation, but still secures considerable accuracy. The method is verified in experiments, showing its applicability for efficient optical characterization.
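
    The selection rule can be sketched in a few lines: for each candidate viewing distance, the centres of the central-viewing-zone illuminance distributions computed from a few local panel areas are collected, and the distance with the smallest spread of centres is taken as the OVD. The optical simulation that would produce those centres is not reproduced; the synthetic model below simply makes them converge at a known distance.

```python
# Minimal sketch of the OVD selection rule: pick the candidate distance at which
# the centres computed from different local panel areas deviate the least.
import numpy as np

def optimum_viewing_distance(candidate_distances, centers_per_area):
    """centers_per_area: (n_distances, n_local_areas) array of viewing-zone
    centre positions computed from each local area of the panel."""
    spread = np.std(centers_per_area, axis=1)      # deviation across local areas
    return candidate_distances[int(np.argmin(spread))], spread

if __name__ == "__main__":
    distances = np.linspace(400.0, 1200.0, 81)     # candidate distances in mm (illustrative)
    true_ovd = 800.0
    area_offsets = np.linspace(-1.0, 1.0, 5)       # local areas across the panel
    # Synthetic model: centres drift apart linearly away from the design distance.
    centers = np.array([[0.05 * off * (d - true_ovd) for off in area_offsets]
                        for d in distances])
    ovd, spread = optimum_viewing_distance(distances, centers)
    print(f"estimated OVD: {ovd:.0f} mm (minimum spread {spread.min():.3f})")
```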

  2. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

    This paper addresses the issue of 3D mesh indexing by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is proposed here. While intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation and computational complexity) with a spatial alignment procedure which leads to geometrically invariant behavior. Experiments have been carried out on the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a categorized ground-truth subset, are reported in terms of Bull's Eye Percentage (BEP) score and compared to those obtained by applying the MPEG-7 3D SD. It is shown that the O3DHTD outperforms the MPEG-7 3D SD by up to 28%.
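
    In the spirit of the descriptor discussed above, the sketch below accumulates area-weighted votes from mesh facets into a histogram over plane parameters (normal direction and signed distance from the mesh barycentre). The binning, the normalisation, and the absence of the optimal alignment step are simplifying assumptions; this is not the O3DHTD itself.

```python
# Hedged sketch in the spirit of a 3D Hough-transform shape descriptor: each
# triangle votes with its area into a (theta, phi, distance) histogram of the
# parameters of its supporting plane, relative to the area-weighted barycentre.
import numpy as np

def hough_3d_descriptor(vertices, faces, n_theta=8, n_phi=16, n_dist=8):
    v = vertices[faces]                                    # (F, 3, 3)
    normals = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    areas = 0.5 * np.linalg.norm(normals, axis=1)
    normals /= (2.0 * areas[:, None] + 1e-12)              # unit facet normals
    centroids = v.mean(axis=1)
    barycentre = (centroids * areas[:, None]).sum(axis=0) / areas.sum()

    # Plane parameters of every facet.
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))           # [0, pi]
    phi = np.arctan2(normals[:, 1], normals[:, 0]) % (2 * np.pi)   # [0, 2*pi)
    dist = np.einsum("ij,ij->i", normals, centroids - barycentre)
    d_max = np.abs(dist).max() + 1e-12

    hist = np.zeros((n_theta, n_phi, n_dist))
    it = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    ip = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    idist = np.minimum(((dist + d_max) / (2 * d_max) * n_dist).astype(int), n_dist - 1)
    np.add.at(hist, (it, ip, idist), areas)                 # area-weighted votes
    return (hist / hist.sum()).ravel()

if __name__ == "__main__":
    # A unit tetrahedron as a toy mesh.
    vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    d = hough_3d_descriptor(vertices, faces)
    print("descriptor length:", d.size, "sum:", round(d.sum(), 6))
```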

  3. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The proposed algorithm applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  4. Automated simulation and evaluation of autostereoscopic multiview 3D display designs by time-sequential and wavelength-selective filter barrier

    NASA Astrophysics Data System (ADS)

    Kuhlmey, Mathias; Jurk, Silvio; Duckstein, Bernd; de la Barré, René

    2015-09-01

    A novel simulation tool has been developed for spatially multiplexed 3D displays. The main purpose of our software is the design of 3D displays with optical image splitters, in particular lenticular grids or wavelength-selective barriers. The interaction of the image splitter with the ray-emitting display is modeled as a spatial light modulator generating the autostereoscopic image representation. Based on this simulation model, the interaction of the optoelectronic devices with the defined spatial planes is described. Time-sequential multiplexing makes it possible to increase the resolution of such 3D displays; for that reason, the program was extended with an intermediate data-accumulation component. The simulation program represents a stepwise, quasi-static functionality and control of the arrangement. It calculates and renders the whole display ray emission and luminance distribution at the viewing distance. The complexity of the results increases when wavelength-selective barriers are used. The images visible at the viewer's eye position are determined by simulation after every switching operation of the optical image splitter, and the resulting data are summed and evaluated in correspondence with the time sequence. The simulation was further expanded with a complex algorithm for the automated search and validation of possible solutions in the multi-dimensional parameter space. For the multiview 3D display design, a combination of ray tracing and 3D rendering was used: the emitted light intensity distribution of each subpixel is evaluated in terms of color, luminance and visible area for different content distributions on the subpixel plane. The analysis of the accumulated data delivers different solutions distinguished by the evaluation criteria.

  5. Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display.

    PubMed

    Zhong, Qing; Peng, Yifan; Li, Haifeng; Su, Chen; Shen, Weidong; Liu, Xu

    2013-07-01

    Both multiview and light-field reconstructions are proposed for a multiple-projector 3D display system. To compare the performance of the reconstruction algorithms in the same system, an optimized multiview reconstruction algorithm with sub-view-zones (SVZs) is proposed. The algorithm divides the conventional view zones of a multiview display into several SVZs and allocates more view images to them. The optimized reconstruction algorithm unifies the conventional multiview and light-field reconstruction algorithms, which makes it possible to quantify the difference in performance as multiview reconstruction is changed into light-field reconstruction. A prototype consisting of 60 projectors with an arc diffuser as its screen was constructed to verify the algorithms. Comparison of different SVZ configurations shows that light-field reconstruction provides large-scale 3D images with the smoothest motion parallax; thus it may provide better overall performance for large-scale 360° display than multiview reconstruction.

  6. Applications of Alginate-Based Bioinks in 3D Bioprinting

    PubMed Central

    Axpe, Eneko; Oyen, Michelle L.

    2016-01-01

    Three-dimensional (3D) bioprinting is on the cusp of permitting the direct fabrication of artificial living tissue. Multicellular building blocks (bioinks) are dispensed layer by layer and scaled for the target construct. However, only a few materials are able to fulfill the considerable requirements for suitable bioink formulation, a critical component of efficient 3D bioprinting. Alginate, a naturally occurring polysaccharide, is clearly the most commonly employed material in current bioinks. Here, we discuss the benefits and disadvantages of the use of alginate in 3D bioprinting by summarizing the most recent studies that used alginate for printing vascular tissue, bone and cartilage. In addition, other breakthroughs in the use of alginate in bioprinting are discussed, including strategies to improve its structural and degradation characteristics. In this review, we organize the available literature in order to inspire and accelerate novel alginate-based bioink formulations with enhanced properties for future applications in basic research, drug screening and regenerative medicine. PMID:27898010

  7. Perception-based shape retrieval for 3D building models

    NASA Astrophysics Data System (ADS)

    Zhang, Man; Zhang, Liqiang; Takis Mathiopoulos, P.; Ding, Yusi; Wang, Hao

    2013-01-01

    With the help of 3D search engines, a large number of 3D building models can be retrieved freely online. A serious disadvantage of most rotation-insensitive shape descriptors is their inability to distinguish between two 3D building models which differ in their main axes but appear similar when one of them is rotated. To resolve this problem, we present a novel upright-based normalization method which not only correctly rotates such building models, but also greatly simplifies and accelerates the extraction and matching of the building models' shape descriptors. Moreover, the abundance of architectural styles significantly hinders effective shape retrieval of building models. Our research has shown that buildings with different designs are not well distinguished by the widely recognized shape descriptors for general 3D models. Motivated by this observation, and to further improve the shape retrieval quality, a new building matching method is introduced and analyzed, based on concepts from perception theory and the well-known Light Field descriptor. The resulting normalized building models are first classified using the qualitative shape descriptors of Shell and Unevenness, which outline integral geometrical and topological information. These models are then ordered with the help of an improved quantitative shape descriptor, which we term the Horizontal Light Field Descriptor, since it assembles detailed shape characteristics. To accurately evaluate the proposed methodology, an enlarged building shape database extending previous well-known shape benchmarks was implemented, as well as a model retrieval system supporting inputs from 2D sketches and 3D models. Various experimental performance evaluation results have shown that, compared to previous methods, retrievals employing the proposed matching methodology are faster and more consistent with human recognition of spatial objects. In addition these performance

  8. Adipose tissue-derived stem cells display a proangiogenic phenotype on 3D scaffolds.

    PubMed

    Neofytou, Evgenios A; Chang, Edwin; Patlola, Bhagat; Joubert, Lydia-Marie; Rajadas, Jayakumar; Gambhir, Sanjiv S; Cheng, Zhen; Robbins, Robert C; Beygui, Ramin E

    2011-09-01

    Ischemic heart disease is the leading cause of death worldwide. Recent studies suggest that adipose tissue-derived stem cells (ASCs) can be used as a potential source for cardiovascular tissue engineering due to their ability to differentiate along the cardiovascular lineage and to adopt a proangiogenic phenotype. To understand better ASCs' biology, we used a novel 3D culture device. ASCs' and b.END-3 endothelial cell proliferation, migration, and vessel morphogenesis were significantly enhanced compared to 2D culturing techniques. ASCs were isolated from inguinal fat pads of 6-week-old GFP+/BLI+ mice. Early passage ASCs cells (P3-P4), PKH26-labeled murine b.END-3 cells or a co-culture of ASCs and b.END-3 cells were seeded at a density of 1 × 10(5) on three different surface configurations: (a) a 2D surface of tissue culture plastic, (b) Matrigel, and (c) a highly porous 3D scaffold fabricated from inert polystyrene. VEGF expression, cell proliferation, and tubulization, were assessed using optical microscopy, fluorescence microscopy, 3D confocal microscopy, and SEM imaging (n = 6). Increased VEGF levels were seen in conditioned media harvested from co-cultures of ASCs and b.END-3 on either Matrigel or a 3D matrix. Fluorescence, confocal, SEM, bioluminescence revealed improved cell, proliferation, and tubule formation for cells seeded on the 3D polystyrene matrix. Collectively, these data demonstrate that co-culturing ASCs with endothelial cells in a 3D matrix environment enable us to generate prevascularized tissue-engineered constructs. This can potentially help us to surpass the tissue thickness limitations faced by the tissue engineering community today.

  9. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way to automatically recognize, with high confidence, a person’s identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. With this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  10. Bright 3D display, native and integrated on-chip or system-level

    NASA Astrophysics Data System (ADS)

    Ellwood, Sutherland C., Jr.

    2011-06-01

    Photonica, Inc. has pioneered the use of magneto-optics and hybrid technologies in visual display systems to create arrays addressing high-speed, solid-state modulators up to 1K times faster than DMD/DLP, yielding high frame rates and extremely high net native resolution, allowing full duplication of right-eye and left-eye modulators at 1080p, DCI 2K, 4K and other specified resolution requirements. The technology enables high transmission (brightness) per frame. In one version, each integrated image-engine assembly processes binocular frames simultaneously, employing simultaneous right-eye/left-eye channels, either polarization-based or "Infitec" color-band-based channels, as well as pixel-vector-based systems. In another version, a multi-chip, massively parallel signal-processing architecture integrates pixel-signal channels to yield simultaneous binocular frames. This may be combined with on-chip integration. Channels are integrated either through optical elements on-chip, through a fiber network, or both.

  11. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience, partly because the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity between the left and the right views on a flat screen instead of by a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution through a subjective experiment. A search-task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  12. 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation.

    PubMed

    Yeom, Han-Ju; Kim, Hee-Jae; Kim, Seong-Bok; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Park, Jae-Hyeung

    2015-12-14

    We propose a bar-type three-dimensional holographic head mounted display using two holographic optical elements. Conventional stereoscopic head mounted displays may cause eye fatigue because the images presented to each eye are two-dimensional, which produces a mismatch between the accommodation and vergence responses of the eye. The proposed holographic head mounted display delivers three-dimensional holographic images to each eye, removing the eye fatigue problem. In this paper, we discuss the configuration of the bar-type waveguide head mounted display and analyze the aberration caused by the non-symmetric diffraction angles of the holographic optical elements, which are used as input and output couplers. Pre-distortion of the hologram is also proposed to compensate for the aberration. The experimental results show that the proposed head mounted display can present three-dimensional see-through holographic images to each eye with correct focus cues.

  13. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method comprises three stages: pre-processing, 3D modeling, and integration. First, abundant archaeological information is classified according to its historical and geographical information. Second, a 3D model library is built up using digital image processing and 3D modeling technology. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital heritage projects and enriches the content of digital archaeology.

  14. 3D web based learning of medical equipment employed in intensive care units.

    PubMed

    Cetin, Aydın

    2012-02-01

    In this paper, both synchronous and asynchronous web-based learning of 3D models of medical equipment used in hospital intensive care units is described, delivered through the Moodle course management system. The 3D medical equipment models were designed with 3ds Max 2008, converted to ASE format, and given interactivity for display with Viewpoint Enliven. The 3D models are embedded in a web page in HTML format with dynamic interactivity (rotating, panning and zooming by dragging the mouse over the image), and descriptive information is attached to each 3D model in XML format. A 15-hour pilot course was given to the technicians responsible for intensive care units at the Medical Devices Repairing and Maintenance Center (TABOM) of the Turkish High Specialized Hospital.

  15. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  16. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  17. 3D geometry-based quantification of colocalizations in multichannel 3D microscopy images of human soft tissue tumors.

    PubMed

    Wörz, Stefan; Sander, Petra; Pfannmöller, Martin; Rieker, Ralf J; Joos, Stefan; Mechtersheimer, Gunhild; Boukamp, Petra; Lichter, Peter; Rohr, Karl

    2010-08-01

    We introduce a new model-based approach for automatic quantification of colocalizations in multichannel 3D microscopy images. The approach uses different 3D parametric intensity models in conjunction with a model fitting scheme to localize and quantify subcellular structures with high accuracy. The central idea is to determine colocalizations between different channels based on the estimated geometry of the subcellular structures as well as to differentiate between different types of colocalizations. A statistical analysis was performed to assess the significance of the determined colocalizations. This approach was used to successfully analyze about 500 three-channel 3D microscopy images of human soft tissue tumors and controls.

  18. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede its development. In this paper we propose several factors affecting human perception of depth as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics, and scene movement characteristics. They play important roles in the viewer's visual perception: if many objects move at a certain velocity and the scene changes quickly, viewers feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (mean square error) of different blocks is considered both within frames and between frames for 3D stereoscopic videos. The depth frame is divided into a number of blocks, with adjacent blocks overlapping by half a block in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of these block values is then characterized by kurtosis, with emphasis on regions the human eye is likely to gaze at, and weight values are obtained from the normalized kurtosis. Applied to an individual depth frame, the method yields the spatial variation; applied between the current and previous frames, it yields the temporal and scene-movement variations. The three factors are linearly combined to obtain an objective assessment value for 3D videos directly, with the coefficients of the three factors estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
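
    The block-based weighting can be sketched as follows: a depth frame is divided into half-overlapping blocks, a per-block MSE is computed against a reference (the frame's own mean for the spatial term, the previous frame for the temporal term), and a weight is derived from the normalised kurtosis of the block values. The normalisation and the synthetic data are assumptions; the final linear combination and its regressed coefficients are not reproduced.

```python
# Minimal sketch of the block-based statistics: half-overlapping blocks, per-block
# MSE against a reference, and a weight derived from the kurtosis of those values.
import numpy as np
from scipy.stats import kurtosis

def blockwise_mse(depth, reference, block=16):
    step = block // 2                                 # half-block overlap
    h, w = depth.shape
    values = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            d = depth[y:y + block, x:x + block].astype(float)
            r = reference[y:y + block, x:x + block].astype(float)
            values.append(np.mean((d - r) ** 2))
    return np.array(values)

def factor_weight(block_values):
    k = kurtosis(block_values, fisher=False)          # heavy-tailed block statistics dominate
    return k / (1.0 + k)                              # simple normalisation into (0, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    prev = rng.uniform(0, 255, size=(128, 128))
    curr = prev + rng.normal(0, 5, size=(128, 128))
    curr[32:48, 32:48] += 80                          # a fast-moving object region
    spatial = blockwise_mse(curr, np.full_like(curr, curr.mean()))
    temporal = blockwise_mse(curr, prev)
    print("spatial weight :", round(factor_weight(spatial), 3))
    print("temporal weight:", round(factor_weight(temporal), 3))
```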

  19. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where a GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  20. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    PubMed

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
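
    The final arrangement step can be illustrated with a short sketch that interleaves pre-rendered view images into an integral-photography image, one elemental image per lens. The rectangular tile geometry is a simplifying assumption (the actual display maps 60 directions onto its own lens layout), and the GPU implementation is not reproduced.

```python
# Minimal sketch of the arrangement step: interleave V pre-rendered view images
# into an integral-photography image, one elemental image (tile) per lens.
import numpy as np

def views_to_integral_image(views, tile_shape):
    """views: array of shape (V, H, W, 3); tile_shape: (th, tw) with th*tw == V."""
    th, tw = tile_shape
    v, h, w, c = views.shape
    assert th * tw == v, "tile must hold exactly one pixel per view"
    out = np.zeros((h * th, w * tw, c), dtype=views.dtype)
    for i in range(th):
        for j in range(tw):
            # View (i, j) supplies pixel (i, j) inside every elemental image.
            out[i::th, j::tw] = views[i * tw + j]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    views = rng.integers(0, 256, size=(12, 60, 80, 3), dtype=np.uint8)  # 12 views as 3x4 tiles
    ip = views_to_integral_image(views, tile_shape=(3, 4))
    print("integral photography image shape:", ip.shape)                # (180, 320, 3)
```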

  1. Effective declutter of complex flight displays using stereoptic 3-D cueing

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Nold, Dean E.

    1994-01-01

    The application of stereo technology to new, integrated pictorial display formats has been effective in situational awareness enhancements, and stereo has been postulated to be effective for the declutter of complex informational displays. This paper reports a full-factorial workstation experiment performed to verify the potential benefits of stereo cueing for the declutter function in a simulated tracking task. The experimental symbology was designed similar to that of a conventional flight director, although the format was an intentionally confused presentation that resulted in a very cluttered dynamic display. The subject's task was to use a hand controller to keep a tracking symbol, an 'X', on top of a target symbol, another X, which was being randomly driven. In the basic tracking task, both the target symbol and the tracking symbol were presented as red X's. The presence of color coding was used to provide some declutter, thus making the task more reasonable to perform. For this condition, the target symbol was coded red, and the tracking symbol was coded blue. Noise conditions, or additional clutter, were provided by the inclusion of randomly moving, differently colored X symbols. Stereo depth, which was hypothesized to declutter the display, was utilized by placing any noise in a plane in front of the display monitor, the tracking symbol at screen depth, and the target symbol behind the screen. The results from analyzing the performances of eight subjects revealed that the stereo presentation effectively offsets the cluttering effects of both the noise and the absence of color coding. The potential of stereo cueing to declutter complex informational displays has therefore been verified; this ability to declutter is an additional benefit from the application of stereoptic cueing to pictorial flight displays.

  2. Handheld underwater 3D sensor based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Meng, Lichun; Ramm, Roland; Kühmstedt, Peter; Notni, Gunther

    2015-05-01

    A new handheld 3D surface scanner was developed especially for underwater use down to a diving depth of about 40 meters. In addition, the sensor is suitable for outdoor use under adverse weather conditions such as splashing water, wind, and poor illumination. The optical components of the sensor are two cameras and one projector. The measurement field is about 250 mm x 200 mm. The depth resolution is about 50 μm and the lateral resolution is approximately 150 μm. The weight of the scanner is about 10 kg. The housing was produced from synthetic powder using a 3D printing technique. The measurement time for one scan is between a third and half a second. The computer for measurement control and data analysis is integrated into the housing of the scanner. A display on the backside presents the results of each measurement graphically, so that the user can evaluate them in real time while recording the measurement data.

  3. 3-D measuring of engine camshaft based on machine vision

    NASA Astrophysics Data System (ADS)

    Qiu, Jianxin; Tan, Liang; Xu, Xiaodong

    2008-12-01

    Non-contact 3D measurement based on machine vision is introduced for precise camshaft measurement. Because current CCD-based three-dimensional measurement cannot meet the precision requirements of camshaft measurement, its measuring precision must be improved. In this paper, we put forward an improved measuring method: a multi-character match method based on a polygonal non-regular model, developed from the theory of corner extraction and corner matching. This method solves the problems of matching difficulty and low precision that arise when the coded marked point method and the self-character match method are used in the measuring process. A 3D measurement experiment on a camshaft based on the multi-character match method with the polygonal non-regular model shows that the average measuring precision of the point-cloud merge is improved to better than 0.04 mm. This measuring method can effectively increase the 3D measuring precision of the binocular CCD system.

  4. On the Uncertain Future of the Volumetric 3D Display Paradigm

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2017-06-01

    Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.

  5. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autosteroscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye to chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.

  6. Projection-slice theorem based 2D-3D registration

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
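
    The relation the method builds on can be checked numerically in a few lines: the 2D Fourier transform of a parallel projection of a volume equals the central slice of the volume's 3D Fourier transform, and a translation of the projection changes only the phase of its spectrum, which is why a similarity measure on Fourier magnitudes is translation invariant. The snippet below verifies both facts on random data; it is not the registration implementation itself.

```python
# Minimal numerical check of the projection-slice theorem and of the translation
# invariance of Fourier magnitudes, the two facts the registration builds on.
import numpy as np

rng = np.random.default_rng(8)
volume = rng.normal(size=(32, 32, 32))

# Parallel projection along the z axis and its 2D spectrum.
projection = volume.sum(axis=2)
spectrum_2d = np.fft.fft2(projection)

# Central slice (k_z = 0) of the 3D spectrum.
spectrum_3d = np.fft.fftn(volume)
central_slice = spectrum_3d[:, :, 0]

print("max |difference|:", np.max(np.abs(spectrum_2d - central_slice)))  # ~1e-12

# A (circular) shift of the projection only changes the phase of its spectrum,
# so a similarity measure on Fourier magnitudes is translation invariant.
shifted = np.roll(projection, shift=(5, -3), axis=(0, 1))
print("magnitude difference after shift:",
      np.max(np.abs(np.abs(np.fft.fft2(shifted)) - np.abs(spectrum_2d))))  # ~1e-12
```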

  7. A zero-footprint 3D visualization system utilizing mobile display technology for timely evaluation of stroke patients

    NASA Astrophysics Data System (ADS)

    Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent

    2010-03-01

    When a patient suspected of stroke is admitted to the emergency room, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. Ensuring the availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision, anywhere and anytime. We present a small pilot project to evaluate the use of mobile technologies, using devices such as iPhones, in evaluating stroke patients. The results of the evaluation as well as the challenges in setting up the system are also discussed.

  8. 3D shape measurement with phase correlation based fringe projection

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Munckelt, Christoph; Heinze, Matthias; Bräuer-Burchardt, Christian; Notni, Gunther

    2007-06-01

    Here we propose a method for 3D shape measurement by means of phase-correlation-based fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features. Correlation between the phase values of the images of two cameras is used for the coordinate calculation, in contrast to the sole use of phase values (phasogrammetry) or classical triangulation (phase values and image coordinates, i.e., camera raster values) for the determination of the coordinates. The method's main advantage is the insensitivity of the 3D coordinates to the absolute phase values. This prevents errors in the determination of the coordinates and improves robustness in areas with interreflection artefacts and inhomogeneous intensity. A technical advantage is that the accuracy of the 3D coordinates does not depend on the projection resolution; the achievable quality of the 3D coordinates can therefore be selectively improved by using high-quality camera lenses and can benefit from improvements in modern camera technology. The presented solution of stereo-based fringe projection with phase correlation makes a flexible, error-tolerant realization of measuring systems possible in different applications such as quality control, rapid prototyping, design, and CAD/CAM. In the paper the phase correlation method is described in detail. Furthermore, different realizations are shown, i.e., a mobile system for the measurement of large objects and an endoscope-like system for CAD/CAM in the dental industry.
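
    For rectified cameras, the phase-correlation idea can be sketched as a per-row correspondence search: each pixel's phase in camera 1 is matched to a sub-pixel column with the same phase in camera 2, yielding a disparity that a calibrated setup could triangulate. The synthetic, monotonic phase maps and the linear interpolation below are illustrative assumptions, not the authors' correlation procedure.

```python
# Minimal sketch: match equal phase values along corresponding rows of two
# rectified phase maps to obtain a sub-pixel disparity map.
import numpy as np

def phase_correspondence(phase1, phase2):
    """phase1, phase2: (H, W) unwrapped phase maps, monotonic along each row."""
    h, w = phase1.shape
    cols2 = np.arange(w)
    disparity = np.full((h, w), np.nan)
    for y in range(h):
        # np.interp maps a phase value in camera 1 to a sub-pixel column in camera 2.
        matched_col = np.interp(phase1[y], phase2[y], cols2, left=np.nan, right=np.nan)
        disparity[y] = np.arange(w) - matched_col
    return disparity

if __name__ == "__main__":
    h, w = 4, 200
    x = np.arange(w)
    phase2 = np.tile(0.2 * x, (h, 1))                # camera 2 phase, linear in column
    true_disp = 12.5
    phase1 = np.tile(0.2 * (x - true_disp), (h, 1))  # camera 1 sees the same phase shifted
    disp = phase_correspondence(phase1, phase2)
    print("mean recovered disparity:", np.nanmean(disp))   # ~12.5
```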

  9. Abdominal aortic aneurysm imaging with 3-D ultrasound: 3-D-based maximum diameter measurement and volume quantification.

    PubMed

    Long, A; Rouet, L; Debreuve, A; Ardon, R; Barbe, C; Becquemin, J P; Allaire, E

    2013-08-01

    The clinical reliability of 3-D ultrasound imaging (3-DUS) in quantification of abdominal aortic aneurysm (AAA) was evaluated. B-mode and 3-DUS images of AAAs were acquired for 42 patients. AAAs were segmented. A 3-D-based maximum diameter (Max3-D) and partial volume (Vol30) were defined and quantified. Comparisons between 2-D (Max2-D) and 3-D diameters and between orthogonal acquisitions were performed. Intra- and inter-observer reproducibility was evaluated. Intra- and inter-observer coefficients of repeatability (CRs) were less than 5.18 mm for Max3-D. Intra-observer and inter-observer CRs were respectively less than 6.16 and 8.71 mL for Vol30. The mean of normalized errors of Vol30 was around 7%. Correlation between Max2-D and Max3-D was 0.988 (p < 0.0001). Max3-D and Vol30 were not influenced by a probe rotation of 90°. Use of 3-DUS to quantify AAA is a new approach in clinical practice. The present study proposed and evaluated dedicated parameters. Their reproducibility makes the technique clinically reliable.

  10. A flexible fast 3D profilometry based on modulation measurement

    NASA Astrophysics Data System (ADS)

    Dou, Yunfu; Su, Xianyu; Chen, Yanfei; Wang, Ying

    2011-03-01

    This paper proposes a flexible, fast profilometry method based on modulation measurement. Two orthogonal gratings are vertically projected onto the object surface through a beam splitter, and the measured object is placed between the imaging planes of the two gratings. The image of the object surface modulated by the orthogonal gratings is then captured by a CCD camera in the same direction as the grating projection. This image is processed by a sequence of operations: Fourier transform, spatial-frequency filtering and inverse Fourier transform. Using the modulation distributions of the two grating patterns, we can reconstruct the 3D shape of the object. In the measurement process only one fringe pattern needs to be captured, so the method is faster than conventional modulation measurement profilometry (MMP) while retaining its advantages. The paper presents the principle of the method, the setup of the measurement system, some simulations and preliminary experimental results. The simulation and experimental results show that the method can recover the 3D shape of a complex object quickly and with reasonable accuracy. Because only one fringe pattern is needed, the method has promising application prospects for real-time acquisition and dynamic measurement of 3D data of complex objects.
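
    A minimal sketch of the Fourier-filtering step is given below: the FFT of the single fringe image is taken, one carrier lobe of a grating is isolated with a rectangular window, and the magnitude of the inverse transform gives the local modulation of that grating. The carrier location, the window half-width and the function name are assumptions for illustration; the actual system derives them from the projection geometry.

```python
import numpy as np

def modulation_map(fringe_image, carrier, half_width):
    """Extract the modulation distribution of one grating from a single fringe
    image by Fourier filtering: FFT -> isolate one carrier lobe -> inverse FFT
    -> take the magnitude. `carrier` is the (fy, fx) offset of the fundamental
    lobe from the spectrum centre, in FFT index units."""
    F = np.fft.fftshift(np.fft.fft2(fringe_image))
    h, w = fringe_image.shape
    cy, cx = h // 2 + carrier[0], w // 2 + carrier[1]
    mask = np.zeros_like(F)
    mask[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = 1.0
    # The magnitude of the isolated lobe's inverse transform is half the local
    # fringe modulation, hence the factor of 2
    analytic = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return 2.0 * np.abs(analytic)
```

    Repeating this for the second, orthogonal carrier yields the two modulation distributions from which the height is recovered.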

  11. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  12. An endoscopic 3D scanner based on structured light.

    PubMed

    Schmalz, Christoph; Forster, Frank; Schick, Anton; Angelopoulou, Elli

    2012-07-01

    We present a new endoscopic 3D scanning system based on Single Shot Structured Light. The proposed design makes it possible to build an extremely small scanner. The sensor head contains a catadioptric camera and a pattern projection unit. The paper describes the working principle and calibration procedure of the sensor. The prototype sensor head has a diameter of only 3.6 mm and a length of 14 mm. It is mounted on a flexible shaft. The scanner is designed for tubular cavities and has a cylindrical working volume of about 30 mm length and 30 mm diameter. It acquires 3D video at 30 frames per second and typically generates approximately 5000 3D points per frame. By design, the resolution varies over the working volume, but is generally better than 200 μm. A prototype scanner has been built and is evaluated in experiments with phantoms and biological samples. The recorded average error on a known test object was 92 μm.

  13. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  14. Automated System for Holographic Lightfield 3D Display Metrology (HL3DM)

    DTIC Science & Technology

    2015-04-01

    for example an array of lenses. Figure 3 shows examples of rays that are hitting two types of surfaces: (a) Diffused surface (left side), which is...color-photometer that has focusing optics. 2.8.4 Array Detectors (Cameras). (a) Photometric cameras will be the most useful instrument for the type of...displays that we intend to measure. (b) This includes cameras with multiple sensor arrays, of any of the commercial technologies (CCD, CMOS, etc

  15. Surface topography study of prepared 3D printed moulds via 3D printer for silicone elastomer based nasal prosthesis

    NASA Astrophysics Data System (ADS)

    Abdullah, Abdul Manaf; Din, Tengku Noor Daimah Tengku; Mohamad, Dasmawati; Rahim, Tuan Noraihan Azila Tuan; Akil, Hazizan Md; Rajion, Zainul Ahmad

    2016-12-01

    Conventional prosthesis fabrication depends highly on the hand skills and creativity of the laboratory technologist. Developments in 3D printing technology offer great help in fabricating affordable and fast, yet esthetically acceptable, prostheses. This study was conducted to explore the potential of 3D printed moulds for indirect silicone-elastomer-based nasal prosthesis fabrication. Moulds were designed using computer aided design (CAD) software (Solidworks, USA) and converted into the standard tessellation language (STL) file format. Three moulds with layer thicknesses of 0.1, 0.2 and 0.3 mm were printed utilizing a polymer-filament-based 3D printer (Makerbot Replicator 2X, Makerbot, USA). One further mould was printed utilizing a liquid-resin-based 3D printer (Objet 30 Scholar, Stratasys, USA) as a control. The printed moulds were then used to fabricate maxillofacial silicone specimens (n=10 per mould). A surface profilometer (Surfcom Flex, Accretech, Japan), a digital microscope (KH77000, Hirox, USA) and a scanning electron microscope (Quanta FEG 450, Fei, USA) were used to measure the surface roughness as well as the topological properties of the fabricated silicone. One-way ANOVA was employed to compare the surface roughness of the fabricated silicone elastomer. The results demonstrated significant differences in the surface roughness of the fabricated silicone (p<0.01). Further post hoc analysis also revealed significant differences between silicone fabricated using the different 3D printed moulds (p<0.01). A 3D printed mould was successfully prepared and characterized. With surface topography that could be further enhanced, and an inexpensive and rapid mould fabrication technique, the polymer-filament-based 3D printer has potential for indirect silicone-elastomer-based nasal prosthesis fabrication.

  16. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  17. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright

    PubMed Central

    Kane, David; Held, Robert T.; Banks, Martin S.

    2012-01-01

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer’s inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer’s head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer’s head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays. PMID:24058723

  18. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright.

    PubMed

    Kane, David; Held, Robert T; Banks, Martin S

    2012-02-09

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer's inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer's head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer's head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays.

  19. Improved Second-Generation 3-D Volumetric Display System. Revision 2

    DTIC Science & Technology

    1998-10-01

    2 mm 2 Watt The factor of 0.7 is used here to account for the 514-nm laser wavelength instead of the 555-nm peak of the photopic curve. For a spot...lasers over a 40-minute time period. The spikes in the curves are due to a defective power meter and are not real. The Coherent had virtually single...visible three-dimensional images. A primary element in the helical display system is a rotating helically curved screen, referred to as the "helix

  20. Depth-expression characteristics of multi-projection 3D display systems [invited].

    PubMed

    Park, Soon-gi; Hong, Jong-Young; Lee, Chang-Kun; Miranda, Matheus; Kim, Youngmin; Lee, Byoungho

    2014-09-20

    A multi-projection display consists of multiple projection units. Because of the large amount of data, a multi-projection system shows large, high-quality images. According to the projection geometry and the optical configuration, multi-projection systems show different viewing characteristics for generated three-dimensional images. In this paper, we analyzed the various projection geometries of multi-projection systems, and explained the different depth-expression characteristics for each individual projection geometry. We also demonstrated the depth-expression characteristic of an experimental multi-projection system.

  1. The crystal structure of the dimeric colicin M immunity protein displays a 3D domain swap.

    PubMed

    Usón, Isabel; Patzer, Silke I; Rodríguez, Dayté Dayana; Braun, Volkmar; Zeth, Kornelius

    2012-04-01

    Bacteriocins are proteins secreted by many bacterial cells to kill related bacteria of the same niche. To avoid their own suicide through reuptake of secreted bacteriocins, these bacteria protect themselves by co-expression of immunity proteins in the compartment of colicin destination. In Escherichia coli, colicin M (Cma) is inactivated by interaction with the Cma immunity protein (Cmi). We have crystallized and solved the structure of Cmi at a resolution of 1.95 Å using the recently developed ab initio phasing program ARCIMBOLDO. The monomeric structure of the mature 10 kDa protein comprises a long N-terminal α-helix and a four-stranded C-terminal β-sheet. Dimerization of this fold is mediated by an extended interface of hydrogen-bond interactions between the α-helix and the four-stranded β-sheet of the symmetry-related molecule. Two intermolecular disulfide bridges covalently connect this dimer to further lock the complex. The Cmi protein thus represents an example of 3D domain swapping stalled through physical linkage. The dimer is a highly charged complex with a significant surplus of negative charges, presumably responsible for interactions with Cma. Dimerization of Cmi was also demonstrated to occur in vivo. Although the Cmi-Cma complex is unique among bacteria, the general fold of Cmi is representative of a class of YebF-like proteins, which are known to be secreted into the external medium by some Gram-negative bacteria.

  2. Modeling approaches for ligand-based 3D similarity.

    PubMed

    Tresadern, Gary; Bemporad, Daniele

    2010-10-01

    3D ligand-based similarity approaches are widely used in the early phases of drug discovery for tasks such as hit finding by virtual screening or compound design with quantitative structure-activity relationships. Herein we review widely used software for performing such tasks. Some techniques are based on relatively mature technology, shape-based similarity for instance. Typically, these methods remained in the realm of the expert user, the experienced modeler. However, advances in implementation and speed have improved usability and allow these methods to be applied to databases comprising millions of compounds. There are now many reports of such methods impacting drug-discovery projects. As such, the medicinal chemistry community has become the intended market for some of these new tools, yet they may consider the wide array and choice of approaches somewhat disconcerting. Each method has subtle differences and is better suited to certain tasks than others. In this article we review some of the widely used computational methods via application, provide straightforward background on the underlying theory and provide examples for the interested reader to pursue in more detail. In the new era of preclinical drug discovery there will be ever more pressure to move faster and more efficiently, and computational approaches based on 3D ligand similarity will play an increasing role in this process.

  3. Fast vision-based catheter 3D reconstruction.

    PubMed

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D

    2016-07-21

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape-sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  4. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape-sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  5. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  6. Autostereoscopic three-dimensional display based on two parallax barriers.

    PubMed

    Luo, Jiang-Yong; Wang, Qiong-Hua; Zhao, Wu-Xiang; Li, Da-Hai

    2011-06-20

    An autostereoscopic three-dimensional (3D) display composed of a flat-panel display, two parallax barriers, and a backlight panel is proposed. Parallax barrier 1, located between the backlight panel and the flat-panel display, divides the light to create the perception of stereoscopic images. Parallax barrier 2, located between the flat-panel display and the viewers, serves to decrease the cross talk of the stereoscopic images. The operation principle of the display and the calculation equations for the parallax barriers are described in detail. An autostereoscopic 3D display prototype is developed. The prototype presents high-quality stereoscopic images. At the optimal viewing distance, it presents stereoscopic images without cross talk. At other viewing distances, it has less cross talk than a conventional autostereoscopic 3D display based on a single parallax barrier.
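
    For orientation, the design of a single front parallax barrier typically follows the standard thin-barrier geometry sketched below. These are textbook similar-triangle relations with assumed symbols, not the specific two-barrier design equations derived in the paper.

```latex
% p   : sub-pixel pitch of the flat-panel display
% g   : gap between the barrier and the pixel plane
% D   : design viewing distance (barrier to viewer)
% e   : lateral pitch of the viewing zones at the viewer
% N   : number of views,  P_b : pitch of the barrier slits
\begin{align}
  \frac{p}{g} &= \frac{e}{D}            && \text{(gap that fixes the viewing distance)}\\
  P_b &= N\,p\,\frac{D}{D+g}            && \text{(barrier pitch, slightly less than } N p\text{)}
\end{align}
```

    A barrier placed behind the panel, as for parallax barrier 1 above, obeys analogous but mirrored geometry, which is why the paper derives its own set of equations for the two-barrier configuration.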

  7. Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens.

    PubMed

    Shen, Xin; Wang, Yu-Jen; Chen, Hung-Shan; Xiao, Xiao; Lin, Yi-Hsin; Javidi, Bahram

    2015-02-15

    We present a three dimensional (3D) micro integral imaging display system with extended depth of focus by using a polarized bifocal liquid crystal lens. This lens and other optical components are combined as the relay optical element. The focal length of the relay optical element can be controlled to project an elemental image array in multiple positions with various lenslet image planes, by applying different voltages to the liquid crystal lens. The depth of focus of the proposed system can therefore be extended. The feasibility of our proposed system is experimentally demonstrated. In our experiments, the depth of focus of the display system is extended from 3.82 to 109.43 mm.

  8. Appearance-based color face recognition with 3D model

    NASA Astrophysics Data System (ADS)

    Wang, Chengzhang; Bai, Xiaoming

    2013-03-01

    Appearance-based face recognition approaches explore the color cues of face images, i.e., grey or color information, for the recognition task. They first encode color face images and then extract facial features for classification. Similar to the conventional singular value decomposition, a hypercomplex matrix also admits a singular value decomposition over the hypercomplex field. In this paper, a novel color face recognition approach based on hypercomplex singular value decomposition is proposed. The approach employs hypercomplex numbers to encode the color face information of different channels simultaneously. Hypercomplex singular value decomposition is then utilized to compute the basis vectors of the color face subspace. To improve the learning efficiency of the algorithm, a 3D active deformable model is exploited to generate virtual face images. Color face samples are projected onto the subspace and the projection coefficients are utilized as facial features. Experimental results on the CMU PIE face database verify the effectiveness of the proposed approach.

  9. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
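
    The mean-subtraction idea lends itself to a very small sketch: for each spatial plane of a spatially-low-pass subband, compute the mean, subtract it, and keep the per-plane means so they can be encoded separately. The (bands, rows, cols) layout and the names below are assumptions for illustration, not the flight implementation.

```python
import numpy as np

def subtract_plane_means(lowpass_subband):
    """Return a zero-mean copy of a spatially-low-pass subband plus the
    per-plane means that were removed. The subband is assumed to be stored
    as a (bands, rows, cols) array, one spatial plane per band."""
    means = lowpass_subband.mean(axis=(1, 2), keepdims=True)   # one mean per spatial plane
    return lowpass_subband - means, means.squeeze()
```

    The zero-mean planes can then be passed to an encoder designed for 2D image subbands, while the stored means are added back during reconstruction.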

  10. Tri-color composite volume H-PDLC grating and its application to 3D color autostereoscopic display.

    PubMed

    Wang, Kangni; Zheng, Jihong; Gao, Hui; Lu, Feiyue; Sun, Lijia; Yin, Stuart; Zhuang, Songlin

    2015-11-30

    A tri-color composite volume holographic polymer dispersed liquid crystal (H-PDLC) grating and its application to 3-dimensional (3D) color autostereoscopic display are reported in this paper. The composite volume H-PDLC grating consists of three different period volume H-PDLC sub-gratings. The longer period diffracts red light, the medium period diffracts the green light, and the shorter period diffracts the blue light. To record three different period gratings simultaneously, two photoinitiators are employed. The first initiator consists of methylene blue and p-toluenesulfonic acid and the second initiator is composed of Rose Bengal and N-phenyglycine. In this case, the holographic recording medium is sensitive to entire visible wavelengths, including red, green, and blue so that the tri-color composite grating can be written simultaneously by harnessing three different color laser beams. In the experiment, the red beam comes from a He-Ne laser with an output wavelength of 632.8 nm, the green beam comes from a Verdi solid state laser with an output wavelength of 532 nm, and the blue beam comes from a He-Cd laser with an output wavelength of 441.6 nm. The experimental results show that diffraction efficiencies corresponding to red, green, and blue colors are 57%, 75% and 33%, respectively. Although this diffraction efficiency is not perfect, it is high enough to demonstrate the effect of 3D color autostereoscopic display.

  11. 3D modeling based on CityEngine

    NASA Astrophysics Data System (ADS)

    Jia, Guangyin; Liao, Kaiju

    2017-03-01

    Currently, there are many 3D modeling software packages, such as 3DMAX and AUTOCAD, as well as the more popular BIM software represented by REVIT. The CityEngine modeling software introduced in this paper can fully utilize existing GIS data and combine it with other built models to carry out 3D modeling of the interior and exterior parts of buildings in a rapid and batch manner, so as to improve 3D modeling efficiency.

  12. 3D Gabor wavelet based vessel filtering of photoacoustic images.

    PubMed

    Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi

    2016-08-01

    Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of Gabor wavelets to enhance vasculature while eliminating the noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is carried out for extracting vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image, and tubular structures are then classified by eigenvalue decomposition of the local Hessian matrix at each voxel. The algorithm is tested on non-invasive experimental data and shows appreciable vasculature enhancement in photoacoustic images.
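
    The Hessian classification step can be illustrated with a short sketch: at a fixed scale, build the Hessian of the smoothed volume at every voxel and keep voxels whose eigenvalue pattern matches a bright tube (two strongly negative eigenvalues of similar magnitude, one near zero). This is a generic Frangi-style criterion with illustrative thresholds, not the exact filter of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubular_mask(volume, sigma=2.0):
    """Boolean mask of voxels whose local Hessian eigenvalues look tubular
    (bright vessel on a dark background) at scale sigma; thresholds are
    illustrative."""
    smoothed = gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(smoothed)
    hessian = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        gij = np.gradient(gi)
        for j in range(3):
            hessian[..., i, j] = gij[j]
    lam = np.linalg.eigvalsh(hessian)            # sorted ascending: lam1 <= lam2 <= lam3
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    # Two strongly negative eigenvalues of similar magnitude, third near zero
    return (l2 < 0) & (np.abs(l2) > 0.5 * np.abs(l1)) & (np.abs(l3) < 0.25 * np.abs(l2))
```

    In a multi-scale analysis this mask (or a continuous vesselness score) would be computed for several values of sigma and combined, so that both thin and thick vessels respond.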

  13. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner, eye-center and iso-center cues to improve pupil detection, and use optical-flow information for eye tracking. The resulting eye tracking system robustly combines eye detection and optical-flow-based image tracking. In addition, we incorporate the orientation-sensor information from the mobile phone to further improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.

  14. Hydrogel-based reinforcement of 3D bioprinted constructs

    PubMed Central

    Levato, R; Peiffer, Q C; de Ruijter, M; Hennink, W E; Vermonden, T; Malda, J

    2016-01-01

    Progress within the field of biofabrication is hindered by a lack of suitable hydrogel formulations. Here, we present a novel approach based on a hybrid printing technique to create cellularized 3D printed constructs. The hybrid bioprinting strategy combines a reinforcing gel for mechanical support with a bioink to provide a cytocompatible environment. In comparison with thermoplastics such as ε-polycaprolactone, the hydrogel-based reinforcing gel platform enables printing at cell-friendly temperatures, targets the bioprinting of softer tissues and allows for improved control over degradation kinetics. We prepared amphiphilic macromonomers based on poloxamer that form hydrolysable, covalently cross-linked polymer networks. Dissolved at a concentration of 28.6% w/w in water, it functions as reinforcing gel, while a 5% w/w gelatin-methacryloyl based gel is utilized as bioink. This strategy allows for the creation of complex structures, where the bioink provides a cytocompatible environment for encapsulated cells. Cell viability of equine chondrocytes encapsulated within printed constructs remained largely unaffected by the printing process. The versatility of the system is further demonstrated by the ability to tune the stiffness of printed constructs between 138 and 263 kPa, as well as to tailor the degradation kinetics of the reinforcing gel from several weeks up to more than a year. PMID:27431861

  15. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serves to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore, all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not yet available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  16. Energy harvesting “3-D knitted spacer” based piezoelectric textiles

    NASA Astrophysics Data System (ADS)

    Anand, S.; Soin, N.; Shah, T. H.; Siores, E.

    2016-07-01

    The piezoelectric effect in Poly(vinylidene fluoride), PVDF, was discovered over four decades ago and since then, significant work has been carried out aiming at the production of high β-phase fibres and their integration into fabric structures for energy harvesting. However, little work has been done in the area of production of "true piezoelectric fabric structures" based on flexible polymeric materials such as PVDF. In this work, we demonstrate "3-D knitted spacer" technology based all-fibre piezoelectric fabrics as power generators and energy harvesters. The knitted single-structure piezoelectric generator consists of high β-phase (~80%) piezoelectric PVDF monofilaments as the spacer yarn interconnected between silver (Ag) coated polyamide multifilament yarn layers acting as the top and bottom electrodes. The novel and unique textile structure provides an output power density in the range of 1.10-5.10 μW cm-2 at applied impact pressures in the range of 0.02-0.10 MPa, thus providing significantly higher power outputs and efficiencies over the existing 2-D woven and nonwoven piezoelectric structures. The high energy efficiency, mechanical durability and comfort of the soft, flexible and all-fibre based power generator is highly attractive for a variety of potential applications such as wearable electronic systems and energy harvesters charged from ambient environment or by human movement.

  17. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum, because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, the 3D model is the navigation environment preferred by users according to our survey. Therefore, a 3D-model-based indoor navigation system was developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, which are built to support the localization, navigation and visualization functions of the system. The system has three main strengths: it stores all required data in one database and processes most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, visualizes the 3D model on a mobile display with high performance.

  18. A New Display Format Relating Azimuth-Scanning Radar Data and All-Sky Images in 3-D

    NASA Technical Reports Server (NTRS)

    Swartz, Wesley E.; Seker, Ilgin; Mathews, John D.; Aponte, Nestor

    2010-01-01

    Here we correlate features in a sequence of all-sky images of 630 nm airglow with the three-dimensional (3-D) structure of electron densities in the F region above Arecibo. Pairs of 180 azimuth scans (using the Gregorian and line feeds) of the two-beam incoherent scatter radar (ISR) have been plotted in cone pictorials of the line-of-sight electron densities. The plots include projections of the 630 nm airglow onto the ground using the same spatial scaling as for the ISR data. Selected sequential images from the night of 16-17 June 2004 correlate ionospheric plasma features with scales comparable to the ISR density-cone diameter. The entire set of over 100 images spanning about eight hours is available as a movie. The correlation between the airglow and the electron densities is not unexpected, but the new display format shows the 3-D structures better than separate 2-D plots in latitude and longitude for the airglow and in height and time for the electron densities. Furthermore, the animations help separate the bands of airglow from obscuring clouds and the star field.

  19. Gis-Based Smart Cartography Using 3d Modeling

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography currently in use is often not suitable for GIS because it is not geometrically and topologically correctly structured. The aim of this research is to structure and organize numerical cartography in 3D for GIS and turn it into CityGML standardized features. The work is framed around a first phase of methodological analysis aimed at underlining which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifications, the translation into formal content was investigated using proprietary interchange software (SketchUp) to support guideline implementations for generating a 3D GIS structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality according to the previous standards and edited when and where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The 3D GIS structure has been managed in a GIS environment to run further spatial analyses and energy performance estimates that are not achievable in a 2D environment. In particular, geometrical building parameters (footprint, volume, etc.) are computed and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out dealing with asbestos and home renovation charges, showing how the built 3D city model can support municipal managers with risk diagnosis of the present situation and the development of strategies for sustainable redevelopment.

  20. Cranial Base Superimposition for 3D Evaluation of Soft Tissue Changes

    PubMed Central

    Cevidanes, Lucia H.C.; Motta, Alexandre; Proffit, William R.; Ackerman, James L.; Styner, Martin

    2009-01-01

    The recent emphasis on soft tissues as the limiting factor in treatment and on soft tissue relationships in establishing the goals of treatment has made 3D analysis of soft tissues more important in diagnosis and treatment planning. It is equally important to be able to detect changes in the facial soft tissues produced by growth and/or treatment. This requires structures of reference for superimposition, and a way to display the changes with quantitative information. This paper outlines a technique for quantifying facial soft tissue changes as viewed in CBCT data, using fully-automated voxel-wise registration of the cranial base surface. The assessment of change of soft tissues is done via calculation of the Euclidean surface distances between the 3D models. Color maps are used for visual assessment of the location and quantification of changes. This methodology allows a detailed examination of soft tissue changes with growth and/or treatment. Because of the lack of stable references with 3D photogrammetry, 3D photography and laser scanning, soft tissue changes cannot be accurately quantified by these methods. PMID:20381752

  1. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along three horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency alignment with subjective assessment. PMID:25133265
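
    The quality index described above reduces to a few lines of array code. The sketch below follows the common gradient-magnitude-similarity form and averages it over a 3D volume assumed to be stacked along a third viewpoint/disparity axis; the stabilising constant and all names are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def gms_3d(volume_ref, volume_dist, c=0.0026):
    """Pointwise 3D gradient-magnitude similarity between a reference and a
    distorted volume, averaged over all points (a sketch of a 3D-GMS-style
    score; `c` stabilises the ratio when gradients are small)."""
    def grad_mag(v):
        gx, gy, gz = np.gradient(v.astype(float))     # horizontal, vertical, viewpoint
        return np.sqrt(gx**2 + gy**2 + gz**2)
    g_ref, g_dist = grad_mag(volume_ref), grad_mag(volume_dist)
    gms = (2.0 * g_ref * g_dist + c) / (g_ref**2 + g_dist**2 + c)
    return float(gms.mean())
```

    Higher scores indicate closer agreement of local gradient structure between the two volumes, which is what the pooled quality score reports.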

  2. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true-positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
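
    The overlap measures quoted above are easy to pin down in code; the sketch below computes them for two binary masks (array names are illustrative).

```python
import numpy as np

def overlap_scores(auto_mask, gold_mask):
    """Standard volume-overlap measures between an automated and a
    gold-standard binary segmentation."""
    a, g = auto_mask.astype(bool), gold_mask.astype(bool)
    inter = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    dice = 2.0 * inter / (a.sum() + g.sum())           # Dice similarity coefficient
    jaccard = inter / union                            # Jaccard coefficient
    tpvf = inter / g.sum()                             # true-positive volume fraction
    rel_vol_diff = (a.sum() - g.sum()) / g.sum()       # relative volume difference
    return dice, jaccard, tpvf, rel_vol_diff
```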

  3. Collaboration on Scene Graph Based 3D Data

    NASA Astrophysics Data System (ADS)

    Ammon, Lorenz; Bieri, Hanspeter

    Professional 3D digital content creation tools, like Alias Maya or discreet 3ds max, offer only limited support for a team of artists to work on a 3D model collaboratively. We present a scene graph repository system that enables fine-grained collaboration on scenes built using standard 3D DCC tools by applying the concept of collaborative versions to a general attributed scene graph. Artists can work on the same scene in parallel without locking out each other. The artists' changes to a scene are regularly merged to ensure that all artists can see each others progress and collaborate on current data. We introduce the concept of indirect changes and indirect conflicts to systematically inspect the effects that collaborative changes have on a scene. Inspecting indirect conflicts helps maintaining scene consistency by systematically looking for inconsistencies at the right places.

  4. 3-dimensional (3D) fabricated polymer based drug delivery systems.

    PubMed

    Moulton, Simon E; Wallace, Gordon G

    2014-11-10

    Drug delivery from 3-dimensional (3D) structures is a rapidly growing area of research. It is essential to achieve structures wherein drug stability is ensured, the drug loading capacity is appropriate and the desired controlled release profile can be attained. Attention must also be paid to the development of appropriate fabrication machinery that allows 3D drug delivery systems (DDS) to be produced in a simple, reliable and reproducible manner. The range of fabrication methods currently being used to form 3D DDSs include electrospinning (solution and melt), wet-spinning and printing (3-dimensional). The use of these techniques enables production of DDSs from the macro-scale down to the nano-scale. This article reviews progress in these fabrication techniques to form DDSs that possess desirable drug delivery kinetics for a wide range of applications.

  5. Axisymmetric Implementation for 3D-Based DSMC Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.

    2011-01-01

    The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used, such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D, but instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.
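
    The rotation-back step lends itself to a small sketch: after a fully 3D move, each molecule is rotated about the symmetry axis so that it lies back on the symmetry plane, and its velocity is rotated by the same angle so the radial and azimuthal components are preserved. The axis convention (x as symmetry axis, z = 0 as symmetry plane) and the array names below are illustrative assumptions, not DAC's internals.

```python
import numpy as np

def rotate_to_symmetry_plane(pos, vel):
    """Rotate (N, 3) molecule positions about the x-axis back onto the
    symmetry plane z = 0, applying the same rotation to the velocities."""
    y, z = pos[:, 1], pos[:, 2]
    r = np.hypot(y, z)
    # Rotation angle that maps (y, z) onto (r, 0); guard against r == 0
    cos_t = np.divide(y, r, out=np.ones_like(r), where=r > 0)
    sin_t = np.divide(z, r, out=np.zeros_like(r), where=r > 0)
    new_pos = pos.copy()
    new_pos[:, 1], new_pos[:, 2] = r, 0.0
    new_vel = vel.copy()
    new_vel[:, 1] = cos_t * vel[:, 1] + sin_t * vel[:, 2]   # radial component
    new_vel[:, 2] = -sin_t * vel[:, 1] + cos_t * vel[:, 2]  # azimuthal component
    return new_pos, new_vel
```

    Applying this after every move step keeps all molecules on the symmetry plane while still letting the move itself, and the wall interactions, remain fully three-dimensional.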

  6. 3D Clumped Cell Segmentation Using Curvature Based Seeded Watershed

    PubMed Central

    Atta-Fosu, Thomas; Guo, Weihong; Jeter, Dana; Mizutani, Claudia M.; Stopczynski, Nathan; Sousa-Neves, Rui

    2017-01-01

    Image segmentation is an important process that separates objects from the background and from each other. Applied to cells, the results can be used for cell counting, which is very important in medical diagnosis, treatment and biological research. Segmenting 3D confocal microscopy images containing cells of different shapes and sizes is still challenging because the nuclei are closely packed. The watershed transform provides an efficient tool for segmenting such nuclei provided a reasonable set of markers can be found in the image. In the presence of low-contrast variation or excessive noise in the given image, the watershed transform leads to over-segmentation (a single object is overly split into multiple objects). The traditional watershed uses the local minima of the input image and will characteristically find multiple minima in one object unless they are specified (marker-controlled watershed). An alternative to using the local minima is a supervised technique called seeded watershed, which supplies single seeds to replace the minima for the objects. Consequently, the accuracy of a seeded watershed algorithm relies on the accuracy of the predefined seeds. In this paper, we present a segmentation approach based on the geometric morphological properties of the 'landscape', using curvatures. The curvatures are computed as the eigenvalues of the shape matrix, producing accurate seeds that also inherit the original shape of their respective cells. We compare with some popular approaches and show the advantage of the proposed method. PMID:28280723
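
    A rough end-to-end sketch of a curvature-seeded watershed for bright, clumped nuclei is shown below: curvature-like quantities are estimated from the Hessian of the smoothed stack, voxels where all of them are negative (the caps of bright blobs) become seeds, and a seeded watershed is run on the inverted intensity. Parameter values and the plain-Hessian curvature proxy are illustrative assumptions; the paper derives its curvatures from a shape matrix rather than this approximation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def curvature_seeded_watershed(volume, smooth_sigma=2.0):
    """Segment bright, touching nuclei in a 3D stack with a curvature-seeded
    watershed (sketch): negative-curvature voxels form the seeds."""
    smoothed = ndi.gaussian_filter(volume.astype(float), smooth_sigma)
    grads = np.gradient(smoothed)
    hessian = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        gij = np.gradient(gi)
        for j in range(3):
            hessian[..., i, j] = gij[j]
    curv = np.linalg.eigvalsh(hessian)                 # curvature proxies per voxel
    seeds_mask = np.all(curv < 0, axis=-1)             # caps of bright blobs
    markers, _ = ndi.label(seeds_mask)
    # Flood the inverted intensity from the seeds, restricted to foreground
    return watershed(-smoothed, markers, mask=volume > volume.mean())
```

    Because each connected seed sits inside a single nucleus, the flooding splits touching nuclei along the dim ridges between them instead of over-segmenting on every local minimum.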

  7. Microseismic network design assessment based on 3D ray tracing

    NASA Astrophysics Data System (ADS)

    Näsholm, Sven Peter; Wuestefeld, Andreas; Lubrano-Lavadera, Paul; Lang, Dominik; Kaschwich, Tina; Oye, Volker

    2016-04-01

    There is increasing demand on the versatility of microseismic monitoring networks. In early projects, being able to locate any triggers was considered a success. These early successes led to a better understanding of how to extract value from microseismic results. Today operators, regulators, and service providers work closely together in order to find the optimum network design to meet various requirements. In the current study we demonstrate an integrated and streamlined network capability assessment approach. It is intended for use during the microseismic network design process prior to installation. The assessments are derived from 3D ray tracing between a grid of event points and the sensors. Three aspects are discussed: 1) Magnitude of completeness or detection limit; 2) Event location accuracy; and 3) Ground-motion hazard. The network capability parameters 1) and 2) are estimated at all hypothetic event locations and are presented in the form of maps given a seismic sensor coordinate scenario. In addition, the ray tracing traveltimes permit to estimate the point-spread-functions (PSFs) at the event grid points. PSFs are useful in assessing the resolution and focusing capability of the network for stacking-based event location and imaging methods. We estimate the performance for a hypothetical network case with 11 sensors. We consider the well-documented region around the San Andreas Fault Observatory at Depth (SAFOD) located north of Parkfield, California. The ray tracing is done through a detailed velocity model which covers a 26.2 by 21.2 km wide area around the SAFOD drill site with a resolution of 200 m both for the P-and S-wave velocities. Systematic network capability assessment for different sensor site scenarios prior to installation facilitates finding a final design which meets the survey objectives.

  8. Texture-Based Correspondence Display

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael

    2004-01-01

    Texture-based correspondence display is a methodology to display corresponding data elements in visual representations of complex multidimensional, multivariate data. Texture is utilized as a persistent medium to contain a visual representation model and as a means to create multiple renditions of data where color is used to identify correspondence. Corresponding data elements are displayed over a variety of visual metaphors in a normal rendering process without adding extraneous linking metadata creation and maintenance. The effectiveness of visual representation for understanding data is extended to the expression of the visual representation model in texture.

  9. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences for vertical overestimation between display media are influenced more by the perceived distal object size rather than by the dimensionality of the display.

  10. Face recognition based on matching of local features on 3D dynamic range sequences

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvement of technology of 3D image acquisition and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  11. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available stimulus sets, which cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, which allows simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  12. Silk-based anisotropical 3D biotextiles for bone regeneration.

    PubMed

    Ribeiro, Viviana P; Silva-Correia, Joana; Nascimento, Ana I; da Silva Morais, Alain; Marques, Alexandra P; Ribeiro, Ana S; Silva, Carla J; Bonifácio, Graça; Sousa, Rui A; Oliveira, Joaquim M; Oliveira, Ana L; Reis, Rui L

    2017-04-01

    Bone loss in the craniofacial complex can be treated using several conventional therapeutic strategies that face many obstacles and limitations. In this work, novel three-dimensional (3D) biotextile architectures were developed as a possible strategy for flat bone regeneration applications. As a fully automated processing route, this strategy has the potential to be easily industrialized. Silk fibroin (SF) yarns were processed into weft-knitted fabrics spaced by a monofilament of polyethylene terephthalate (PET). A comparative study with a similar 3D structure made entirely of PET was established. Highly porous scaffolds with homogeneous pore distribution were observed using micro-computed tomography analysis. Wet-state dynamic mechanical analysis revealed that, in the frequency range tested, the storage modulus values obtained for the SF-PET scaffolds were higher than for the PET scaffolds. Human adipose-derived stem cells (hASCs) cultured on the SF-PET spacer structures showed the typical pattern of ALP activity under osteogenic culture conditions. Osteogenic differentiation of hASCs on SF-PET and PET constructs was also observed by extracellular matrix mineralization and expression of osteogenic-related markers (osteocalcin, osteopontin and collagen type I) after 28 days of osteogenic culture, in comparison to the control basal medium. Quantification of convergent macroscopic blood vessels toward the scaffolds by a chick chorioallantoic membrane assay showed a higher angiogenic response induced by the SF-PET textile scaffolds than by the PET structures and gelatin sponge controls. Subcutaneous implantation in CD-1 mice revealed tissue ingrowth accompanied by blood vessel infiltration in both spacer constructs. The structural adaptability of textile structures, combined with the structural similarities of the 3D knitted spacer fabrics to craniofacial bone tissue and the achieved biological performance, makes these scaffolds a possible solution for tissue

  13. Omnidirectional multiview three-dimensional display based on direction-selective light-emitting diode array

    NASA Astrophysics Data System (ADS)

    Yan, Caijie; Liu, Xu; Liu, Di; Xie, Jing; Xia, Xin Xing; Li, Haifeng

    2011-03-01

    A volumetric display system based on a rotating light-emitting diode (LED) array panel can faithfully realize a three-dimensional (3-D) display in space, but its drawback is that the occlusion of the 3-D image is missing. We propose an omnidirectional 3-D display with correct occlusion based on a direction-selective LED array panel, which is realized by setting a direction-convergent diaphragm array in front of the LED array. Each diaphragm restricts the light-emitting characteristic of its LED. By using the direction-convergent diaphragm array, an observer around the display system can only see the image displayed by the LED array at the corresponding position. With the high-speed rotation of the LED panel, a series of views of the 3-D scene is displayed at each angular patch over one revolution. We set up an acquisition system to record 180 views of the 3-D scene with a camera rotating along a circle, and the 180 images are then displayed sequentially on the rotating direction-selective LED array to obtain a 360 deg 3-D display. This 3-D display technology has two main advantages: it is easy to obtain viewer-position-dependent correct occlusion, and it simplifies the 3-D data preprocessing, which is helpful for real-time 3-D display.

  14. US-CT 3D dual imaging by mutual display of the same sections for depicting minor changes in hepatocellular carcinoma.

    PubMed

    Fukuda, Hiroyuki; Ito, Ryu; Ohto, Masao; Sakamoto, Akio; Otsuka, Masayuki; Togawa, Akira; Miyazaki, Masaru; Yamagata, Hitoshi

    2012-09-01

    The purpose of this study was to evaluate the usefulness of ultrasound-computed tomography (US-CT) 3D dual imaging for the detection of small extranodular growths of hepatocellular carcinoma (HCC). The clinical and pathological profiles of 10 patients with single nodular type HCC with extranodular growth (extranodular growth) who underwent a hepatectomy were evaluated using two-dimensional (2D) ultrasonography (US), three-dimensional (3D) US, 3D computed tomography (CT) and 3D US-CT dual images. Raw 3D data was converted to DICOM (Digital Imaging and Communication in Medicine) data using Echo to CT (Toshiba Medical Systems Corp., Tokyo, Japan), and the 3D DICOM data was directly transferred to the image analysis system (ZioM900, ZIOSOFT Inc., Tokyo, Japan). By inputting the angle number (x, y, z) of the 3D CT volume data into the ZioM900, multiplanar reconstruction (MPR) images of the 3D CT data were displayed in a manner such that they resembled the conventional US images. Eleven extranodular growths were detected pathologically in 10 cases. 2D US was capable of depicting only 2 of the 11 extranodular growths. 3D CT was capable of depicting 4 of the 11 extranodular growths. On the other hand, 3D US was capable of depicting 10 of the 11 extranodular growths, and 3D US-CT dual images, which enable the dual analysis of the CT and US planes, revealed all 11 extranodular growths. In conclusion, US-CT 3D dual imaging may be useful for the detection of small extranodular growths.

  15. Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene

    NASA Technical Reports Server (NTRS)

    Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing

    2011-01-01

    It is difficult for computer simulation methods to study the radiation regime at large scale. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases both agreed well. Meanwhile, at the tree and forest level, the results are also good.

  16. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display to float virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in a circumferential direction without the use of high speed or a moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives as if the animated figures were present.

  17. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto a finger surface. Viewed from another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
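
    The record does not give implementation details, but the core of most fringe-projection profilometry pipelines is recovering the wrapped phase of the deformed fringes. A minimal sketch of standard four-step phase-shifting demodulation (not the paper's specific optimum three-fringe-number method) is shown below; the images I1..I4 are assumed to be captured with phase shifts of 0, pi/2, pi and 3*pi/2.

      import numpy as np

      def wrapped_phase(i1, i2, i3, i4):
          """Four-step phase-shifting demodulation.
          i1..i4: images captured with fringe phase shifts 0, pi/2, pi, 3*pi/2.
          Returns the wrapped phase in (-pi, pi]."""
          return np.arctan2(i4 - i2, i1 - i3)

      # Synthetic demonstration: a tilted-plane phase modulated into fringes.
      h, w = 480, 640
      x = np.linspace(0, 16 * np.pi, w)
      phi_true = np.tile(x, (h, 1)) + 0.5 * np.linspace(0, np.pi, h)[:, None]
      shots = [0.5 + 0.4 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

      phi_wrapped = wrapped_phase(*shots)
      # The wrapped phase still needs unwrapping (e.g. with the multi-frequency /
      # optimum three-fringe-number approach mentioned in the abstract) before it
      # can be converted to height through the system calibration.
      print(phi_wrapped.shape, phi_wrapped.min().round(3), phi_wrapped.max().round(3))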

  18. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  19. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.

  20. Hybrid atlas-based and image-based approach for segmenting 3D brain MRIs

    NASA Astrophysics Data System (ADS)

    Bueno, Gloria; Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

    2001-07-01

    This work is a contribution to the problem of localizing key cerebral structures in 3D MRIs and to its quantitative evaluation. In pursuing it, the cooperation between an image-based segmentation method and a hierarchical deformable registration approach has been considered. The segmentation relies on two main processes: homotopy modification and contour decision. The first is achieved by a marker extraction stage where homogeneous 3D regions of an image, I(s), from the data set are identified. These regions, M(I), are obtained by combining information from a deformable atlas, achieved by warping eight previously labeled maps onto I(s). The goal of the decision stage is then to precisely locate the contours of the 3D regions set by the markers. This contour decision is performed by a 3D extension of the watershed transform. The anatomical structures taken into consideration and embedded into the atlas are the brain, ventricles, corpus callosum, cerebellum, right and left hippocampus, medulla and midbrain. The hybrid method operates fully automatically and in 3D, successfully providing segmented brain structures. The quality of the segmentation has been studied in terms of the detected volume ratio by using the kappa statistic and ROC analysis. Results of the method are shown and validated on a 3D MRI phantom. This study forms part of an on-going long-term research effort aiming at the creation of a 3D probabilistic multi-purpose anatomical brain atlas.
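
    The contour-decision step described above is essentially a marker-controlled watershed in 3D. A minimal sketch of that general idea, using scikit-image and SciPy on a synthetic volume (the atlas-derived markers of the paper are replaced here by simple hand-placed seed labels), could look like this:

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.segmentation import watershed

      # Synthetic 3D "intensity" volume: two blurred blobs standing in for structures.
      vol = np.zeros((64, 64, 64))
      vol[20:30, 20:30, 20:30] = 1.0
      vol[40:52, 38:50, 30:44] = 1.0
      vol = ndi.gaussian_filter(vol, sigma=2.0)

      # Markers play the role of the atlas-derived homogeneous regions M(I):
      # labels 1 and 2 seed the two structures, label 3 seeds the background.
      markers = np.zeros_like(vol, dtype=np.int32)
      markers[25, 25, 25] = 1
      markers[46, 44, 37] = 2
      markers[2, 2, 2] = 3

      # The watershed is flooded on the gradient magnitude, so region boundaries
      # settle on intensity edges.
      gradient = ndi.gaussian_gradient_magnitude(vol, sigma=1.0)
      labels = watershed(gradient, markers)

      print("voxels per label:", {int(l): int((labels == l).sum()) for l in np.unique(labels)})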

  1. New 3-D microarray platform based on macroporous polymer monoliths.

    PubMed

    Rober, M; Walter, J; Vlakh, E; Stahl, F; Kasper, C; Tennikova, T

    2009-06-30

    Polymer macroporous monoliths are widely used as efficient sorbents in different, mostly dynamic, interphase processes. In this paper, monolithic materials strongly bound to an inert glass surface are suggested as operative matrices in the development of three-dimensional (3-D) microarrays. For this purpose, several rigid macroporous copolymers differing in reactivity and hydrophobic-hydrophilic properties were synthesized and tested: (1) glycidyl methacrylate-co-ethylene dimethacrylate (poly(GMA-co-EDMA)), (2) glycidyl methacrylate-co-glycerol dimethacrylate (poly(GMA-co-GDMA)), (3) N-hydroxyphthalimide ester of acrylic acid-co-glycidyl methacrylate-co-ethylene dimethacrylate (poly(HPIEAA-co-GMA-co-EDMA)), (4) 2-cyanoethyl methacrylate-co-ethylene dimethacrylate (poly(CEMA-co-EDMA)), and (5) 2-cyanoethyl methacrylate-co-2-hydroxyethyl methacrylate-co-ethylene dimethacrylate (poly(CEMA-co-HEMA-co-EDMA)). The constructed devices were used as platforms for protein microarray construction, and the model mouse IgG-goat anti-mouse IgG affinity pair was used to demonstrate the potential of the developed test-systems, as well as to optimize the microanalytical conditions. The offered microarray platforms were applied to detect the bone tissue marker osteopontin directly in cell culture medium.

  2. Contactless operating table control based on 3D image processing.

    PubMed

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance of and affinity for natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments, such as the operating room. Here, manifold medical disciplines give rise to a great variety of procedures, and thus of staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk of causing a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves the system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, giving a System Usability Scale score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, as such interfaces become safer and more direct.

  3. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as a part of the comprehensive approach of the European FP7 project TeraSCREEN, using multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution up to 5 mm is obtained. With the 16×16 MIMO system 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles where the angular resolution is obtained by a focusing elliptical mirror. With this system a high resolution 3D image can be generated with 4 frames per second, each containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system underlining the feasibility of the approach.
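
    As a quick sanity check on the figures quoted above, the range resolution of a bandwidth-limited radar follows delta_R = c / (2B); a two-line sketch confirms that a 30 GHz bandwidth gives the stated 5 mm.

      from scipy.constants import c   # speed of light in m/s

      bandwidth = 30e9                        # Hz, as stated in the abstract
      range_resolution = c / (2 * bandwidth)  # classic radar range-resolution formula
      print(f"range resolution = {range_resolution * 1e3:.2f} mm")  # ~5 mm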

  4. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into three-dimensional imaging methods and system studies in order to meet the requirements of speed and high accuracy. In this article, we realize a fast and high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras have the same spatial resolution, letting us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map acting as a constraint on the stereo pairs during stereo matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using FPGA (Altera Cyclone IV series) concurrent computing, we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up the stereo matching process, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
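
    The key coupling between the two sensors is that a TOF depth Z maps to a stereo disparity d = f * B / Z, which then bounds the per-pixel search range. A small sketch of that step follows; the focal length, baseline and tolerance values are illustrative assumptions, not values from the paper.

      import numpy as np

      focal_px = 800.0      # assumed focal length of the rectified stereo pair, in pixels
      baseline_m = 0.10     # assumed stereo baseline in metres
      tolerance_px = 4      # assumed +/- window around the TOF-predicted disparity

      def disparity_search_range(tof_depth_m):
          """Map a TOF depth map to a per-pixel disparity search window.
          Pixels with no valid depth (<= 0) fall back to a full search range."""
          with np.errstate(divide="ignore"):
              d0 = np.where(tof_depth_m > 0, focal_px * baseline_m / tof_depth_m, np.nan)
          d_min = np.where(np.isnan(d0), 0.0, np.clip(d0 - tolerance_px, 0, None))
          d_max = np.where(np.isnan(d0), 128.0, d0 + tolerance_px)
          return d_min, d_max

      depth = np.full((480, 640), 2.0)       # a flat scene 2 m away, for illustration
      lo, hi = disparity_search_range(depth)
      print(lo[0, 0], hi[0, 0])              # ~36 .. 44 px instead of 0 .. 128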

  5. Tour of the World’s Largest 3D Printed Polymer Structure on Display at IBS 2016

    SciTech Connect

    Green, Johney

    2016-01-22

    ORNL’s Johney Green guides a Periscope tour of the 3D printed house and vehicle demonstration called AMIE (Additive Manufacturing Integrated Energy) during the International Builders’ Show 2016 in Las Vegas. See the world’s largest 3D printed polymer structure – made with carbon fiber reinforced ABS plastic, insulated with next-generation vacuum insulation panels, and outfitted with a micro-kitchen by GE Appliances – that was designed to be powered by a 3D printed utility vehicle using bidirectional wireless power technology. Learn more about AMIE at https://www.youtube.com/watch?v=RCkQB... and http://www.ornl.gov/amie.

  6. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    This DTIC record (a Naval Postgraduate School, Monterey, California, Master's Thesis approved for public release, distribution unlimited) contains cover-page form fields rather than an abstract. The only recoverable fragment reads: "... printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum ...".

  7. 3D-2D ultrasound feature-based registration for navigated prostate biopsy: a feasibility study.

    PubMed

    Selmi, Sonia Y; Promayon, Emmanuel; Troccaz, Jocelyne

    2016-08-01

    The aim of this paper is to describe a 3D-2D ultrasound feature-based registration method for navigated prostate biopsy and its first results obtained on patient data. A system combining a low-cost tracking system and a 3D-2D registration algorithm was designed. The proposed 3D-2D registration method combines geometric and image-based distances. After extracting features from ultrasound images, 3D and 2D features within a defined distance are matched using an intensity-based function. The results are encouraging and show acceptable errors with simulated transforms applied on ultrasound volumes from real patients.

  8. Disparity pattern-based autostereoscopic 3D metrology system for in situ measurement of microstructured surfaces.

    PubMed

    Li, Da; Cheung, Chi Fai; Ren, MingJun; Whitehouse, David; Zhao, Xing

    2015-11-15

    This paper presents a disparity pattern-based autostereoscopic (DPA) 3D metrology system that makes use of a microlens array to capture raw 3D information of the measured surface in a single snapshot through a CCD camera. Hence, a 3D digital model of the target surface with the measuring data is generated through a system-associated direct extraction of disparity information (DEDI) method. The DEDI method is highly efficient for performing the direct 3D mapping of the target surface based on tomography-like operation upon every depth plane with the defocused information excluded. Precise measurement results are provided through an error-elimination process based on statistical analysis. Experimental results show that the proposed DPA 3D metrology system is capable of measuring 3D microstructured surfaces with submicrometer measuring repeatability for high precision and in situ measurement of microstructured surfaces.

  9. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  10. Tour of the World’s Largest 3D Printed Polymer Structure on Display at IBS 2016

    ScienceCinema

    Green, Johney

    2016-07-12

    ORNL’s Johney Green guides a Periscope tour of the 3D printed house and vehicle demonstration called AMIE (Additive Manufacturing Integrated Energy) during the International Builders’ Show 2016 in Las Vegas. See the world’s largest 3D printed polymer structure – made with carbon fiber reinforced ABS plastic, insulated with next-generation vacuum insulation panels, and outfitted with a micro-kitchen by GE Appliances – that was designed to be powered by a 3D printed utility vehicle using bidirectional wireless power technology. Learn more about AMIE at https://www.youtube.com/watch?v=RCkQB... and http://www.ornl.gov/amie.

  11. Volumetric display system based on three-dimensional scanning of inclined optical image.

    PubMed

    Miyazaki, Daisuke; Shiba, Kensuke; Sotsuka, Koji; Matsushita, Kenji

    2006-12-25

    A volumetric display system based on three-dimensional (3D) scanning of an inclined image is reported. An optical image of a two-dimensional (2D) display, which is a vector-scan display monitor placed obliquely in an optical imaging system, is moved laterally by a galvanometric mirror scanner. Inclined cross-sectional images of a 3D object are displayed on the 2D display in accordance with the position of the image plane to form a 3D image. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision because they are real images formed in a 3D space. Experimental results of volumetric imaging from computed-tomography images and 3D animated images are presented.

  12. [3-D endocardial surface modelling based on the convex hull algorithm].

    PubMed

    Lu, Ying; Xi, Ri-hui; Shen, Hai-dong; Ye, You-li; Zhang, Yong

    2006-11-01

    In this paper, a method based on the convex hull algorithm is presented for extracting modelling data from the locations of catheter electrodes within a cardiac chamber. It is used to create a 3-D model of the heart chamber during diastole and achieves a good result in the VTK-based 3-D reconstruction of the chamber.
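
    As an illustration of the underlying geometric step (not the paper's VTK pipeline), SciPy can build the convex hull of a cloud of electrode positions directly; the random points below stand in for catheter electrode locations.

      import numpy as np
      from scipy.spatial import ConvexHull

      # Synthetic stand-in for catheter electrode positions sampled inside a chamber:
      # points scattered within an ellipsoid-like region (units of mm).
      rng = np.random.default_rng(0)
      pts = rng.normal(size=(200, 3)) * np.array([20.0, 15.0, 30.0])

      hull = ConvexHull(pts)
      # hull.vertices indexes the points on the hull surface, hull.simplices gives
      # the triangular faces that could be handed to a renderer such as VTK.
      print("hull vertices:", len(hull.vertices),
            "triangles:", len(hull.simplices),
            "enclosed volume (mm^3):", round(hull.volume, 1))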

  13. A Tetraperylene Diimides Based 3D Nonfullerene Acceptor for Efficient Organic Photovoltaics.

    PubMed

    Liu, Shi-Yong; Wu, Chen-Hao; Li, Chang-Zhi; Liu, Sheng-Qiang; Wei, Kung-Hwa; Chen, Hong-Zheng; Jen, Alex K-Y

    2015-04-01

    A nonfullerene acceptor based on a 3D tetraperylene diimide is developed for bulk heterojunction organic photovoltaics. The disruption of perylene diimide planarity with a 3D framework suppresses the self-aggregation of perylene diimide and inhibits excimer formation. From planar monoperylene diimide to 3D tetraperylene diimide, a significant improvement of power conversion efficiency from 0.63% to 3.54% can be achieved.

  14. Geofencing-Based Localization for 3d Data Acquisition Navigation

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Kamio, T.; Yasojima, H.; Kobayashi, T.

    2016-06-01

    Users require navigation for many location-based applications using moving sensors, such as autonomous robot control, mapping route navigation and mobile infrastructure inspection. In indoor environments, indoor positioning systems using GNSSs can provide seamless indoor-outdoor positioning and navigation services. However, instabilities in sensor position data acquisition remain, because the indoor environment is more complex than the outdoor environment. On the other hand, simultaneous localization and mapping processing is better than indoor positioning for measurement accuracy and sensor cost. However, it is not easy to estimate position data from a single viewpoint directly. Based on these technical issues, we focus on geofencing techniques to improve position data acquisition. In this research, we propose a methodology to estimate more stable position or location data using unstable position data based on geofencing in indoor environments. We verify our methodology through experiments in indoor environments.

  15. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  16. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: in the first approach, researchers use sketch-based modeling; the second is procedural-grammar-based modeling; and the third approach is close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or into movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  17. Building a 3D scanner system based on monocular vision.

    PubMed

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.

  18. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
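
    ICER-3D's own integer wavelet and context modeller are not described in this record, but the basic idea of a three-dimensional wavelet decomposition of a hyperspectral cube can be sketched with PyWavelets; the wavelet choice, level count and thresholding below are illustrative, not the ICER-3D design.

      import numpy as np
      import pywt

      # Synthetic hyperspectral cube (bands, rows, cols) with smooth spatial content
      # and strong inter-band correlation.
      bands, rows, cols = 32, 64, 64
      r_idx, c_idx = np.mgrid[0:rows, 0:cols]
      base = np.sin(r_idx / 8.0) * np.cos(c_idx / 11.0)
      cube = np.stack([(1.0 + 0.01 * b) * base for b in range(bands)])

      # 3D multilevel wavelet decomposition across the spatial *and* spectral axes,
      # which is what lets a 3D coder exploit inter-band correlation.
      coeffs = pywt.wavedecn(cube, wavelet="db2", level=3)

      # Crude "compression" illustration: zero out the smallest 90% of coefficients
      # and measure the reconstruction error.
      arr, slices = pywt.coeffs_to_array(coeffs)
      thresh = np.quantile(np.abs(arr), 0.90)
      arr[np.abs(arr) < thresh] = 0.0
      recon = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
                            wavelet="db2")[:bands, :rows, :cols]
      print("relative reconstruction error:",
            float(np.abs(recon - cube).max() / np.abs(cube).max()))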

  19. 3D Mesh Segmentation Based on Markov Random Fields and Graph Cuts

    NASA Astrophysics Data System (ADS)

    Shi, Zhenfeng; Le, Dan; Yu, Liyang; Niu, Xiamu

    3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. However, only a few algorithms based on Markov Random Fields (MRF) have been presented for 3D object segmentation. In this letter, we present a definition of mesh segmentation as a labeling problem. Inspired by the capability of MRFs to combine the geometric and topological information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRFs and Graph Cuts. Experimental results show that our MRF-based scheme achieves an effective segmentation.

  20. Optical characterization of auto-stereoscopic 3D displays: interest of the resolution and comparison to human eye properties

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2014-02-01

    Optical characterization of multi-view auto-stereoscopic displays is realized using high angular resolution viewing angle measurements and imaging measurements. View to view and global qualified binocular viewing space are computed from viewing angle measurements and verified using imaging measurements. Crosstalk uniformity is also deduced and related to display imperfections.

  1. The Martian Water Cycle Based on 3-D Modeling

    NASA Technical Reports Server (NTRS)

    Houben, H.; Haberle, R. M.; Joshi, M. M.

    1999-01-01

    Understanding the distribution of Martian water is a major goal of the Mars Surveyor program. However, until the bulk of the data from the nominal missions of TES, PMIRR, GRS, MVACS, and the DS2 probes are available, we are bound to be in a state where much of our knowledge of the seasonal behavior of water is based on theoretical modeling. We therefore summarize the results of this modeling at the present time. The most complete calculations come from a somewhat simplified treatment of the Martian climate system which is capable of simulating many decades of weather. More elaborate meteorological models are now being applied to study of the problem. The results show a high degree of consistency with observations of aspects of the Martian water cycle made by Viking MAWD, a large number of ground-based measurements of atmospheric column water vapor, studies of Martian frosts, and the widespread occurrence of water ice clouds. Additional information is contained in the original extended abstract.

  2. 3D, wideband vibro-impacting-based piezoelectric energy harvester

    SciTech Connect

    Yu, Qiangmo; Yang, Jin; Yue, Xihai; Yang, Aichao; Zhao, Jiangxin; Zhao, Nian; Wen, Yumei; Li, Ping

    2015-04-15

    An impacting-based piezoelectric energy harvester was developed to address the limitations of existing approaches, namely single-dimensional operation and a narrow working bandwidth. In the harvester, a spiral cylindrical spring, rather than the conventional thin cantilever beam, is utilized to capture external vibration from arbitrary directions and has the capability to impact the surrounding piezoelectric beams to generate electricity. The introduced vibro-impacting between the spiral cylindrical spring and the multiple piezoelectric beams results not only in a three-dimensional response to external vibration, but also in bandwidth-broadening behavior. The experimental results showed that each piezoelectric beam exhibited a maximum bandwidth of 8 Hz and power of 41 μW at an acceleration of 1 g (with g = 9.8 m s⁻²) along the z-axis, and corresponding average values of 5 Hz and 45 μW at an acceleration of 0.6 g in the x-y plane.

  3. Literary and Historical 3D Digital Game-Based Learning: Design Guidelines

    ERIC Educational Resources Information Center

    Neville, David O.; Shelton, Brett E.

    2010-01-01

    As 3D digital game-based learning (3D-DGBL) for the teaching of literature and history gradually gains acceptance, important questions will need to be asked regarding its method of design, development, and deployment. This article offers a synthesis of contemporary pedagogical, instructional design, new media, and literary-historical theories to…

  4. Highly Stretchable and UV Curable Elastomers for Digital Light Processing Based 3D Printing.

    PubMed

    Patel, Dinesh K; Sakhaei, Amir Hosein; Layani, Michael; Zhang, Biao; Ge, Qi; Magdassi, Shlomo

    2017-04-01

    Stretchable UV-curable (SUV) elastomers can be stretched by up to 1100% and are suitable for digital-light-processing (DLP)-based 3D-printing technology. DLP printing of these SUV elastomers enables the direct creation of highly deformable complex 3D hollow structures such as balloons, soft actuators, grippers, and buckyball electronical switches.

  5. 3D Printing Factors Important for the Fabrication of Polyvinylalcohol Filament-Based Tablets.

    PubMed

    Tagami, Tatsuaki; Fukushige, Kaori; Ogawa, Emi; Hayashi, Naomi; Ozeki, Tetsuya

    2017-01-01

    Three-dimensional (3D) printers have been applied in many fields, including engineering and the medical sciences. In the pharmaceutical field, approval of the first 3D-printed tablet by the U.S. Food and Drug Administration in 2015 has attracted interest in the manufacture of tablets and drugs by 3D printing techniques as a means of delivering tailor-made drugs in the future. In the current study, polyvinylalcohol (PVA)-based tablets were prepared using a fused-deposition-modeling-type 3D printer, and the effect of 3D printing conditions on tablet production was investigated. Curcumin, a model drug/fluorescent marker, was loaded into the PVA filament. We found that several printing parameters, such as the rate of extruding PVA (flow rate), can affect the formability of the resulting PVA tablets. The 3D-printing temperature is controlled by heating the print nozzle and was shown to affect the color of the tablets and their curcumin content. PVA-based infilled tablets with different densities were prepared by changing the fill density as a printing parameter. Tablets with lower fill density floated in an aqueous solution and their curcumin content tended to dissolve faster. These findings will be useful in developing drug-loaded PVA-based 3D objects and other polymer-based articles prepared using fused-deposition-modeling-type 3D printers.

  6. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objective of this study was to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years for understanding a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show the results, which can be viewed as 3D anaglyphs of geoscience data in any web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom the anaglyph image in and out in a Web browser. Anaglyph 3D stereo imagery is a very important and easy way to understand the underground geologic system and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and anomalous active tectonic characteristics. To conclude, it can be stated that anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic
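
    The core rendering trick is simple enough to sketch offline: a red-cyan anaglyph keeps the red channel of the left view and the green/blue channels of the right view. The NumPy sketch below assumes two pre-aligned 8-bit RGB views (for instance an air-photo/DEM stereo pair) already loaded as arrays; the synthetic gradient images only stand in for real renderings.

      import numpy as np

      def red_cyan_anaglyph(left_rgb, right_rgb):
          """Combine two aligned RGB views (H, W, 3, uint8) into a red-cyan anaglyph:
          red channel from the left eye, green and blue channels from the right eye."""
          out = np.empty_like(left_rgb)
          out[..., 0] = left_rgb[..., 0]       # R  <- left view
          out[..., 1:] = right_rgb[..., 1:]    # GB <- right view
          return out

      # Synthetic stand-ins for a stereo pair rendered from a DEM; a small horizontal
      # shift between the views mimics parallax.
      h, w = 256, 256
      grad = np.tile(np.linspace(0, 255, w, dtype=np.uint8), (h, 1))
      left = np.stack([grad] * 3, axis=-1)
      right = np.roll(left, 8, axis=1)         # 8-pixel parallax shift

      anaglyph = red_cyan_anaglyph(left, right)
      print(anaglyph.shape, anaglyph.dtype)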

  7. 3D modeling of geological anomalies based on segmentation of multiattribute fusion

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-Ning; Song, Cheng-Yun; Li, Zhi-Yong; Cai, Han-Peng; Yao, Xing-Miao; Hu, Guang-Min

    2016-09-01

    3D modeling of geological bodies based on 3D seismic data is used to define the shape and volume of the bodies, which can then be directly applied to reservoir prediction, reserve estimation, and exploration. However, multiple attributes are not used effectively in 3D modeling. To solve this problem, we propose a novel method for building 3D models of geological anomalies based on the segmentation of multiattribute fusion. First, we divide the seismic attributes into edge- and region-based seismic attributes. Then, the segmentation model incorporating the edge- and region-based models is constructed within a level-set-based framework. Finally, the marching cubes algorithm is adopted to extract the zero level set based on the segmentation results and to build the 3D model of the geological anomaly. By combining the edge- and region-based attributes to build the segmentation model, we satisfy the independence requirement and avoid the problem that a single seismic attribute provides insufficient data for capturing the boundaries of geological anomalies. We apply the proposed method to seismic data from the Sichuan Basin in southwestern China and obtain 3D models of caves and channels. Compared with 3D models obtained from single seismic attributes, the results are in better agreement with reality.
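
    The final surface-extraction step maps directly onto the marching cubes routine available in scikit-image; the sketch below extracts the zero level set of a synthetic signed function standing in for the converged level-set field (the attribute-fusion segmentation itself is not reproduced here).

      import numpy as np
      from skimage.measure import marching_cubes

      # Synthetic signed level-set field phi: negative inside an ellipsoidal "cave",
      # positive outside, so the zero level set is the anomaly's boundary.
      z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
      phi = (x / 0.6) ** 2 + (y / 0.4) ** 2 + (z / 0.3) ** 2 - 1.0

      # Extract the zero iso-surface as a triangle mesh; `spacing` would carry the
      # real inline/crossline/depth sampling of the seismic volume.
      verts, faces, normals, values = marching_cubes(phi, level=0.0,
                                                     spacing=(1.0, 1.0, 1.0))
      print("vertices:", verts.shape[0], "triangles:", faces.shape[0])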

  8. Holographic 3D display observable for multiple simultaneous viewers from all horizontal directions by using a time division method.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2014-10-01

    A holographic three-dimensional display system with a viewing angle of 360°, by using a high-speed digital micromirror device (DMD), has been proposed. The wavefront modulated by the DMD enters a rotating mirror tilted vertically downward. The synchronization of the rotating mirror and holograms displayed on the DMD allows for the reconstruction of a wavefront propagating in all horizontal directions. An optical experiment has been demonstrated in order to verify our proposed system. Binocular vision is realized from anywhere within the horizontal plane. Our display system enables simultaneous observation by multiple viewers at an extremely close range.

  9. 3D-shape-based retrieval within the MPEG-7 framework

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-05-01

    Because of the continuous development of multimedia technologies, virtual worlds and augmented reality, 3D content is becoming a common feature of today's information systems. Hence, standardizing tools for content-based indexing of visual data is a key issue for computer-vision-related applications. Within the framework of the future MPEG-7 standard, tools for intelligent content-based access to 3D information, targeting applications such as search and retrieval and browsing of 3D model databases, have recently been considered and evaluated. In this paper, we present the 3D Shape Spectrum Descriptor (3D SSD), recently adopted within the current MPEG-7 Committee Draft (CD). The proposed descriptor aims at providing an intrinsic shape description of a 3D mesh and is defined as the distribution of the shape index over the entire mesh. The shape index is a local geometric attribute of a 3D surface, expressed as the angular coordinate of a polar representation of the principal curvature vector. Experimental results have been carried out upon the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a ground truth subset, are reported in terms of the Bull Eye Percentage (BEP) score.
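
    For reference, the shape index mentioned above is commonly written (following Koenderink) as s = (2/pi) * arctan((k1 + k2) / (k1 - k2)) for principal curvatures k1 >= k2, and the descriptor is then a histogram of s over the mesh. The sketch below computes that quantity from per-vertex principal curvatures; note that sign/offset conventions (for example the [0, 1]-valued MPEG-7 variant) differ between papers, and the curvature estimation itself is assumed to be done elsewhere.

      import numpy as np

      def shape_index(k1, k2, eps=1e-12):
          """Shape index from per-vertex principal curvatures.
          Uses s = (2/pi) * arctan((k1 + k2) / (k1 - k2)) with k1 >= k2; conventions
          vary between papers. Umbilic points (k1 == k2) are left undefined (NaN)."""
          k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
          s = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
          return np.where(k1 - k2 < eps, np.nan, s)

      def shape_spectrum(k1, k2, bins=100):
          """Histogram of the shape index over all vertices: a simple stand-in for
          the 3D Shape Spectrum Descriptor (without any MPEG-7-specific weighting)."""
          s = shape_index(k1, k2)
          hist, _ = np.histogram(s[~np.isnan(s)], bins=bins, range=(-1.0, 1.0),
                                 density=True)
          return hist

      # Toy curvatures for a handful of vertices.
      k1 = np.array([0.9, 0.5, 0.2, -0.1])
      k2 = np.array([0.8, -0.5, 0.2, -0.4])
      print(np.round(shape_index(k1, k2), 3))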

  10. Electrochemical signal amplification for immunosensor based on 3D interdigitated array electrodes.

    PubMed

    Han, Donghoon; Kim, Yang-Rae; Kang, Chung Mu; Chung, Taek Dong

    2014-06-17

    We devised an electrochemical redox cycling based on three-dimensional interdigitated array (3D IDA) electrodes for signal amplification to enhance the sensitivity of chip-based immunosensors. The 3D IDA consists of two closely spaced parallel indium tin oxide (ITO) electrodes that are positioned not only on the bottom but also the ceiling, facing each other along a microfluidic channel. We investigated the signal intensities from various geometric configurations: Open-2D IDA, Closed-2D IDA, and 3D IDA through electrochemical experiments and finite-element simulations. The 3D IDA among the four different systems exhibited the greatest signal amplification resulting from efficient redox cycling of electroactive species confined in the microchannel so that the faradaic current was augmented by a factor of ∼100. We exploited the enhanced sensitivity of the 3D IDA to build up a chronocoulometric immunosensing platform based on the sandwich enzyme-linked immunosorbent assay (ELISA) protocol. The mouse IgGs on the 3D IDA showed much lower detection limits than on the Closed-2D IDA. The detection limit for mouse IgG measured using the 3D IDA was ∼10 fg/mL, while it was ∼100 fg/mL for the Closed-2D IDA. Moreover, the proposed immunosensor system with the 3D IDA successfully worked for clinical analysis as shown by the sensitive detection of cardiac troponin I in human serum down to 100 fg/mL.

  11. INFORMATION DISPLAY: CONSIDERATIONS FOR DESIGNING COMPUTER-BASED DISPLAY SYSTEMS.

    SciTech Connect

    O'HARA,J.M.; PIRUS,D.; BELTRATCCHI,L.

    2004-09-19

    This paper discussed the presentation of information in computer-based control rooms. Issues associated with the typical displays currently in use are discussed. It is concluded that these displays should be augmented with new displays designed to better meet the information needs of plant personnel and to minimize the need for interface management tasks (the activities personnel have to do to access and organize the information they need). Several approaches to information design are discussed, specifically addressing: (1) monitoring, detection, and situation assessment; (2) routine task performance; and (3) teamwork, crew coordination, collaborative work.

  12. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis

    PubMed Central

    Huang, Yan Xin; Bao, Yong Li; Guo, Shu Yan; Wang, Yan; Zhou, Chun Guang; Li, Yu Xin

    2008-01-01

    Background The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope-analysis-based methods can predict not only linear but also conformational epitopes, and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. Results In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search in predicting epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used. The predictive performance of Pep-3D-Search was demonstrated to be superior to that of other

  13. Midsagittal plane extraction from brain images based on 3D SIFT

    NASA Astrophysics Data System (ADS)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-01

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.

  14. Development of a color 3D display visible to plural viewers at the same time without special glasses by using a ray-regenerating method

    NASA Astrophysics Data System (ADS)

    Hamagishi, Goro; Ando, Takahisa; Higashino, Masahiro; Yamashita, Atsuhiro; Mashitani, Ken; Inoue, Masutaka; Kishimoto, Shun-Ichi; Kobayashi, Tetsuro

    2002-05-01

    We have newly developed several kinds of auto-stereoscopic 3D displays adopting a ray-regenerating method. The method was originally invented at Osaka University in 1997. We adopted this method with an LCD. The display has a very simple construction: it consists of an LC panel with a very large number of pixels and many small light sources positioned behind the LC panel. We have examined the following new technologies: 1) optimum design of the optical system; 2) a suitable construction in order to realize a very large number of pixels; 3) a highly bright backlight system with an optical fiber array to compensate for the low lighting efficiency. 3D displays having a wide viewing area and visible to plural viewers were realized, but cross-talk images appeared more than we expected. By changing the construction of the system to reduce the diffusing factors of the generated rays, the cross-talk images were reduced dramatically. Within the limitation of the pixel count of the LCD, it is desirable to increase the number of pinholes to realize a realistic 3D image. This research formed a link in the chain of the national project by NEDO (New Energy and Industrial Technology Development Organization) in Japan.

  15. Comparison Between Two Generic 3d Building Reconstruction Approaches - Point Cloud Based VS. Image Processing Based

    NASA Astrophysics Data System (ADS)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

    This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in the 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more 3D points analyzed, the point-cloud-based approach is an order of magnitude more accurate for the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image-processing-based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent and more differentiated semantic annotation through exploitation of texture information.

  16. The National 3-D Geospatial Information Web-Based Service of Korea

    NASA Astrophysics Data System (ADS)

    Lee, D. T.; Kim, C. W.; Kang, I. G.

    2013-09-01

    3D geospatial information systems should provide efficient spatial analysis tools, be able to use all capabilities of the third dimension, and offer visualization. Currently, many human activities are taking steps toward the third dimension, such as land use, urban and landscape planning, cadastre, environmental monitoring, transportation monitoring, the real estate market, military applications, etc. To reflect this trend, the Korean government has started to construct 3D geospatial data and a service platform. Since geospatial information was introduced in Korea, the construction of geospatial information (3D geospatial information, digital maps, aerial photographs, ortho photographs, etc.) has been led by the central government. The purpose of this study is to introduce the Korean government-led 3D geospatial information web-based service for people interested in this industry, and we would like to present not only the current state of the constructed 3D geospatial data but also the methodologies and applications of 3D geospatial information. About 15% (about 3,278.74 km2) of the total urban area's 3D geospatial data was constructed by the National Geographic Information Institute (NGII) of Korea from 2005 to 2012. In particular, in six metropolitan cities and Dokdo (an island belonging to Korea), photo-realistic textured 3D models at level of detail (LOD) 4, including corresponding ortho photographs, were constructed in 2012. In this paper, we present the composition and infrastructure of the web-based 3D map service system, and a comparison of V-world with the Google Earth service is presented. We also present Open API based service cases and discuss the protection of location privacy when constructing 3D indoor building models. In order to prevent an invasion of privacy, we applied image blurring, elimination and camouflage. The importance of public-private cooperation and advanced geospatial information policy is emphasized in Korea. Thus, the progress of

  17. Fourier-based reconstruction for fully 3-D PET: optimization of interpolation parameters.

    PubMed

    Matej, Samuel; Kazantsev, Ivan G

    2006-07-01

    Fourier-based approaches for three-dimensional (3-D) reconstruction are based on the relationship between the 3-D Fourier transform (FT) of the volume and the two-dimensional (2-D) FT of a parallel-ray projection of the volume. The critical step in the Fourier-based methods is the estimation of the samples of the 3-D transform of the image from the samples of the 2-D transforms of the projections on the planes through the origin of Fourier space, and vice versa for forward-projection (reprojection). The Fourier-based approaches have the potential for very fast reconstruction, but their straightforward implementation might lead to unsatisfactory results if careful attention is not paid to interpolation and weighting functions. In our previous work, we have investigated optimal interpolation parameters for the Fourier-based forward and back-projectors for iterative image reconstruction. The optimized interpolation kernels were shown to provide excellent quality comparable to the ideal sinc interpolator. This work presents an optimization of interpolation parameters of the 3-D direct Fourier method with Fourier reprojection (3D-FRP) for fully 3-D positron emission tomography (PET) data with incomplete oblique projections. The reprojection step is needed for the estimation (from an initial image) of the missing portions of the oblique data. In the 3D-FRP implementation, we use the gridding interpolation strategy, combined with proper weighting approaches in the transform and image domains. We have found that while the 3-D reprojection step requires similar optimal interpolation parameters as found in our previous studies on Fourier-based iterative approaches, the optimal interpolation parameters for the main 3D-FRP reconstruction stage are quite different. Our experimental results confirm that for the optimal interpolation parameters a very good image accuracy can be achieved even without any extra spectral oversampling, which is a common practice to decrease errors
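
    The relationship the method builds on is the projection-slice theorem: the Fourier transform of a parallel-ray projection equals a central section of the volume's Fourier transform. The minimal numpy sketch below (not the paper's 3D-FRP implementation) verifies the 2-D analogue, where the 1-D FT of a projection matches the central line of the image's 2-D FT.

```python
import numpy as np

# smooth 2-D test image standing in for one slice of a volume
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x, indexing="ij")
img = np.exp(-(X**2 + Y**2) / 0.1) + 0.5 * np.exp(-((X - 0.3)**2 + (Y + 0.2)**2) / 0.05)

# 1-D parallel-ray projection along the second axis
proj = img.sum(axis=1)

# Fourier transform of the projection ...
ft_proj = np.fft.fft(proj)
# ... equals the central line (second frequency index = 0) of the 2-D FT
central_line = np.fft.fft2(img)[:, 0]

print(np.allclose(ft_proj, central_line))  # True (projection-slice theorem)
```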

  18. Binary 3D image interpolation algorithm based global information and adaptive curves fitting

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng

    2013-08-01

    Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform slice resolution. Conventional interpolation methods simply use two neighboring slices to obtain the missing slices between them; when a key slice is missing, such methods may fail to recover it because they employ only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation scheme can hardly produce a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm that takes advantage of global information and adaptively chooses the best curve from a large set of candidate curves according to the complexity of the object's surface. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.
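
    The adaptive curve-fitting scheme itself is not described in enough detail here to reproduce; as a point of reference, the classical shape-based interpolation baseline below (a SciPy sketch, not the authors' algorithm) shows how an intermediate binary slice can be generated from two neighboring slices via signed distance transforms.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Positive inside the object, negative outside (in pixels)."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Classical shape-based interpolation between two binary slices.

    t = 0 returns mask_a, t = 1 returns mask_b.
    """
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0

# toy example: two circles of different radii
yy, xx = np.mgrid[0:64, 0:64]
slice_a = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
slice_b = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
middle = interpolate_slice(slice_a, slice_b)          # roughly a radius-15 circle
print(middle.sum(), slice_a.sum(), slice_b.sum())
```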

  19. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation has been drawing increasing attention in the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches according to certain criteria to support reconstruction, compression, texture mapping, model retrieval, etc., so segmentation is a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a mesh model format) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models with K-means, whose performance depends heavily on the randomized initial seed points (centroids); different random initializations can yield quite different results. We therefore improved the existing method by using the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means and achieves good, meaningful results.
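
    A minimal sketch of the clustering stage is shown below, assuming hypothetical per-face features (centroid plus normal) rather than the paper's actual Collada attributes; scikit-learn's KMeans exposes the k-means++ initialisation directly.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical per-face features of a building mesh: face centroid (x, y, z)
# plus the unit face normal -- walls, roof planes and ground separate well
# in this 6-D space
rng = np.random.default_rng(42)
centroids = rng.uniform(0, 10, size=(500, 3))
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
features = np.hstack([centroids, normals])

# "k-means++" initialisation addresses the sensitivity to random seed points
# discussed in the abstract
km = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(features)
print(np.bincount(labels))  # number of faces assigned to each segment
```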

  20. 3D imaging of telomeres and nuclear architecture: An emerging tool of 3D nano-morphology-based diagnosis.

    PubMed

    Knecht, Hans; Mai, Sabine

    2011-04-01

    Patient samples are evaluated by experienced pathologists whose diagnosis guides treating physicians. Pathological diagnoses are complex and often assisted by the application of specific tissue markers. However, cases still exist where pathologists cannot distinguish between closely related entities or determine the aggressiveness of the disease they identify under the microscope. This is due to the absence of reliable markers that define diagnostic subgroups in several cancers. Three-dimensional (3D) imaging of nuclear telomere signatures is emerging as a new tool that may change this situation offering new opportunities to the patients. This article will review current and future avenues in the assessment of diagnostic patient samples.

  1. 3D printing optical watermark algorithms based on the combination of DWT and Fresnel transformation

    NASA Astrophysics Data System (ADS)

    Hu, Qi; Duan, Jin; Zhai, Di; Wang, LiNing

    2016-10-01

    With the continuous development of industrialization, 3D printing technology is gradually entering individuals' lives; however, the consequent security issues have become an urgent problem. This paper proposes a 3D printing optical watermarking algorithm based on the combination of the DWT and the Fresnel transform and uses an authorization key to restrict permission to print a 3D model. The algorithm first applies an affine transform to the 3D model and takes the distances from the center of gravity to the vertices of the 3D object to generate a one-dimensional discrete signal; this signal is then wavelet transformed and the resulting coefficients are passed through a Fresnel transform. A mathematical model is used to embed the watermark information, and finally a watermarked 3D digital model is generated. The implementation and tests were developed with VC++.NET and the DIRECTX 9.0 SDK, and the results show that, in a fixed affine space, the scheme is robust to translation, rotation and scaling of the 3D model and offers good watermark invisibility. The security and authorization of 3D models are thus protected effectively.
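
    The sketch below illustrates only the first stages described above (vertex-distance signal and DWT), with a toy additive embedding standing in for the unspecified Fresnel-domain embedding rule; it assumes the PyWavelets package and is not the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets

def vertex_distance_signal(vertices):
    """1-D signal: distance of every vertex from the model's centre of gravity."""
    centroid = vertices.mean(axis=0)
    return np.linalg.norm(vertices - centroid, axis=1)

def embed_bit(signal, bit, strength=0.05):
    """Toy embedding: nudge one approximation coefficient up or down."""
    approx, detail = pywt.dwt(signal, "haar")
    approx[0] += strength if bit else -strength
    return pywt.idwt(approx, detail, "haar")

# hypothetical vertex set of a small 3D model
rng = np.random.default_rng(1)
verts = rng.normal(size=(200, 3))

sig = vertex_distance_signal(verts)
marked = embed_bit(sig, bit=1)
print(np.max(np.abs(marked - sig)))  # perturbation stays small (watermark invisibility)
```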

  2. Interactive dynamic three-dimensional scene for the ground-based three-dimensional display

    NASA Astrophysics Data System (ADS)

    Hou, Peining; Sang, Xinzhu; Guo, Nan; Chen, Duo; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Three-dimensional (3D) displays provide valuable tools for many fields, such as scientific experiments, education, information transmission, medical imaging and physical simulation. A ground-based 360° 3D display with a dynamic, controllable scene has special applications, such as the design and construction of buildings, aeronautics and military sand tables, and can be used to evaluate and visualize the dynamic scene of a battlefield, a surgical operation or a 3D canvas of art. To achieve the ground-based 3D display, the common focus plane should be parallel to the cameras' imaging planes, and the optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. Virtual cameras are used to display the dynamic 3D scene with the Unity 3D engine. The parameters of the virtual cameras used to capture the scene are designed and analyzed, and the camera locations are determined by the observer's eye positions in the viewing space. An interactive dynamic 3D scene for the ground-based 360° 3D display is demonstrated, which provides highly immersive 3D visualization.

  3. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    NASA Astrophysics Data System (ADS)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they prove to be useful instruments for enhancing one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems has lain in the rich-media implementation techniques: each system had to be created individually, since reapplying the media, be it only a part or the whole content, was not directly possible and everything had to be done by hand. This makes E-Learning systems exceedingly expensive to produce, in both time and money. Media-3D, or M3D, is a new platform-independent programming language developed at the Fraunhofer Institute for Media Communication to enable the visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language that distinguishes 3D models from 3D scenes and provides facilities for animation within a programme. Here we give a technical account of the M3D programming language and briefly describe two application scenarios in which M3D is used to create virtual-reality E-Learning content for the training of technical personnel.

  4. Genome3D: a UK collaborative project to annotate genomic sequences with predicted 3D structures based on SCOP and CATH domains

    PubMed Central

    Lewis, Tony E.; Sillitoe, Ian; Andreeva, Antonina; Blundell, Tom L.; Buchan, Daniel W.A.; Chothia, Cyrus; Cuff, Alison; Dana, Jose M.; Filippis, Ioannis; Gough, Julian; Hunter, Sarah; Jones, David T.; Kelley, Lawrence A.; Kleywegt, Gerard J.; Minneci, Federico; Mitchell, Alex; Murzin, Alexey G.; Ochoa-Montaño, Bernardo; Rackham, Owen J. L.; Smith, James; Sternberg, Michael J. E.; Velankar, Sameer; Yeats, Corin; Orengo, Christine

    2013-01-01

    Genome3D, available at http://www.genome3d.eu, is a new collaborative project that integrates UK-based structural resources to provide a unique perspective on sequence–structure–function relationships. Leading structure prediction resources (DomSerf, FUGUE, Gene3D, pDomTHREADER, Phyre and SUPERFAMILY) provide annotations for UniProt sequences to indicate the locations of structural domains (structural annotations) and their 3D structures (structural models). Structural annotations and 3D model predictions are currently available for three model genomes (Homo sapiens, E. coli and baker’s yeast), and the project will extend to other genomes in the near future. As these resources exploit different strategies for predicting structures, the main aim of Genome3D is to enable comparisons between all the resources so that biologists can see where predictions agree and are therefore more trusted. Furthermore, as these methods differ in whether they build their predictions using CATH or SCOP, Genome3D also contains the first official mapping between these two databases. This has identified pairs of similar superfamilies from the two resources at various degrees of consensus (532 bronze pairs, 527 silver pairs and 370 gold pairs). PMID:23203986

  5. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    PubMed Central

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-01-01

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements. PMID:27657066

  6. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves.

    PubMed

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-09-19

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  7. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. Apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data; for classification, we study a support vector machine. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging, and 10-fold cross validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging achieves better classification accuracy than bruise detection based on 2-D imaging.
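
    A minimal sketch of the classification and evaluation stage is given below; the LBP descriptors extracted from the meshes are replaced by synthetic stand-in features, so it illustrates only the SVM plus 10-fold cross-validation protocol, not the paper's full pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# hypothetical stand-ins for LBP-style descriptors extracted from apple meshes
rng = np.random.default_rng(7)
X_bruised = rng.normal(loc=0.3, size=(80, 59))   # 59-bin histogram per sample
X_sound = rng.normal(loc=0.0, size=(80, 59))
X = np.vstack([X_bruised, X_sound])
y = np.array([1] * 80 + [0] * 80)                # 1 = bruised, 0 = sound

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=10)       # 10-fold cross validation
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```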

  8. Recent improvements in SPE3D: a VR-based surgery planning environment

    NASA Astrophysics Data System (ADS)

    Witkowski, Marcin; Sitnik, Robert; Verdonschot, Nico

    2014-02-01

    SPE3D is a surgery planning environment developed within the TLEMsafe project [1] (funded by the European Commission FP7). It enables the operator to plan a surgical procedure on a customized musculoskeletal (MS) model of the patient's lower limbs, send the modified model to the biomechanical analysis module, and export the scenario's parameters to the surgical navigation system. The personalized, patient-specific three-dimensional (3-D) MS model is registered with a 3-D MRI dataset of the lower limbs, and the two modalities may be visualized simultaneously. Apart from the main planes, any arbitrary MRI cross-section can be rendered on the 3-D MS model in real time. The interface provides tools for bone cutting, manipulation and removal, repositioning muscle insertion points, modifying muscle force, removing muscles and placing implants stored in the implant library. SPE3D supports stereoscopic viewing as well as natural inspection/manipulation using haptic devices. Alternatively, it may be controlled with a standard computer keyboard, mouse and 2D display or a touch screen (e.g. in an operating room). The interface may be utilized in two main fields: experienced surgeons may use it to simulate their operative plans and prepare input data for a surgical navigation system, while student or novice surgeons can use it for training.

  9. Fish body surface data measurement based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Qian, Chen; Yang, Wenkai

    2016-01-01

    When filming a moving fish in a glass tank, light is bent at the air-glass and glass-water interfaces. Based on binocular stereo vision and the principle of refraction, we establish a mathematical model of 3D image correlation to reconstruct the 3D coordinates of samples in the water. By marking speckle on the fish surface, a series of real-time speckle images of the swimming fish is obtained with two high-speed cameras, and the instantaneous 3D shape, strain, displacement, etc. of the fish are reconstructed.

  10. Optimized data processing for an optical 3D sensor based on flying triangulation

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja; Arold, Oliver; Häusler, Gerd; Gurov, Igor; Volkov, Mikhail

    2013-05-01

    We present data processing methods for an optical 3D sensor based on the measurement principle of "Flying Triangulation". The principle enables motion-robust acquisition of the 3D shape of even complex objects: a hand-held sensor is freely guided around the object while real-time feedback on the measurement progress is delivered during capture. Although of high precision, the resulting 3D data may exhibit some weaknesses: e.g., outliers might be present and the data size might be too large. We describe the measurement principle and the data processing and conclude with measurement results.
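
    One common way to address the outlier weakness mentioned above is a statistical filter that removes points whose mean distance to their nearest neighbours is unusually large; the sketch below (not the sensor's own pipeline) illustrates the idea with SciPy's k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-NN distance exceeds mean + std_ratio * std."""
    tree = cKDTree(points)
    # query k+1 neighbours because the closest neighbour is the point itself
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

rng = np.random.default_rng(3)
surface = rng.normal(scale=0.01, size=(5000, 3))   # dense "surface" patch
outliers = rng.uniform(-1, 1, size=(50, 3))        # sparse stray points
cloud = np.vstack([surface, outliers])
cleaned = remove_statistical_outliers(cloud)
print(len(cloud), "->", len(cleaned))
```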

  11. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  12. Fast and Precise 3D Fluorophore Localization based on Gradient Fitting

    PubMed Central

    Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang

    2015-01-01

    Astigmatism imaging approach has been widely used to encode the fluorophore’s 3D position in single-particle tracking and super-resolution localization microscopy. Here, we present a new high-speed localization algorithm based on gradient fitting to precisely decode the 3D subpixel position of the fluorophore. This algebraic algorithm determines the center of the fluorescent emitter by finding the position with the best-fit gradient direction distribution to the measured point spread function (PSF), and can retrieve the 3D subpixel position of the fluorophore in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields comparable localization precision to the traditional iterative Gaussian function fitting (GF) based method, while exhibits over two orders-of-magnitude faster execution speed. Our algorithm is a promising high-speed analyzing method for 3D particle tracking and super-resolution localization microscopy. PMID:26390959
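
    The core idea, in 2-D, is that every pixel's intensity gradient points along a line through the centre of a radially symmetric spot, so the centre is the weighted least-squares intersection of those lines. The sketch below illustrates this gradient-fitting step on a synthetic spot; the paper's additional astigmatism-based z-decoding is omitted, and this is not the authors' implementation.

```python
import numpy as np

def gradient_fit_center(img):
    """Least-squares intersection of intensity-gradient lines (2-D sketch).

    Every pixel defines a line through its position along its gradient
    direction; for a radially symmetric spot these lines meet at the centre.
    """
    gy, gx = np.gradient(img.astype(float))
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mag = np.hypot(gx, gy)
    mask = mag > 1e-3                       # ignore flat background pixels
    gxn, gyn = gx[mask] / mag[mask], gy[mask] / mag[mask]
    px, py = xx[mask].astype(float), yy[mask].astype(float)
    w = mag[mask]                           # weight by gradient magnitude

    # accumulate A c = b with A = sum w (I - g g^T), b = sum w (I - g g^T) p
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for wi, gxi, gyi, pxi, pyi in zip(w, gxn, gyn, px, py):
        P = np.eye(2) - np.outer([gxi, gyi], [gxi, gyi])
        A += wi * P
        b += wi * (P @ np.array([pxi, pyi]))
    cx, cy = np.linalg.solve(A, b)
    return cx, cy

# synthetic Gaussian spot with a subpixel centre at (20.3, 17.6)
yy, xx = np.mgrid[0:40, 0:40]
spot = np.exp(-(((xx - 20.3) ** 2) + ((yy - 17.6) ** 2)) / (2 * 2.0 ** 2))
print(gradient_fit_center(spot))            # close to (20.3, 17.6)
```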

  13. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties.

    PubMed

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B

    2016-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it does not provide or use luminance. The method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces, and it naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.

  14. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties

    PubMed Central

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B.

    2017-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it does not provide or use luminance. The method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces, and it naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented. PMID:28220057

  15. Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.

    PubMed

    Fang, Cheng; Xiao, Zhiyan

    2016-01-01

    Receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods, and two alternative approaches have been exploited. One involves computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other concerns the application of various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for the conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR and, in particular, covers its application in kinase studies.

  16. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
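
    A minimal sketch of the sparse-representation classification (SRC) step is shown below: a probe descriptor is coded over the gallery dictionary and assigned to the class with the smallest class-wise reconstruction residual. It uses orthogonal matching pursuit from scikit-learn and synthetic descriptors, so it illustrates the principle rather than the meshSIFT-based 3DMKDSRC system.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(probe, gallery, labels, n_nonzero=10):
    """Sparse-representation classification (SRC) of one descriptor vector.

    gallery: (n_samples, n_features) dictionary of gallery descriptors,
    labels:  class label of every gallery descriptor.
    Returns the class with the smallest class-wise reconstruction residual.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(gallery.T, probe)                 # columns of the dictionary = atoms
    coef = omp.coef_
    best_cls, best_res = None, np.inf
    for cls in np.unique(labels):
        part = np.zeros_like(coef)
        part[labels == cls] = coef[labels == cls]
        residual = np.linalg.norm(probe - gallery.T @ part)
        if residual < best_res:
            best_cls, best_res = cls, residual
    return best_cls

# toy gallery: 3 identities x 20 descriptors of dimension 128
rng = np.random.default_rng(5)
means = rng.normal(size=(3, 128))
gallery = np.vstack([means[i] + 0.1 * rng.normal(size=(20, 128)) for i in range(3)])
labels = np.repeat([0, 1, 2], 20)
probe = means[1] + 0.1 * rng.normal(size=128)
print(src_classify(probe, gallery, labels))   # expected: 1
```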

  17. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.

  18. The Effect of Interocular Contrast and Ocular Dominance on the Perception of Motion-in-Depth in 3-D Displays

    DTIC Science & Technology

    1981-08-01

    [Only fragmented figure-caption text survives for this record, listing the components of the optical apparatus: multiple-element camera lenses (L1-L4), a lens pair (LP), an artificial pupil (AP), a display screen (DS), and a servo-driven mirror that rotates the visual field vertically.]

  19. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasingly more importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion-compensation based on 2-D/3-D registration is both feasible and accurate.

  20. Graphene Oxide-Based Electrode Inks for 3D-Printed Lithium-Ion Batteries.

    PubMed

    Fu, Kun; Wang, Yibo; Yan, Chaoyi; Yao, Yonggang; Chen, Yanan; Dai, Jiaqi; Lacey, Steven; Wang, Yanbin; Wan, Jiayu; Li, Tian; Wang, Zhengyang; Xu, Yue; Hu, Liangbing

    2016-04-06

    All-component 3D-printed lithium-ion batteries are fabricated by printing graphene-oxide-based composite inks and a solid-state gel polymer electrolyte. An entirely 3D-printed full cell features a high electrode mass loading of 18 mg cm⁻², normalized to the overall area of the battery. This all-component printing can be extended to the fabrication of multidimensional/multiscale complex structures for more energy-storage devices.

  1. Real-Time Display Of 3-D Computed Holograms By Scanning The Image Of An Acousto-Optic Modulator

    NASA Astrophysics Data System (ADS)

    Kollin, Joel S.; Benton, Stephen A.; Jepsen, Mary Lou

    1989-10-01

    The invention of holography has sparked hopes for three-dimensional electronic imaging systems analogous to television. Unfortunately, the extraordinary spatial detail of ordinary holographic recordings requires unattainable bandwidth and display resolution for three-dimensional moving imagery, effectively preventing their commercial development. However, the essential bandwidth of holographic images can be reduced enough to permit their transmission through fiber optic or coaxial cable, and the required resolution or space-bandwidth product of the display can be obtained by raster scanning the image of a commercially available acousto-optic modulator. No film recording or other photographic intermediate step is necessary, as the projected modulator image is viewed directly. The design and construction of a working demonstration of the principles involved is also presented, along with a discussion of engineering considerations in the system design. Finally, the theoretical and practical limitations of the system are addressed in the context of extending the system to real-time transmission of moving holograms synthesized from views of real and computer-generated three-dimensional scenes.

  2. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in the case of inaccurate or missing 3D information, which may distort the results of the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detecting changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which 3D change detection can be performed against an existing 3D model. Our approach is capable of performing change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps with an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
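
    The comparison step itself reduces to differencing the image-based and model-rendered depth maps at a common pose and thresholding the result; the sketch below assumes both depth maps are already available and is not the authors' full real-time system.

```python
import numpy as np

def depth_change_mask(depth_estimated, depth_rendered, threshold=0.5):
    """Flag pixels whose image-based depth departs from the 3D-model depth.

    depth_estimated: dense depth map from image-based matching (metres)
    depth_rendered:  depth map rendered from the existing 3D model at the
                     same camera pose (metres)
    """
    valid = np.isfinite(depth_estimated) & np.isfinite(depth_rendered)
    diff = np.abs(depth_estimated - depth_rendered)
    return valid & (diff > threshold)

# toy example: a 0.8 m-deep "new object" occupies a block of the scene
rendered = np.full((120, 160), 10.0)          # flat wall 10 m away in the model
estimated = rendered.copy()
estimated[40:80, 60:100] -= 0.8               # something new in front of it
changes = depth_change_mask(estimated, rendered, threshold=0.5)
print(changes.sum(), "changed pixels")        # 40 * 40 = 1600
```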

  3. 3D printing of mineral-polymer bone substitutes based on sodium alginate and calcium phosphate.

    PubMed

    Egorov, Aleksey A; Fedotov, Alexander Yu; Mironov, Anton V; Komlev, Vladimir S; Popov, Vladimir K; Zobkov, Yury V

    2016-01-01

    We demonstrate a relatively simple route for three-dimensional (3D) printing of complex-shaped biocompatible structures based on sodium alginate and calcium phosphate (CP) for bone tissue engineering. The fabrication of 3D composite structures was performed through the synthesis of inorganic particles within a biopolymer macromolecular network during 3D printing process. The formation of a new CP phase was studied through X-ray diffraction, Fourier transform infrared spectroscopy and scanning electron microscopy. Both the phase composition and the diameter of the CP particles depend on the concentration of a liquid component (i.e., the "ink"). The 3D printed structures were fabricated and found to have large interconnected porous systems (mean diameter ≈800 μm) and were found to possess compressive strengths from 0.45 to 1.0 MPa. This new approach can be effectively applied for fabrication of biocompatible scaffolds for bone tissue engineering constructions.

  4. 3D point cloud registration based on the assistant camera and Harris-SIFT

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Yu, HongYang

    2016-07-01

    Three-dimensional (3D) point cloud registration is a hot topic in the field of 3D reconstruction, but many existing registration methods are neither real-time nor effective. This paper proposes a point cloud registration method for 3D reconstruction based on Harris-SIFT features and an assistant camera. The assistant camera is used to pinpoint the mobile 3D reconstruction device. The feature points of the images are detected with the Harris operator, the main orientation of each feature point is calculated, and finally the feature descriptors are generated after rotating the descriptor coordinates relative to the feature points' main orientations. Experimental results demonstrate the effectiveness of the proposed method.
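
    The feature stage described above can be sketched with OpenCV as below: Harris corners are detected first and then described with SIFT. The image path is a placeholder, and the pose estimation from the assistant camera and the actual cloud registration are not shown.

```python
import cv2
import numpy as np

def harris_sift_descriptors(gray, harris_quality=0.01):
    """Detect Harris corners, then describe them with SIFT descriptors.

    gray: 8-bit single-channel image.
    Returns (keypoints, descriptors) usable for feature matching.
    """
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > harris_quality * response.max())
    keypoints = [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.compute(gray, keypoints)
    return keypoints, descriptors

# usage sketch (the file name is a placeholder)
img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
if img is not None:
    kp, desc = harris_sift_descriptors(img)
    print(len(kp), "keypoints,", None if desc is None else desc.shape)
```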

  5. 3D printing of mineral–polymer bone substitutes based on sodium alginate and calcium phosphate

    PubMed Central

    Egorov, Aleksey A; Fedotov, Alexander Yu; Mironov, Anton V; Popov, Vladimir K; Zobkov, Yury V

    2016-01-01

    We demonstrate a relatively simple route for three-dimensional (3D) printing of complex-shaped biocompatible structures based on sodium alginate and calcium phosphate (CP) for bone tissue engineering. The fabrication of 3D composite structures was performed through the synthesis of inorganic particles within a biopolymer macromolecular network during 3D printing process. The formation of a new CP phase was studied through X-ray diffraction, Fourier transform infrared spectroscopy and scanning electron microscopy. Both the phase composition and the diameter of the CP particles depend on the concentration of a liquid component (i.e., the “ink”). The 3D printed structures were fabricated and found to have large interconnected porous systems (mean diameter ≈800 μm) and were found to possess compressive strengths from 0.45 to 1.0 MPa. This new approach can be effectively applied for fabrication of biocompatible scaffolds for bone tissue engineering constructions. PMID:28144529

  6. Correspondenceless 3D-2D registration based on expectation conditional maximization

    NASA Astrophysics Data System (ADS)

    Kang, X.; Taylor, R. H.; Armand, M.; Otake, Y.; Yau, W. P.; Cheung, P. Y. S.; Hu, Y.

    2011-03-01

    3D-2D registration is a fundamental task in image guided interventions. Due to the physics of the X-ray imaging, however, traditional point based methods meet new challenges, where the local point features are indistinguishable, creating difficulties in establishing correspondence between 2D image feature points and 3D model points. In this paper, we propose a novel method to accomplish 3D-2D registration without known correspondences. Given a set of 3D and 2D unmatched points, this is achieved by introducing correspondence probabilities that we model as a mixture model. By casting it into the expectation conditional maximization framework, without establishing one-to-one point correspondences, we can iteratively refine the registration parameters. The method has been tested on 100 real X-ray images. The experiments showed that the proposed method accurately estimated the rotations (< 1°) and in-plane (X-Y plane) translations (< 1 mm).

  7. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot's autonomous navigation system while it walks over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the facts that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
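
    Once matched edge pixel pairs are available, the binocular imaging model for a rectified stereo pair reduces to Z = f·B/d. The sketch below shows only this depth-recovery step with assumed camera parameters; the watershed/fuzzy-c-means matching front end is not reproduced.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard rectified binocular model: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

def pixel_to_3d(u, v, depth, focal_px, cx, cy):
    """Back-project a pixel with known depth into camera coordinates."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return np.stack([x, y, depth], axis=-1)

# toy numbers: 700 px focal length, 12 cm baseline
disp = np.array([[35.0, 0.0], [70.0, 14.0]])     # 0 marks an invalid match
Z = disparity_to_depth(disp, focal_px=700.0, baseline_m=0.12)
print(Z)                                         # [[2.4, inf], [1.2, 6.0]]
print(pixel_to_3d(400, 260, Z[0, 0], 700.0, cx=320.0, cy=240.0))
```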

  8. Volumetric CT-based segmentation of NSCLC using 3D-Slicer

    PubMed Central

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H.; van Baardwijk, Angela; Fennessy, Fiona M.; Lewis, John H.; De Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J. W. L.

    2013-01-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using the competitive region-growing based algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared the 3D-Slicer segmented volumes by three independent observers, who segmented the primary tumour of 20 NSCLC patients twice, to manual slice-by-slice delineations of five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered as the “gold standard”. The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95%CI, 0.81–0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck. PMID:24346241

  9. Robot navigation in cluttered 3-D environments using preference-based fuzzy behaviors.

    PubMed

    Shi, Dongqing; Collins, Emmanuel G; Dunlap, Damion

    2007-12-01

    Autonomous navigation systems for mobile robots have been successfully deployed for a wide range of planar ground-based tasks. However, very few counterparts of previous planar navigation systems were developed for 3-D motion, which is needed for both unmanned aerial and underwater vehicles. A novel fuzzy behavioral scheme for navigating an unmanned helicopter in cluttered 3-D spaces is developed. The 3-D navigation problem is decomposed into several identical 2-D navigation subproblems, each of which is solved by using preference-based fuzzy behaviors. Due to the shortcomings of vector summation during the fusion of the 2-D subproblems, instead of directly outputting steering subdirections by their own defuzzification processes, the intermediate preferences of the subproblems are fused to create a 3-D solution region, representing degrees of preference for the robot movement. A new defuzzification algorithm that steers the robot by finding the centroid of a 3-D convex region of maximum volume in the 3-D solution region is developed. A fuzzy speed-control system is also developed to ensure efficient and safe navigation. Substantial simulations have been carried out to demonstrate that the proposed algorithm can smoothly and effectively guide an unmanned helicopter through unknown and cluttered urban and forest environments.

  10. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  11. Volumetric CT-based segmentation of NSCLC using 3D-Slicer

    NASA Astrophysics Data System (ADS)

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H.; van Baardwijk, Angela; Fennessy, Fiona M.; Lewis, John H.; de Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J. W. L.

    2013-12-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using the competitive region-growing based algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared the 3D-Slicer segmented volumes by three independent observers, who segmented the primary tumour of 20 NSCLC patients twice, to manual slice-by-slice delineations of five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered as the "gold standard". The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95%CI, 0.81-0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck.

  12. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  13. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing detailed 3D structures of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition a video sequence into groups, where each group consists of two successive frames (an image pair) and each pair contains one overlapping part, which is used as the stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SfM). Second, a scheme based on SIFT features registers and stitches the obtained 3D point clouds by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Third, we select the most robust SIFT feature points as seed points and obtain a dense point cloud from the sparse point cloud via the depth-testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
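
    The stitching step estimates a rigid transform between overlapping partial reconstructions from matched feature points. The sketch below shows a standard least-squares (Kabsch) estimate for already-matched 3D point pairs; correspondence search, SfM, densification and Poisson reconstruction are not reproduced here.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t (Kabsch).

    src, dst: (N, 3) arrays of matched 3D points, e.g. SIFT-matched points
    in the overlapping region of two partial reconstructions.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation + translation from matched points
rng = np.random.default_rng(11)
src = rng.normal(size=(100, 3))
angle = np.deg2rad(25)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.3, -0.1, 0.7])
R_est, t_est = rigid_transform_3d(src, dst)
print(np.allclose(R_est, R_true), np.round(t_est, 3))
```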

  14. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and have multi-view representations. State-of-the-art methods depend heavily on their own camera array settings for capturing views of the 3-D object and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. An efficient and effective algorithm is therefore required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, i.e., views captured from any direction without any camera array restriction. The views (including the query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in a twofold manner: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
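
    The twofold use of the HMM can be sketched with the external hmmlearn package: fit a Gaussian HMM on the query object's view-feature sequence, then rank database objects by their log-likelihood under that model. The view descriptors below are synthetic stand-ins and the clustering stage is omitted, so this illustrates the retrieval principle rather than the EVBOR implementation.

```python
import numpy as np
from hmmlearn import hmm          # external package: hmmlearn

rng = np.random.default_rng(2)

def view_features(center, n_views=30, dim=8):
    """Stand-in for per-view descriptors of one 3-D object (one row per view)."""
    return center + 0.1 * rng.normal(size=(n_views, dim))

# query object and two candidate database objects
centers = rng.normal(size=(2, 8))
query_views = view_features(centers[0])
candidates = [view_features(centers[0]), view_features(centers[1])]

# training ("HMM estimate"): fit a Gaussian HMM on the query's view sequence
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(query_views)

# retrieval ("HMM decode"): rank candidates by log-likelihood under the model
scores = [model.score(views) for views in candidates]
print(scores, "-> best match:", int(np.argmax(scores)))   # expected: 0
```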

  15. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2014-03-01

    This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in CT images and then estimates the prior of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector to localize the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and backgrounds using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually input nine principal types of inner organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.

  16. 3D printed microfluidic circuitry via multijet-based additive manufacturing†

    PubMed Central

    Sochol, R. D.; Sweet, E.; Glick, C. C.; Venkatesh, S.; Avetisyan, A.; Ekman, K. F.; Raulinaitis, A.; Tsai, A.; Wienkers, A.; Korner, K.; Hanson, K.; Long, A.; Hightower, B. J.; Slatton, G.; Burnett, D. C.; Massey, T. L.; Iwai, K.; Lee, L. P.; Pister, K. S. J.; Lin, L.

    2016-01-01

    The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or “three-dimensional (3D) printing” technologies – predominantly stereolithography – as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) – a layer-by-layer, multi-material inkjetting process – for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community. PMID:26725379

  17. 3D printed microfluidic circuitry via multijet-based additive manufacturing.

    PubMed

    Sochol, R D; Sweet, E; Glick, C C; Venkatesh, S; Avetisyan, A; Ekman, K F; Raulinaitis, A; Tsai, A; Wienkers, A; Korner, K; Hanson, K; Long, A; Hightower, B J; Slatton, G; Burnett, D C; Massey, T L; Iwai, K; Lee, L P; Pister, K S J; Lin, L

    2016-02-21

    The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or "three-dimensional (3D) printing" technologies - predominantly stereolithography - as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) - a layer-by-layer, multi-material inkjetting process - for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community.

  18. 3D digitization methods based on laser excitation and active triangulation: a comparison

    NASA Astrophysics Data System (ADS)

    Aubreton, Olivier; Mériaudeau, Fabrice; Truchetet, Frédéric

    2016-04-01

    3D reconstruction of surfaces is an important topic in computer vision and corresponds to a large field of applications: industrial inspection, reverse engineering, object recognition, biometry, archeology… Because of the large variety of applications, one can find in the literature many approaches, which can be classified into two families: passive and active [1]. Certainly because of their reliability, active approaches, which use an imaging system with an additional controlled light source, seem to be the most commonly used in the industrial field. In this domain, the 3D digitization approach based on active 3D triangulation has seen important developments during the last ten years [2] and seems mature today, considering the large number of systems proposed by manufacturers. Unfortunately, the performance of active 3D scanners depends on the optical properties of the surface to be digitized. As an example, in Fig. 1a, a 3D shape with a diffuse surface has been digitized with the Comet V scanner (Steinbichler). The 3D reconstruction is presented in Fig. 1b. The same experiment was carried out on a similar object (same shape) but with a specular surface (Fig. 1c and Fig. 1d); it can clearly be observed that the specularity influences the performance of the digitization.

  19. Alignment, segmentation and 3-D reconstruction of serial sections based on automated algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Weiguo; Tang, Shaojie; Xu, Qiong; Lian, Qin; Wang, Jin; Li, Dichen

    2012-12-01

    A well-defined three-dimensional (3-D) reconstruction of bone-cartilage transitional structures is crucial for osteochondral restoration. This paper presents an accurate, computationally efficient and fully automated algorithm for the alignment and segmentation of two-dimensional (2-D) serial sections to construct the 3-D model of bone-cartilage transitional structures. The entire system includes the following five components: (1) image harvest, (2) image registration, (3) image segmentation, (4) 3-D reconstruction and visualization, and (5) evaluation. A computer program was developed in the Matlab environment for the automatic alignment and segmentation of serial sections. The automatic alignment algorithm is based on the cross-correlation of the positions of anatomical characteristic feature points in two sequential sections. A method combining automatic segmentation and image thresholding was applied to capture the regions and structures of interest. An SEM micrograph and a 3-D model reconstructed directly with a digital microscope were used to evaluate the reliability and accuracy of this strategy. The morphology of the 3-D model constructed from serial sections is consistent with the SEM micrograph and the digital-microscope 3-D model.
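
    The alignment step described above can be approximated with off-the-shelf tools. The sketch below is a minimal illustration, assuming scikit-image and SciPy are available; it estimates a translation between consecutive sections by cross-correlation of the whole images rather than of extracted anatomical feature points, so it is a simplification of the method described here.

```python
# Minimal translation-only alignment of serial sections via cross-correlation.
# Whole-image phase correlation stands in for the paper's feature-point matching.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_serial_sections(sections):
    """sections: list of 2-D grayscale images; returns translation-aligned copies."""
    aligned = [sections[0].astype(float)]
    for img in sections[1:]:
        # Estimate the shift that maps the current section onto the previous, aligned one.
        offset, _, _ = phase_cross_correlation(aligned[-1], img.astype(float))
        aligned.append(nd_shift(img.astype(float), offset))
    return aligned
```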

  20. 3D finite element analysis of porous Ti-based alloy prostheses.

    PubMed

    Mircheski, Ile; Gradišar, Marko

    2016-11-01

    In this paper, novel designs of porous acetabular cups are created and tested with 3D finite element analysis (FEA). The aim is to develop a porous acetabular cup with a low effective radial stiffness of the structure, close to the architectural and mechanical behavior of natural bone. For the realization of this research, 3D-scanner technology was used to obtain a 3D-CAD model of the pelvis bone, 3D-CAD software to create a porous acetabular cup, and 3D-FEA software for virtual testing of a novel design of the porous acetabular cup. The results obtained from this research reveal that a porous acetabular cup made from Ti-based alloys with 60 ± 5% porosity has a mechanical behavior and effective radial stiffness (Young's modulus in the radial direction) that meet and exceed the required properties of natural bone. Virtual testing with 3D-FEA of a novel porous design during the very early stage of the design and development of orthopedic implants enables obtaining a new or improved biomedical implant in a relatively short time and at a reduced cost.

  1. Superabsorbent 3D Scaffold Based on Electrospun Nanofibers for Cartilage Tissue Engineering.

    PubMed

    Chen, Weiming; Chen, Shuai; Morsi, Yosry; El-Hamshary, Hany; El-Newhy, Mohamed; Fan, Cunyi; Mo, Xiumei

    2016-09-21

    Electrospun nanofibers have been used for various biomedical applications. However, electrospinning commonly produces two-dimensional (2D) membranes, which limits the application of nanofibers as 3D tissue engineering scaffolds. In the present study, a porous 3D scaffold (3DS-1) based on electrospun gelatin/PLA nanofibers has been prepared for cartilage tissue regeneration. To further improve the repair of cartilage, a modified scaffold (3DS-2) cross-linked with hyaluronic acid (HA) was also successfully fabricated. The nanofibrous structure, water absorption, and compressive mechanical properties of the 3D scaffolds were studied. Chondrocytes were cultured on the 3D scaffolds, and their viability and morphology were examined. The 3D scaffolds were also subjected to an in vivo cartilage regeneration study in rabbits using an articular cartilage injury model. The results indicated that 3DS-1 and 3DS-2 exhibited superabsorbent properties and excellent cytocompatibility. Both scaffolds are elastic in the wet state. The in vivo study showed that 3DS-2 could enhance the repair of cartilage. The present 3D nanofibrous scaffold (3DS-2) would therefore be promising for cartilage tissue engineering applications.

  2. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
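
    The core sharing step, substituting encoded coordinate values into a polynomial over a prime field, is essentially Shamir-style threshold sharing. Below is a minimal, generic sketch (not the authors' implementation) for a single encoded integer value; the prime, the threshold and the function names are illustrative, and the modular inverse via pow requires Python 3.8+.

```python
# Shamir-style (k, n) threshold sharing of one encoded coordinate value in [0, p-1].
# The prime P and the parameter names are illustrative, not the paper's values.
import random

P = 257  # small prime for illustration; the scheme only requires a predefined prime p

def make_shares(secret, k, n, p=P):
    """Split an integer secret in [0, p-1] into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    # The share of participant x is the sharing polynomial evaluated at x (x = 1..n).
    return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def recover(shares, p=P):
    """Lagrange interpolation at 0 recovers the secret from any k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % p
                den = (den * (xj - xm)) % p
        secret = (secret + yj * num * pow(den, -1, p)) % p  # modular inverse (Python 3.8+)
    return secret

shares = make_shares(123, k=3, n=5)
print(recover(shares[:3]))  # -> 123
```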

  3. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) were analysed quantitatively. This system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever-arm, and object space feature parameters was developed and the effectiveness of this calibration approach was explored.

  4. 3D IC TSV-Based Technology: Stress Assessment For Chip Performance

    NASA Astrophysics Data System (ADS)

    Sukharev, Valeriy; Kteyan, Armen; Khachatryan, Nikolay; Hovsepyan, Henrik; Torres, Juan Andres; Choy, Jun-Ho; Markosian, Ara

    2010-11-01

    Potential challenges with managing mechanical stress distributions and the consequent effects on device performance for advanced 3D through-silicon-via (TSV) based technologies are outlined. A set of physics-based compact models of a multi-scale simulation flow for assessment of the mechanical stress across the device layers in the silicon chips stacked and packaged with the 3D TSV technology is proposed. A calibration technique based on fitting to measured transistor electrical characteristics of a custom designed test-chip is proposed.

  5. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  6. Graph-based segmentation for RGB-D data using 3-D geometry enhanced superpixels.

    PubMed

    Yang, Jingyu; Gan, Ziqiao; Li, Kun; Hou, Chunping

    2015-05-01

    With the advances of depth sensing technologies, color image plus depth information (referred to as RGB-D data hereafter) is more and more popular for comprehensive description of 3-D scenes. This paper proposes a two-stage segmentation method for RGB-D data: 1) oversegmentation by 3-D geometry enhanced superpixels and 2) graph-based merging with label cost from superpixels. In the oversegmentation stage, 3-D geometrical information is reconstructed from the depth map. Then, a K-means-like clustering method is applied to the RGB-D data for oversegmentation using an 8-D distance metric constructed from both color and 3-D geometrical information. In the merging stage, treating each superpixel as a node, a graph-based model is set up to relabel the superpixels into semantically-coherent segments. In the graph-based model, RGB-D proximity, texture similarity, and boundary continuity are incorporated into the smoothness term to exploit the correlations of neighboring superpixels. To obtain a compact labeling, the label term is designed to penalize labels linking to similar superpixels that likely belong to the same object. Both the proposed 3-D geometry enhanced superpixel clustering method and the graph-based merging method from superpixels are evaluated by qualitative and quantitative results. By the fusion of color and depth information, the proposed method achieves superior segmentation performance over several state-of-the-art algorithms.
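
    As a rough illustration of the oversegmentation stage, the sketch below runs a plain K-means-like clustering over a combined per-pixel feature vector. The specific 8-D composition shown (3 color channels, 3 reconstructed 3-D coordinates, 2 pixel coordinates) and the weights are assumptions for illustration, not the authors' exact metric, and the loop is written for clarity rather than speed.

```python
# K-means-like oversegmentation on an 8-D feature per pixel (illustrative metric):
# 3 color channels + 3 reconstructed 3-D coordinates + 2 pixel coordinates.
import numpy as np

def rgbd_superpixels(rgb, xyz, n_segments=200, w_color=1.0, w_geom=2.0, w_pix=0.5, iters=10):
    """rgb: (H, W, 3) image; xyz: (H, W, 3) 3-D points reconstructed from the depth map."""
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate([
        w_color * rgb.reshape(-1, 3).astype(float),
        w_geom * xyz.reshape(-1, 3).astype(float),
        w_pix * np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float),
    ], axis=1)                                           # (H*W, 8) feature matrix
    rng = np.random.default_rng(0)
    centers = feats[rng.choice(len(feats), n_segments, replace=False)]
    for _ in range(iters):                               # plain Lloyd iterations
        d = np.stack([((feats - c) ** 2).sum(1) for c in centers], axis=1)
        labels = d.argmin(1)
        for k in range(n_segments):
            members = feats[labels == k]
            if len(members):
                centers[k] = members.mean(0)
    return labels.reshape(h, w)
```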

  7. Displaying Geographically-Based Domestic Statistics

    NASA Technical Reports Server (NTRS)

    Quann, J.; Dalton, J.; Banks, M.; Helfer, D.; Szczur, M.; Winkert, G.; Billingsley, J.; Borgstede, R.; Chen, J.; Chen, L.; Fuh, J.; Cyprych, E.

    1982-01-01

    Decision Information Display System (DIDS) is rapid-response information-retrieval and color-graphics display system. DIDS transforms tables of geographically-based domestic statistics (such as population or unemployment by county, energy usage by county, or air-quality figures) into high-resolution, color-coded maps on television display screen.

  8. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  9. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    NASA Astrophysics Data System (ADS)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a 'goodness of matching' function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  10. Shape-based 3D vascular tree extraction for perforator flaps

    NASA Astrophysics Data System (ADS)

    Wen, Quan; Gao, Jean

    2005-04-01

    Perforator flaps have been increasingly used in the past few years for trauma and reconstructive surgical cases. With the thinned perforated flaps, greater survivability and a decrease in donor site morbidity have been reported. Knowledge of the 3D vascular tree will provide insight into the dissection region, vascular territory, and fascia levels. This paper presents a scheme for shape-based 3D vascular tree reconstruction of perforator flaps for plastic surgery planning, which overcomes the deficiencies of existing shape-based interpolation methods by applying rotation and 3D repairing. The scheme has the ability to restore the broken parts of the perforator vascular tree by using a probability-based adaptive connection point search (PACPS) algorithm with minimum human intervention. The experimental results, evaluated on both synthetic data and 39 harvested cadaver perforator flaps, show the promise and potential of the proposed scheme for plastic surgery planning.

  11. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting scheme provides an integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
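
    The reversibility that the lifting implementation provides can be seen in a one-dimensional, Haar-like integer lifting step. The sketch below is a generic illustration of integer-to-integer lifting, not the particular 3-D filter bank used in the paper.

```python
# Reversible (integer-to-integer) Haar-like lifting step in 1-D.
import numpy as np

def haar_lift_forward(x):
    """Split x into approximation s and detail d with integer lifting (lossless)."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = odd - even                # predict step
    s = even + (d >> 1)           # update step (integer floor of d/2)
    return s, d

def haar_lift_inverse(s, d):
    even = s - (d >> 1)           # undo update
    odd = d + even                # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 9, 7, 15, 14, 3, 5])
s, d = haar_lift_forward(x)
assert np.array_equal(haar_lift_inverse(s, d), x)   # perfect reconstruction
```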

  12. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    ERIC Educational Resources Information Center

    Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

    The advancement of game-based learning has encouraged many related studies, such that students could better learn curriculum by 3-dimension virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning achievement. A…

  13. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  14. Complete calibration of a phase-based 3D imaging system based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Meng, Shasha; Ma, Haiyan; Zhang, Zonghua; Guo, Tong; Zhang, Sixiang; Hu, Xiaotang

    2011-11-01

    Phase calculation-based 3D imaging systems have been widely studied because of the advantages of non-contact operation, full-field measurement, fast acquisition and automatic data processing. A vital step is calibration, which builds up the relationship between the phase map and the range image. The existing calibration methods are complicated because they use a precise translating stage or a 3D gauge block. Recently, we presented a simple method to convert phase into depth data by using a polynomial function and a plate having discrete markers on the surface with known distance in between. However, the initial position of all the markers needs to be determined manually and the X, Y coordinates are not calibrated. This paper presents a complete calibration method for phase calculation-based 3D imaging systems that uses a plate having discrete markers on the surface with known distance in between. The absolute phase of each pixel can be calculated by projecting fringe patterns onto the plate. Each marker position can be determined by an automatic extraction algorithm, so the relative depth of each pixel to a chosen reference plane can be obtained. Therefore, the coefficient set of the polynomial function for each pixel is determined by using the obtained absolute phase and depth data. Meanwhile, pixel positions and the X, Y coordinates can be established from the parameters of the CCD camera. Experimental results and performance evaluation show that the proposed calibration method can easily build up the relationship between the absolute phase map and the range image in a simple way.
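
    The per-pixel polynomial mapping from absolute phase to depth can be sketched as follows, assuming calibration data (absolute phase and known depth) are available at each pixel for several plate positions; the polynomial degree and array names are illustrative.

```python
# Per-pixel polynomial mapping from absolute phase to depth (illustrative degree = 3).
import numpy as np

def fit_phase_to_depth(phases, depths, degree=3):
    """phases, depths: (n_positions, H, W) calibration stacks; returns per-pixel coefficients."""
    n, H, W = phases.shape
    coeffs = np.empty((degree + 1, H, W))
    for i in range(H):
        for j in range(W):
            coeffs[:, i, j] = np.polyfit(phases[:, i, j], depths[:, i, j], degree)
    return coeffs

def phase_to_depth(phase_map, coeffs):
    """Evaluate the fitted per-pixel polynomial on a measured absolute phase map."""
    degree = coeffs.shape[0] - 1
    depth = np.zeros_like(phase_map, dtype=float)
    for k in range(degree + 1):
        depth += coeffs[k] * phase_map ** (degree - k)   # np.polyfit stores highest order first
    return depth
```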

  15. 3D printing of weft knitted textile based structures by selective laser sintering of nylon powder

    NASA Astrophysics Data System (ADS)

    Beecroft, M.

    2016-07-01

    3D printing is a form of additive manufacturing whereby objects are created by building up layers of material. The selective laser sintering (SLS) process uses a laser beam to sinter powdered material to create objects. This paper builds upon previous research into 3D printed textile-based materials, exploring the use of SLS with nylon powder to create flexible weft knitted structures. The results show the potential to print flexible textile-based structures that exhibit the properties of traditional knitted textile structures along with the mechanical properties of the material used; the challenges regarding the fineness of printing resolution are also described. The conclusion highlights the potential future development and application of such pieces.

  16. Mechanosensing of cells in 3D gel matrices based on natural and synthetic materials.

    PubMed

    Shan, Jieling; Chi, Qingjia; Wang, Hongbing; Huang, Qiping; Yang, Li; Yu, Guanglei; Zou, Xiaobing

    2014-11-01

    Cells in vivo are typically found in 3D matrices, whose mechanical stiffness is important to cell- and tissue-scale biological processes. Although how cells sense matrix stiffness on 2D substrates is well characterized, the scenario in 3D matrices remains to be explored. Thus, materials that can mimic native 3D environments and possess a wide, physiologically relevant range of elasticity are highly desirable. Natural polymer-based materials and synthetic hydrogels could provide better 3D platforms to investigate the mechano-response of cells, with stiffness comparable to their native environments. However, a limited stiffness range, together with the interdependence of matrix stiffness and adhesive ligand density, is inherent in many kinds of materials and hinders efforts to demonstrate the true effects contributed by matrix stiffness. These problems have been addressed by recently emerging, exquisitely designed materials based on native matrix components, designer matrices, and synthetic polymers. In this review, a variety of materials with a wide stiffness range that mimic the mechanical environment of native 3D matrices are discussed, along with the independent effects of stiffness on cellular behavior and tissue-level processes.

  17. Laser nanostructuring 3-D bioconstruction based on carbon nanotubes in a water matrix of albumin

    NASA Astrophysics Data System (ADS)

    Gerasimenko, Alexander Y.; Ichkitidze, Levan P.; Podgaetsky, Vitaly M.; Savelyev, Mikhail S.; Selishchev, Sergey V.

    2016-04-01

    3-D bioconstructions were created by evaporating a water-albumin solution containing carbon nanotubes (CNTs) under continuous and pulsed femtosecond laser radiation. It was determined that the volume structure of the samples created by the femtosecond radiation has more cavities than the one created by the continuous radiation. The average diameter for multi-walled carbon nanotube (MWCNT) samples was almost two times higher (35-40 nm) than for single-walled carbon nanotube (SWCNT) samples (20-30 nm). The most homogeneous 3-D bioconstruction was formed from MWCNTs by the continuous laser radiation. The hardness of such samples reached 370 MPa at the nanoscale. The high strength and resistance of the 3-D bioconstructions produced by the laser irradiation depend on the volume nanotube scaffold forming inside them. The scaffold was formed by the electric field of the directed laser irradiation. The covalent bond energy between the nanotube carbon molecule and the oxygen of the bovine serum albumin amino acid residue amounts to 580 kJ/mol. The 3-D bioconstructions based on MWCNTs and SWCNTs become overgrown with cells (fibroblasts) over the course of 72 hours. The samples based on both types of CNTs are not toxic to the cells and do not change their normal composition and structure. Thus, the 3-D bioconstructions nanostructured by the pulsed and continuous laser radiation can be applied as implant materials for the recovery of the connective tissues of the living body.

  18. High Accuracy Acquisition of 3-D Flight Trajectory of Individual Insect Based on Phase Measurement.

    PubMed

    Hu, Cheng; Deng, Yunkai; Wang, Rui; Liu, Changjiang; Long, Teng

    2016-12-17

    Accurate acquisition of the 3-D flight trajectory of an individual insect could benefit research on insect migration behaviors and the development of migratory entomology. This paper proposes a novel method to acquire the 3-D flight trajectory of an individual insect. First, based on high range resolution synthesis and Doppler coherent processing, insects can be detected effectively, and the range resolution and velocity resolution are combined to discriminate insects. Then, high accuracy range measurement using the carrier phase is proposed. The range measurement accuracy can reach the millimeter level and significantly benefits the acquisition of 3-D trajectory information. Finally, based on multi-baseline interferometry theory, the azimuth and elevation angles can be obtained with high accuracy. Simulation results prove that the retrieval accuracy of a simulated target's 3-D coordinates can reach the centimeter level. Experiments using S-band radar in an anechoic chamber were carried out, and the results showed that the insects' flight behaviors and the variation of their 3-D coordinates matched the practical cases well. In conclusion, both the simulated and experimental datasets validate the feasibility of the proposed method, which could be a novel way of monitoring the flight trajectory of aerial free-flying insects.
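
    Under a simple assumed two-antenna geometry, the relationship between interferometric phase and angle of arrival, and between two-way carrier phase and range, can be sketched as below. The carrier frequency, baseline and symbols are assumptions for illustration and do not reproduce the authors' full processing chain.

```python
# Assumed geometry: two antennas separated by a baseline d, S-band carrier.
import numpy as np

C = 3.0e8                     # speed of light, m/s
F_CARRIER = 3.0e9             # example S-band carrier frequency (assumption), Hz
LAM = C / F_CARRIER           # wavelength, m

def angle_from_phase(delta_phi, baseline):
    """Angle of arrival (rad) from an unwrapped inter-antenna phase difference (rad)."""
    return np.arcsin(delta_phi * LAM / (2.0 * np.pi * baseline))

def range_increment_from_phase(delta_phi):
    """Range change (m) implied by a two-way carrier-phase change (rad): phi = 4*pi*R/lambda."""
    return delta_phi * LAM / (4.0 * np.pi)

print(np.degrees(angle_from_phase(np.pi / 4, baseline=0.5)))   # ~1.4 degrees
```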

  19. High Accuracy Acquisition of 3-D Flight Trajectory of Individual Insect Based on Phase Measurement

    PubMed Central

    Hu, Cheng; Deng, Yunkai; Wang, Rui; Liu, Changjiang; Long, Teng

    2016-01-01

    Accurate acquisition of the 3-D flight trajectory of an individual insect could benefit research on insect migration behaviors and the development of migratory entomology. This paper proposes a novel method to acquire the 3-D flight trajectory of an individual insect. First, based on high range resolution synthesis and Doppler coherent processing, insects can be detected effectively, and the range resolution and velocity resolution are combined to discriminate insects. Then, high accuracy range measurement using the carrier phase is proposed. The range measurement accuracy can reach the millimeter level and significantly benefits the acquisition of 3-D trajectory information. Finally, based on multi-baseline interferometry theory, the azimuth and elevation angles can be obtained with high accuracy. Simulation results prove that the retrieval accuracy of a simulated target's 3-D coordinates can reach the centimeter level. Experiments using S-band radar in an anechoic chamber were carried out, and the results showed that the insects' flight behaviors and the variation of their 3-D coordinates matched the practical cases well. In conclusion, both the simulated and experimental datasets validate the feasibility of the proposed method, which could be a novel way of monitoring the flight trajectory of aerial free-flying insects. PMID:27999317

  20. Relevance of PEG in PLA-based blends for tissue engineering 3D-printed scaffolds.

    PubMed

    Serra, Tiziano; Ortiz-Hernandez, Monica; Engel, Elisabeth; Planell, Josep A; Navarro, Melba

    2014-05-01

    Achieving high quality 3D-printed structures requires establishing the right printing conditions. Finding processing conditions that satisfy both the fabrication process and the final required scaffold properties is crucial. This work stresses the importance of studying the outcome of the plasticizing effect of PEG on PLA-based blends used for the fabrication of 3D-direct-printed scaffolds for tissue engineering applications. For this, PLA/PEG blends with 5, 10 and 20% (w/w) of PEG and PLA/PEG/bioactive CaP glass composites were processed in the form of 3D rapid prototyping scaffolds. Surface analysis and differential scanning calorimetry revealed a rearrangement of polymer chains and a topography, wettability and elastic modulus increase of the studied surfaces as PEG was incorporated. Moreover, addition of 10 and 20% PEG led to non-uniform 3D structures with lower mechanical properties. In vitro degradation studies showed that the inclusion of PEG significantly accelerated the degradation rate of the material. Results indicated that the presence of PEG not only improves PLA processing but also leads to relevant surface, geometrical and structural changes including modulation of the degradation rate of PLA-based 3D printed scaffolds.

  1. 3D texture-based classification applied on brain white matter lesions on MR images

    NASA Astrophysics Data System (ADS)

    Leite, Mariana; Gobbi, David; Salluzi, Marina; Frayne, Richard; Lotufo, Roberto; Rittner, Letícia

    2016-03-01

    Lesions in the brain white matter are among the most frequently observed incidental findings on MR images. This paper presents a 3D texture-based classification to distinguish normal-appearing white matter from white matter containing lesions, and compares it with the 2D approach. Texture analysis was based on 55 texture attributes extracted from the gray-level histogram, gray-level co-occurrence matrix, run-length matrix and gradient. The results show that the 3D approach achieves an accuracy rate of 99.28%, against 97.41% for the 2D approach, using a support vector machine classifier. Furthermore, the most discriminating texture attributes in both the 2D and 3D cases were obtained from the image histogram and co-occurrence matrix.
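
    In the spirit of the histogram and co-occurrence attributes described above, the sketch below extracts a small 2-D feature set and trains a support vector machine. It is a simplified stand-in for the full 55-attribute 3-D analysis and assumes scikit-image (0.19+) and scikit-learn are installed.

```python
# Histogram + co-occurrence features per 2-D patch, fed to an SVM classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(patch, levels=32):
    """Simple texture descriptor for one gray-level patch (a stand-in for the 55 attributes)."""
    q = (patch.astype(float) / max(patch.max(), 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    feats = [q.mean(), q.std()]                           # histogram statistics
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.extend(graycoprops(glcm, prop).ravel())     # co-occurrence attributes
    return np.array(feats)

def train_classifier(patches, labels):
    """labels: 0 = normal-appearing white matter, 1 = lesion (assumed convention)."""
    X = np.stack([texture_features(p) for p in patches])
    return SVC(kernel="rbf").fit(X, labels)
```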

  2. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    PubMed

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time but caused more critical structure injuries than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators, for large lesions, and for lesions near critical structures.

  3. Gelatin-based 3D conduits for transdifferentiation of mesenchymal stem cells into Schwann cell-like phenotypes.

    PubMed

    Uz, Metin; Büyüköz, Melda; Sharma, Anup D; Sakaguchi, Donald S; Altinkaya, Sacide Alsoy; Mallapragada, Surya K

    2017-02-16

    In this study, gelatin-based 3D conduits with three different microstructures (nanofibrous, macroporous and ladder-like) were fabricated for the first time via combined molding and thermally induced phase separation (TIPS) technique for peripheral nerve regeneration. The effects of conduit microstructure and mechanical properties on the transdifferentiation of bone marrow-derived mesenchymal stem cells (MSCs) into Schwann cell (SC) like phenotypes were examined to help facilitate neuroregeneration and understand material-cell interfaces. Results indicated that 3D macroporous and ladder-like structures enhanced MSC attachment, proliferation and spreading, creating interconnected cellular networks with large numbers of viable cells compared to nanofibrous and 2D-tissue culture plate counterparts. 3D-ladder-like conduit structure with complex modulus of ∼0.4×10^6 Pa and pore size of ∼150 μm provided the most favorable microenvironment for MSC transdifferentiation leading to ∼85% immunolabeling of all SC markers. On the other hand, the macroporous conduits with complex modulus of ∼4×10^6 Pa and pore size of ∼100 μm showed slightly lower (∼65% for p75, ∼75% for S100 and ∼85% for S100β markers) immunolabeling. Transdifferentiated MSCs within 3D-ladder-like conduits secreted significant amounts (∼2.5 pg/mL NGF and ∼0.7 pg/mL GDNF per cell) of neurotrophic factors, while MSCs in macroporous conduits released slightly lower (∼1.5 pg/mL NGF and 0.7 pg/mL GDNF per cell) levels. PC12 cells displayed enhanced neurite outgrowth in media conditioned by conduits with transdifferentiated MSCs. Overall, conduits with macroporous and ladder-like 3D structures are promising platforms in transdifferentiation of MSCs for neuroregeneration and should be further tested in vivo.

  4. 3D SERS imaging based on chemically-synthesized highly-symmetric nanoporous silver microparticles

    NASA Astrophysics Data System (ADS)

    Ozaki, Yukihiro; Vantasin, Sanpon; Ji, Wei; Tanaka, Yoshito; Kitahama, Yasutaka; Wongrawee, Kanet; Ekgasit, Sanong

    2016-09-01

    This study presents the synthesis, the SERS properties in three dimensions, and an application of 3D symmetric nanoporous silver microparticles. The particles are synthesized by a purely chemical process: controlled precipitation of AgCl to acquire highly symmetric AgCl microparticles, followed by in-place conversion of the AgCl into nanoporous silver. The particles display a highly predictable SERS enhancement pattern in three dimensions, which resembles the particle shape and retains its symmetry. The highly regular enhancement pattern enables an application in the study of inhomogeneity in a two-layer polymer system by improving the spatial resolution along the Z axis.

  5. Gradient-based 3D-2D registration of cerebral angiograms

    NASA Astrophysics Data System (ADS)

    Mitrović, Uroš; Markelj, Primož; Likar, Boštjan; Miloševič, Zoran; Pernuš, Franjo

    2011-03-01

    Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system into the brain and into the aneurysm or AVM. Intra-interventional navigation utilizes digital subtraction angiography (DSA) to visualize vascular structures and X-ray fluoroscopy to localize the endovascular components. Due to the two-dimensional (2D) nature of the intra-interventional images, navigation through a complex three-dimensional (3D) structure is a demanding task. Registration of pre-interventional MRA, CTA, or 3D-DSA images and intra-interventional 2D DSA images can greatly enhance visualization and navigation. As a consequence of better navigation in 3D, the amount of required contrast medium and the absorbed dose could be significantly reduced. In the past, the development and evaluation of 3D-2D registration methods received considerable attention, and several validation image databases and evaluation criteria were created and made publicly available. However, applications of 3D-2D registration methods to cerebral angiograms and their validation are rather scarce. In this paper, the 3D-2D robust gradient reconstruction-based (RGRB) registration algorithm is applied to CTA and DSA images and analyzed. For the evaluation, five image datasets, each comprising a 3D CTA and several 2D DSA-like digitally reconstructed radiographs (DRRs) generated from the CTA, with accurate gold standard registrations, were created. A total of 4000 registrations on these five datasets resulted in mean mTRE values between 0.07 and 0.59 mm, capture ranges between 6 and 11 mm and success rates between 61 and 88% using a failure threshold of 2 mm.
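
    The evaluation criteria quoted above (mean target registration error and success rate at a 2 mm failure threshold) can be computed as in the following sketch, assuming target points plus 4x4 homogeneous estimated and gold-standard transforms are given; names are illustrative.

```python
# Mean target registration error and success rate at a failure threshold.
import numpy as np

def mtre(points, T_est, T_gold):
    """points: (N, 3) target coordinates; T_est, T_gold: 4x4 homogeneous transforms."""
    p = np.c_[points, np.ones(len(points))]
    err = np.linalg.norm((p @ T_est.T - p @ T_gold.T)[:, :3], axis=1)
    return err.mean()

def success_rate(mtre_values, threshold_mm=2.0):
    m = np.asarray(mtre_values, dtype=float)
    return 100.0 * np.count_nonzero(m < threshold_mm) / m.size
```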

  6. Lithologic identification & mapping test based on 3D inversion of magnetic and gravity

    NASA Astrophysics Data System (ADS)

    Yan, Jiayong; Lv, Qingtian; Qi, Guang; Zhao, Jinhua; Zhang, Yongqian

    2016-04-01

    Lithologic identification and mapping that make an ore concentration district "transparent" to a depth of 5 km are the main way to study deep fine structures, to explore deep mineral resources, and to reveal the metallogenic regularity of a large-scale ore district. Owing to the wide area coverage, the high sampling density, and the mature three-dimensional inversion algorithms for gravity and magnetic data, gravity and magnetic inversion is the most practical way to achieve three-dimensional lithologic mapping at the present stage. In this paper, we take the Lu-Zong ore district (Lujiang county to Zongyang county, Anhui province, east China) as a case study, propose a lithologic mapping flow based on 3D inversion of gravity and magnetic data, and then carry out a lithologic mapping test. The lithologic identification and mapping flow is as follows: 1. Analyze the relations between lithology and density and magnetic susceptibility using cross plots. 2. Extract appropriate residual anomalies from high-precision Bouguer gravity and aeromagnetic data. 3. On the same mesh, perform 3D magnetic and gravity inversions separately under prior-information constraints to obtain 3D susceptibility and density models. 4. Based on step 1, construct logical topology operations between the density and susceptibility 3D models. 5. Using these logical operations, identify lithologies cell by cell in the 3D mesh to obtain a 3D lithological model (see the sketch below). Following this flow, we obtained the three-dimensional distribution of the five main lithologies in the Lu-Zong ore district within 5 km depth. The lithologic mapping result not only shows that the shallow features basically coincide with surface geological mapping but, more importantly, also reveals the deeper lithologic changes. The lithological model makes up for the insufficiency of surface geological mapping. The lithologic mapping test in the Lu-Zong ore concentration district shows that lithological mapping using 3D inversion of gravity and magnetic data is an effective method to reveal the
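
    A minimal sketch of steps 4 and 5 of the flow above: each cell of the common inversion mesh is assigned a lithology label by combining the recovered density and susceptibility models through logical (threshold) operations. The thresholds and class labels below are placeholders chosen for illustration, not the Lu-Zong values.

```python
# Cell-by-cell lithology labelling from inverted density and susceptibility models.
import numpy as np

def classify_lithology(density, susceptibility):
    """density (g/cm^3) and susceptibility (SI): 3-D arrays on the same inversion mesh."""
    litho = np.zeros(density.shape, dtype=np.uint8)            # 0 = unclassified
    litho[(density > 2.9) & (susceptibility > 0.05)] = 1       # e.g. dense, magnetic intrusive
    litho[(density > 2.9) & (susceptibility <= 0.05)] = 2      # e.g. dense, weakly magnetic unit
    litho[(density <= 2.9) & (susceptibility > 0.05)] = 3      # e.g. magnetic volcanic unit
    litho[density <= 2.6] = 4                                  # e.g. low-density sedimentary cover
    return litho
```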

  7. 2-D Versus 3-D Cross-Correlation-Based Radial and Circumferential Strain Estimation Using Multiplane 2-D Ultrafast Ultrasound in a 3-D Atherosclerotic Carotid Artery Model.

    PubMed

    Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L

    2016-10-01

    Three-dimensional (3-D) strain estimation might improve the detection and localization of high strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation, embedded in surrounding tissue, generated with ABAQUS software. Global longitudinal motion was superimposed on the model based on literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated by tracking the IF displacements estimated with a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2-D technique.
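
    The least-squares strain estimation mentioned above can be illustrated in one dimension: within a sliding kernel, a line is fitted to accumulated displacement versus position and its slope is taken as the local strain. The kernel size and array names below are assumptions.

```python
# Sliding-window least-squares strain: slope of displacement vs. position.
import numpy as np

def ls_strain(displacement, positions, kernel=9):
    """displacement, positions: 1-D arrays sampled along one direction (e.g. radial)."""
    half = kernel // 2
    strain = np.full(displacement.size, np.nan)
    for i in range(half, displacement.size - half):
        x = positions[i - half:i + half + 1]
        u = displacement[i - half:i + half + 1]
        slope, _ = np.polyfit(x, u, 1)        # least-squares line; slope approximates du/dx
        strain[i] = slope
    return strain
```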

  8. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    PubMed

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency.
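
    A separable 3-D Walsh-Hadamard transform over the stacked coil images can be sketched as below, assuming power-of-two dimensions along every axis. Natural (Hadamard) ordering is used for simplicity rather than sequency (Walsh) ordering, so this is an illustration of the idea rather than the exact basis in the paper.

```python
# Separable, orthonormal 3-D Hadamard transform over a stacked coil volume.
import numpy as np
from scipy.linalg import hadamard

def walsh_hadamard_3d(volume):
    """volume: (H, W, n_coils) array with power-of-two sizes along every axis."""
    out = volume.astype(float)
    for axis, n in enumerate(out.shape):
        H = hadamard(n) / np.sqrt(n)          # orthonormal 1-D transform matrix
        out = np.moveaxis(np.tensordot(H, np.moveaxis(out, axis, 0), axes=(1, 0)), 0, axis)
    return out

def inverse_walsh_hadamard_3d(coeffs):
    # The normalized Hadamard matrix is symmetric and orthogonal, so the transform is self-inverse.
    return walsh_hadamard_3d(coeffs)
```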

  9. Electro-bending characterization of adaptive 3D fiber reinforced plastics based on shape memory alloys

    NASA Astrophysics Data System (ADS)

    Ashir, Moniruddoza; Hahn, Lars; Kluge, Axel; Nocke, Andreas; Cherif, Chokri

    2016-03-01

    The industrial importance of fiber reinforced plastics (FRPs), which are mostly used in niche products, has been growing steadily in recent years. The integration of sensors and actuators in FRP is potentially valuable for creating innovative applications, and therefore the market acceptance of adaptive FRP is increasing. In particular, in the field of highly stressed FRP, structurally integrated systems for continuous monitoring of component parts play an important role. This work focuses on the electro-mechanical characterization of adaptive three-dimensional (3D) FRP with integrated textile-based actuators. Here, a friction spun hybrid yarn, consisting of a shape memory alloy (SMA) wire as the core, serves as the actuator. Because of the shape memory effect, the SMA-hybrid yarn returns to its original shape upon heating, which in turn causes the deformation of the adaptive 3D FRP. In order to investigate the deformation behavior of the adaptive 3D FRP, the investigations in this research are varied according to structural parameters such as the radius of curvature of the adaptive 3D FRP, the fabric type and the number of fabric layers in the composite. Results show that reproducible deformations can be realized with adaptive 3D FRP and that the structural parameters have a significant impact on the deformation capability.

  10. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
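
    The local surface fit and curvature evaluation can be illustrated with a plain least-squares paraboloid (the paper uses a weighted, moving least-squares fit): z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f is fitted to a neighbourhood expressed in a local frame, and Gaussian and mean curvature are evaluated at the centre from the coefficients. Function and variable names are illustrative.

```python
# Plain least-squares paraboloid fit and curvature at the centre of a local neighbourhood.
import numpy as np

def local_curvatures(neigh_xyz):
    """neigh_xyz: (N, 3) neighbour coordinates in a local frame centred on the query point."""
    x, y, z = neigh_xyz[:, 0], neigh_xyz[:, 1], neigh_xyz[:, 2]
    A = np.c_[x**2, y**2, x*y, x, y, np.ones_like(x)]
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    zx, zy, zxx, zyy, zxy = d, e, 2*a, 2*b, c           # surface derivatives at the origin
    denom = 1.0 + zx**2 + zy**2
    K = (zxx*zyy - zxy**2) / denom**2                   # Gaussian curvature
    H = ((1 + zy**2)*zxx - 2*zx*zy*zxy + (1 + zx**2)*zyy) / (2 * denom**1.5)  # mean curvature
    return K, H
```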

  11. 3D watershed-based segmentation of internal structures within MR brain images

    NASA Astrophysics Data System (ADS)

    Bueno, Gloria; Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

    2000-06-01

    In this paper an image-based method founded on mathematical morphology is presented in order to facilitate the segmentation of cerebral structures on 3D magnetic resonance images (MRIs). The segmentation is described as an immersion simulation, applied to the modified gradient image, modeled by a generated 3D region adjacency graph (RAG). The segmentation relies on two main processes: homotopy modification and contour decision. The first one is achieved by a marker extraction stage where homogeneous 3D regions are identified in order to attribute an influence zone only to relevant minima of the image. This stage uses contrasted regions from morphological reconstruction and labeled flat regions constrained by the RAG. The goal of the decision stage is to precisely locate the contours of regions detected by the marker extraction. This decision is performed by a 3D extension of the watershed transform. Upon completion of the segmentation, the outcome of the preceding process is presented to the user for manual selection of the structures of interest (SOI). Results of this approach are described and illustrated with examples of segmented 3D MRIs of the human head.
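
    The two stages described above (marker extraction on homogeneous regions, then a watershed of the gradient image) can be approximated with off-the-shelf routines as in the sketch below. It uses scikit-image instead of the authors' RAG-constrained implementation, and the intensity thresholds are illustrative.

```python
# Marker-controlled 3-D watershed on a gradient image (slice-wise Sobel for simplicity).
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_watershed_3d(volume, low, high):
    """volume: 3-D MR image; low/high: illustrative intensity thresholds for the markers."""
    gradient = np.stack([sobel(s) for s in volume])     # gradient magnitude, slice by slice
    markers = np.zeros(volume.shape, dtype=np.int32)
    markers[volume < low] = 1                           # background marker
    markers[volume > high] = 2                          # marker for bright structures of interest
    return watershed(gradient, markers)                 # immersion simulation on the gradient
```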

  12. 3-D world modeling based on combinatorial geometry for autonomous robot navigation

    SciTech Connect

    Goldstein, M.; Pin, F.G.; de Saussure, G.; Weisbin, C.R.

    1987-01-01

    In applications of robotics to surveillance and mapping at nuclear facilities, the scene to be described is fundamentally three-dimensional. Usually, only partial information concerning the 3-D environment is known a priori. Using an autonomous robot, this information may be updated with range data to provide an accurate model of the environment. Range data quantify the distances from the sensor focal plane to the object surface. In other words, the 3-D coordinates of discrete points on the object surface are known. The approach proposed herein for 3-D world modeling is based on the Combinatorial Geometry (C.G.) Method