Science.gov

Sample records for 3-d image display

  1. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    A detailed review is given in this paper of current 3D display methods for sequential 2D medical images and of new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping, and distributed collaborative rendering are discussed in depth. Different 3D display methods are presented for two kinds of medical applications: real-time navigation systems and high-fidelity diagnosis in computer-aided surgery.

  2. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting substantial research effort. Above all, 3D monitors should provide observers with different perspectives of a 3D scene as they simply vary their head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. Remarkably, some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems: the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  3. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major physiological and psychological depth cues. Owing to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which merge into a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  4. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By applying the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capture parameters can be converted into an identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and the real-world scene, with the desired depth information and transparency parameters. The experimental results demonstrate the feasibility of the proposed 3D augmented reality with integral imaging.
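    The merging step described above can be sketched as depth-aware alpha compositing of elemental images. This is a minimal illustration, not the authors' actual algorithm: the function name, the per-pixel depth maps, and the single global `alpha` transparency are assumptions of the sketch.

```python
import numpy as np

def merge_elemental_images(real_ei, virtual_ei, real_depth, virtual_depth, alpha):
    """Toy merging sketch (hypothetical, not the paper's algorithm).

    Wherever the virtual object lies in front of the real scene, composite
    it over the real pixels with transparency alpha; elsewhere keep the
    real pixels untouched.
    """
    front = virtual_depth < real_depth  # virtual object occludes the real scene
    return np.where(front, alpha * virtual_ei + (1 - alpha) * real_ei, real_ei)
```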

  5. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  6. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, the 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the same arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
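    The "refocusing" sectioning described above is commonly implemented as shift-and-add over the sub-aperture images of the 4D light field: each sub-aperture image is translated in proportion to its angular coordinate, then all are averaged. A minimal sketch, assuming a `L[u, v, s, t]` array layout, integer-pixel shifts via `np.roll`, and a scalar refocus parameter `alpha`; this is an illustration of the general technique, not this paper's real-domain method.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, s, t].

    Each sub-aperture image (fixed u, v) is shifted in proportion to its
    angular offset from the array center times alpha, then all shifted
    images are averaged. Objects whose disparity matches -alpha pixels
    per angular step come into focus.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * alpha))  # integer-pixel shift (simplification)
            dv = int(round((v - V // 2) * alpha))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```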

  7. Integral imaging based 3D display of holographic data.

    PubMed

    Yöntem, Ali Özgür; Onural, Levent

    2012-10-22

    We propose a method, and present applications of it, that converts a diffraction pattern into an elemental image set for display on an integral imaging based display setup. We generate elemental images based on diffraction calculations as an alternative to the commonly used ray tracing methods, which do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object/scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We show three examples, one of which uses digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup, in which we used a digital lenslet array, and, for comparison, numerical reconstructions, again using diffraction calculations. The digital and optical reconstruction results are in good agreement.
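    Numerical diffraction calculations of the kind this abstract relies on are often carried out with the angular-spectrum method: Fourier-transform the field, multiply by a propagation transfer function, and transform back. A minimal sketch, not the authors' code; the sampling parameters and the hard suppression of evanescent components are assumptions of this illustration.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (angular-spectrum method).

    field: 2D complex array sampled on a grid of pitch dx.
    Evanescent components (spatial frequencies beyond 1/wavelength) are
    simply zeroed out in this sketch.
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)           # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)           # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)    # transfer function, evanescent waves cut
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Because the transfer function is a pure phase for propagating components, propagating forward and then backward recovers the original field, a convenient sanity check.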

  8. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  9. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.
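    The reconstruction pipeline sketched above (contours from successive sections, then a surface implied by those contours) can be illustrated in its simplest case: stitching two stacked closed contours with equal point counts into a band of triangles. This is a toy sketch under that equal-count, known-correspondence assumption; real contour-based reconstruction must additionally handle correspondence and branching between sections.

```python
def stitch_contours(n):
    """Triangulate the band between two stacked closed contours of n points each.

    Vertices 0..n-1 belong to the lower contour, n..2n-1 to the upper one,
    with vertex i assumed to correspond to vertex n+i. Each quad formed by
    neighboring points on the two contours is split into two triangles.
    """
    tris = []
    for i in range(n):
        j = (i + 1) % n                # wrap around the closed contour
        tris.append((i, j, n + i))     # lower triangle of the quad
        tris.append((j, n + j, n + i)) # upper triangle of the quad
    return tris
```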

  10. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. A vivid representation of captured 3D objects on a glasses-free 3D display screen could give viewers the realistic experience of watching a real-world scene. Although the technologies for 3D acquisition and 3D display have advanced rapidly in recent years, little effort has been devoted to studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system with an autostereoscopic multiview 3D display for real-time light field capture and display. The paper covers both the architectural design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated system was built to demonstrate its real-time 3D acquisition and 3D display capability.

  11. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) for a 3D holographic display. However, with the 3D GS method, reconstructions of binary input images suffer from serious distortion. We have eliminated this distortion and improved the image quality of the reconstructions by up to 486%, using a symmetrical 3D GS algorithm developed from the traditional 3D GS algorithm. In addition, the hologram computation has been accelerated by a factor of 9.28, which is significant for real-time holographic displays.
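    The Gerchberg-Saxton iteration underlying these algorithms alternates between the hologram plane and the image plane, enforcing the known amplitude in each while keeping the computed phase. A minimal 2D phase-only sketch of the classic scheme (the 3D and symmetrical variants in the abstract extend it; this is not the authors' implementation):

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Classic 2D Gerchberg-Saxton phase retrieval (hologram <-> image plane).

    Repeatedly enforces the target amplitude in the image (Fourier) plane
    and unit amplitude in the hologram plane, keeping only the phase each
    time. Returns the phase-only hologram.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    field = np.exp(1j * phase)                         # unit-amplitude hologram
    for _ in range(iterations):
        img = np.fft.fft2(field)                       # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))  # enforce target amplitude
        field = np.fft.ifft2(img)                      # back to hologram plane
        field = np.exp(1j * np.angle(field))           # enforce unit amplitude
    return np.angle(field)
```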

  12. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system, which creates a true 3-D virtual picture of the object. The other uses a standard high-resolution monitor to simultaneously show the three orthogonal sections that intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
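    The second visualization method, three orthogonal sections through a user-selected point, amounts to simple index slicing of the image volume. A sketch assuming a `[slice, row, col]` array layout (the function name and layout are illustrative, not from the paper):

```python
import numpy as np

def orthogonal_sections(volume, point):
    """Extract the three orthogonal sections through a selected point.

    volume: 3D array indexed [slice, row, col]; point: (z, y, x) indices.
    Returns the axial, coronal, and sagittal planes that intersect there.
    """
    z, y, x = point
    axial    = volume[z, :, :]   # fixed slice index
    coronal  = volume[:, y, :]   # fixed row index
    sagittal = volume[:, :, x]   # fixed column index
    return axial, coronal, sagittal
```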

  13. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  14. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge parts of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. Conventional stereo-matching techniques can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. The result is a system in which many users can simultaneously view natural 3D objects at a consistent position and posture. A simple optometric experiment using a refractometer suggests that the proposed method can present 3-D images without contradiction between binocular convergence and focal accommodation.

  15. Evaluation of stereoscopic 3D displays for image analysis tasks

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance an image analyst's ability to detect or identify certain objects of interest, resulting in higher performance. The change of image acquisition from analog to digital techniques entailed a change of stereoscopic visualization techniques. Recently, various digital stereoscopic display techniques at affordable prices have appeared on the market. At Fraunhofer IITB, usability tests were carried out to find out (1) with which of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve high acceptance. First, image analysts were interviewed to define typical image analysis tasks that were expected to be solved with higher performance using stereoscopic display techniques. Next, observer experiments were carried out in which image analysts had to solve the defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the display techniques used), two of the examined stereoscopic display technologies were found to be very good and appropriate.

  16. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing at will the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method improves the quality of 3D display images and videos.
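    The camera-travelling sequence described above can be illustrated by sub-view extraction: taking the pixel at one fixed position from every elemental image yields a single orthographic view, and sweeping that position across the elemental images moves the virtual camera. A toy sketch assuming a 4D `[rows, cols, h, w]` elemental-image array; the paper's plenoptic transformation is more general.

```python
import numpy as np

def subaperture_view(ei_grid, i, j):
    """Extract one orthographic sub-view from an elemental-image grid.

    ei_grid: 4D array [rows, cols, h, w] holding a grid of elemental images.
    Taking pixel (i, j) from every elemental image forms one view; sweeping
    (i, j) over the elemental-image extent simulates a travelling camera.
    """
    return ei_grid[:, :, i, j]
```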

  17. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  18. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle is achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous images of a sample placed at its center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software. Finally, several samples were imaged to demonstrate the capability of our system.

  19. Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher.

    PubMed

    Wang, Qiong-Hua; Ji, Chao-Chao; Li, Lei; Deng, Huan

    2016-01-11

    In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented by the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view II 3D display is developed, and it shows good performance.

  20. Air-touch interaction system for integral imaging 3D display

    NASA Astrophysics Data System (ADS)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for a tabletop-type integral imaging 3D display. The system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. We use multi-layer B-spline surface approximation on the input hand image to detect fingertips and gestures at heights of less than 10 cm above the screen. The proposed system can serve as an effective human-computer interaction method for tabletop-type 3D displays.

  1. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the anteroposterior position and symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. The system integrates an operating portion, in which the operator manipulates an entity (physical) tooth model to determine the optimum occlusal position, with a 3D image display portion in which the corresponding 3D-CT skeletal images are displayed simultaneously in real time. This makes it possible to determine a mandibular position and posture that improve both skeletal morphology and the occlusal condition. The realistic operation of the entity model combined with the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  2. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and noninvasiveness, ultrasound has become an essential tool in obstetrics for the diagnosis of fetal abnormality during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge compared with MRI and CT images. In addition to speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to local image features of the ultrasound data. In addition, to accelerate rendering, a thin shell defined from the detected contours separates the observed organ from unrelated structures. In this way, the system supports quick 3D display of ultrasound, making efficient visualization of 3D fetal ultrasound possible.

  3. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, the pixel size and the slit aperture of the parallax barrier, to improve the uniformity of image brightness in the viewing zone. The eye tracking, which monitors the positions of the viewer's eyes, enables the pixel-data control software to turn on only the pixels for the view images near the viewer's eyes (all other pixels being turned off), thus reducing point crosstalk. The eye-tracking software also delivers the correct images to the respective eyes, producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display without eye tracking. Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk at the viewing zone, to a level comparable to that of a commercialized eyewear-assisted 3D display system. The presented multiview autostereoscopic 3D display can thus greatly resolve the point crosstalk problem, one of the critical factors that have made it difficult for previous multiview autostereoscopic displays to replace their eyewear-assisted counterparts.
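    The pixel-size/slit-aperture ratio controlled above sits on top of standard first-order parallax-barrier geometry, which fixes the panel-to-barrier gap and the slit pitch from the design viewing distance and eye separation. A sketch of those textbook relations (illustrative, not taken from this paper):

```python
def barrier_design(n_views, pixel_pitch, view_distance, eye_separation):
    """First-order parallax-barrier geometry for an n-view display.

    gap: barrier-to-panel distance chosen so adjacent pixel columns map to
    viewing windows one eye-separation apart at the design distance.
    slit_pitch: slightly less than n_views * pixel_pitch so that all slits
    converge on the same viewing zone.
    """
    gap = pixel_pitch * view_distance / eye_separation
    slit_pitch = n_views * pixel_pitch * view_distance / (view_distance + gap)
    return gap, slit_pitch
```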

  4. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capability to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise significant potential for greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than was ever possible before; on the other hand, the momentous advances in computer and associated electronic imaging technology that have made these 3-D imaging capabilities possible have not been concomitantly exploited. We have therefore developed a powerful new microcomputer-based system that permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation from which all the information in a large 3-D image database is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of the system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to its implementation.

  5. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier-optics viewing-angle system and an imaging video-luminance meter. One display has a fixed emissive configuration; the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, viewing-angle measurements are performed at three positions (center, right, and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance, and crosstalk homogeneity over the entire surface of the display is measured. We show that crosstalk is generally not optimized over the whole surface of the display. Simulating the display's appearance from the viewing-angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections such as scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.

  6. 3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC).

    PubMed

    Navarro, H; Martínez-Cuenca, R; Saavedra, G; Martínez-Corral, M; Javidi, B

    2010-12-06

    Previously, we reported a digital technique for the formation of real, undistorted, orthoscopic integral images by direct pickup. However, that technique was constrained to the case of symmetric image capture and display systems. Here, we report a more general algorithm that allows the pseudoscopic-to-orthoscopic transformation with full control over the display parameters, so that one can generate a set of synthetic elemental images suited to the characteristics of the integral-imaging monitor, with control over the depth and size of the reconstructed 3D scene.
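    For context, the simplest pseudoscopic-to-orthoscopic conversion, which SPOC generalizes, is a 180° rotation of every elemental image in place. A sketch assuming equally sized elemental images on a regular grid (the grid layout and function name are illustrative, not the SPOC algorithm itself):

```python
import numpy as np

def rotate_elemental_images(ei_array, n_rows, n_cols):
    """Rotate every elemental image by 180 degrees in place within the grid.

    ei_array: 2D array holding an n_rows x n_cols grid of equally sized
    elemental images. Each block is flipped in both axes independently,
    the classic baseline pseudoscopic-to-orthoscopic step.
    """
    h = ei_array.shape[0] // n_rows
    w = ei_array.shape[1] // n_cols
    out = np.empty_like(ei_array)
    for r in range(n_rows):
        for c in range(n_cols):
            block = ei_array[r * h:(r + 1) * h, c * w:(c + 1) * w]
            out[r * h:(r + 1) * h, c * w:(c + 1) * w] = block[::-1, ::-1]
    return out
```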

  7. A 3D integral imaging optical see-through head-mounted display.

    PubMed

    Hua, Hong; Javidi, Bahram

    2014-06-02

    An optical see-through head-mounted display (OST-HMD), which enables optical superposition of digital information onto the direct view of the physical world and maintains see-through vision to the real world, is a vital component in an augmented reality (AR) system. A key limitation of the state-of-the-art OST-HMD technology is the well-known accommodation-convergence mismatch problem caused by the fact that the image source in most of the existing AR displays is a 2D flat surface located at a fixed distance from the eye. In this paper, we present an innovative approach to OST-HMD designs by combining the recent advancement of freeform optical technology and microscopic integral imaging (micro-InI) method. A micro-InI unit creates a 3D image source for HMD viewing optics, instead of a typical 2D display surface, by reconstructing a miniature 3D scene from a large number of perspective images of the scene. By taking advantage of the emerging freeform optical technology, our approach will result in compact, lightweight, goggle-style AR display that is potentially less vulnerable to the accommodation-convergence discrepancy problem and visual fatigue. A proof-of-concept prototype system is demonstrated, which offers a goggle-like compact form factor, non-obstructive see-through field of view, and true 3D virtual display.

  8. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented-image-based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. To evaluate the uniformity of image brightness, we applied optical ray-tracing simulations that take the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones; however, this was contradicted by the experimental results. We offer a quantitative treatment of the illuminance uniformity of the view images to estimate the misalignment of the PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of the PB orientation, due to practical limits on adjustment accuracy, can induce substantial non-uniformity in the brightness of the view images. We find that image brightness non-uniformity depends critically on the misalignment of the PB angular orientation, even for misalignments as small as 0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10^-2 to 10^-3 degrees can greatly improve the brightness uniformity.

  9. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    DTIC report cover-page fragment (OCR residue). Recoverable content: authors P. Soltan, J. Trias, W. Robinson, W. Dahlke; the report describes laser-generated 3D volumetric images displayed on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye. A cited reference survives in the fragment: "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr., Texas...

  10. Membrane-mirror-based display for viewing 2D and 3D images

    NASA Astrophysics Data System (ADS)

    McKay, Stuart; Mason, Steven; Mair, Leslie S.; Waddell, Peter; Fraser, Simon M.

    1999-05-01

    Stretchable Membrane Mirrors (SMMs) have been developed at the University of Strathclyde as a cheap, lightweight, variable-focal-length alternative to conventional fixed-curvature glass optics. An SMM uses a thin sheet of aluminized polyester film stretched over a specially shaped frame, forming an airtight cavity behind the membrane. Removing air from that cavity causes the resulting pressure difference to force the membrane into a concave shape; controlling the pressure difference across the membrane thus controls the curvature, or f-number, of the mirror. Mirrors from 0.15 m to 1.2 m in diameter have been constructed at the University of Strathclyde. The use of lenses and mirrors to project real images in space is perhaps one of the simplest forms of 3D display. With conventional optics, however, there are severe financial restrictions on the size of the image-forming element, hence the appeal of an SMM. The mirrors have been used both as image-forming elements and as directional screens in volumetric, stereoscopic, and large-format simulator displays. The use of these specular reflecting surfaces was found to greatly enhance the perceived image quality of the resulting magnified display.

  11. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    An industrial welded metal bellows is a flexible pipeline component. The most common form of bellows is made from pairs of washer-shaped discs of thin sheet metal stamped from strip stock. The arc welding operation can cause dangerous accidents and unpleasant fumes. Furthermore, during welding, workers have to observe the object directly through a microscope while adjusting the vertical position of the welding-rod tip and the horizontal position of the bellows fixed on the jig. Welding while looking through a microscope is tiring. To improve both productivity and a working environment in which workers sit in an uncomfortable position, we introduced a 3D display and image-processing system. The main purpose of the system is not only to maximize industrial productivity with accuracy but also to meet safety standards through full automation of the work by remote control.

  12. Transpost: a novel approach to the display and transmission of 360 degrees-viewable 3D solid images.

    PubMed

    Otsuka, Rieko; Hoshino, Takeshi; Horry, Youichi

    2006-01-01

    Three-dimensional displays are drawing attention as next-generation devices. Techniques that can reproduce three-dimensional images prepared in advance have already been developed; however, technology for the real-time transmission of 3D moving pictures has yet to be achieved. In this paper, we present a novel method for 360-degree-viewable 3D displays and the Transpost system in which we implement it. The basic concept of our system is to project multiple images of the object, taken from different angles, onto a spinning screen. The key to the method is projection of the images onto a directionally reflective screen with a limited viewing angle. The images are reconstructed to give the viewer a three-dimensional image of the object displayed on the screen. The display system can present computer-graphics pictures, live pictures, and movies. Furthermore, the reverse of the optical process used in the display system can be used to record images of the subject from multiple directions; the images can then be transmitted to the display in real time. We have developed prototypes of a 3D display and a 3D human-image transmission system. Our preliminary working prototypes demonstrate new possibilities for expression and forms of communication.

  13. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on an optically addressed bi-stable display that needs no power to hold an image after it has been uploaded. Recently, demand for 3D image display has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve highly complex image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate placed on top of the ORWLCD modifies the light emerging from the different domains of the image in different ways; thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing. With easy image refreshing and good image quality, such displays can be applied in many areas, e.g., 3D bi-stable displays and security elements.

  14. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are acquired with larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation (WS) to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. The users were satisfied with the rendering speed and the quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance, and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  15. 2D/3D switchable displays

    NASA Astrophysics Data System (ADS)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for wide market acceptance of 3D displays is the ability to switch between 3D and full-resolution 2D. In this paper we present a robust and cost-effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel equipped with switchable LC-filled lenticular lenses. We discuss 3D image quality, with a focus on display uniformity, and show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept for further improving uniformity in cases where slanting the lenticulars and optimizing the lens design are not sufficient. We discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics, and the residual lens effect.

  16. Spatioangular Prefiltering for Multiview 3D Displays.

    PubMed

    Ramachandra, Vikas; Hirakawa, Keigo; Zwicker, Matthias; Nguyen, Truong

    2011-05-01

    In this paper, we analyze the reproduction of light fields on multiview 3D displays. A three-way interaction between the input light field signal (which is often aliased), the joint spatioangular sampling grids of multiview 3D displays, and the inter-view light leakage in modern multiview 3D displays is characterized in the joint spatioangular frequency domain. Reconstruction of light fields by all physical 3D displays is prone to light leakage, which means that the reconstruction low-pass filter implemented by the display is too broad in the angular domain. As a result, 3D displays excessively attenuate angular frequencies; our analysis shows that this reduces the sharpness of the images shown on 3D displays. In this paper, stereoscopic image recovery is recast as a problem of joint spatioangular signal reconstruction. The combination of the 3D display point spread function and the human visual system provides the narrow-band low-pass filter that removes spectral replicas in the light field reconstructed on the multiview display. The nonideality of this filter is corrected with the proposed prefiltering. The proposed light field reconstruction method performs light field antialiasing as well as angular sharpening to compensate for the nonideal response of the 3D display. The union-of-cosets approach, used earlier by others, is employed here to model the nonrectangular spatioangular sampling grids on a multiview display in a generic fashion. We confirm the effectiveness of our approach in simulation and in physical hardware, and demonstrate improvement over existing techniques.
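    As a toy illustration of angular prefiltering, the sketch below low-pass filters a light field along its view (angular) axis with a fixed binomial kernel. The actual prefilter in the paper is derived from the display's spatioangular passband and the union-of-cosets sampling model, which this sketch does not reproduce.

```python
import numpy as np

def angular_prefilter(light_field, kernel=(0.25, 0.5, 0.25)):
    """Low-pass filter a light field of shape (n_views, H, W) along the
    angular (view) axis.  The fixed binomial kernel is a stand-in for a
    display-specific passband; circular padding along the view axis is an
    assumption made only to keep the sketch short."""
    k = list(kernel)
    padded = np.concatenate([light_field[-1:], light_field, light_field[:1]], axis=0)
    out = np.zeros(light_field.shape, dtype=float)
    for i in range(light_field.shape[0]):
        out[i] = k[0] * padded[i] + k[1] * padded[i + 1] + k[2] * padded[i + 2]
    return out

# A light field flickering at the angular Nyquist rate (views alternating
# between 1 and 0) is flattened to its mean by the prefilter.
lf = np.zeros((8, 2, 2))
lf[::2] = 1.0
filtered = angular_prefilter(lf)  # 0.5 everywhere
```

    Removing such above-passband angular content before display is what prevents it from reappearing as inter-view aliasing after the display's own (leaky) reconstruction filter.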

  17. Influence of limited random-phase of objects on the image quality of 3D holographic display

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, along with their influence on the optical quality of the reconstructed images, and appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images show that objects with a limited phase range can suppress the speckle noise in the reconstructed images effectively. Because of its effectiveness and simplicity, the method is expected to yield high-quality reconstructed images in future 2D and 3D displays.

  18. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  19. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  20. Measurement of Contrast Ratios for 3D Display

    DTIC Science & Technology

    2000-07-01

    DTIC compilation-notice fragment (OCR residue). Keywords: stereoscopic, autostereoscopic, 3D, display. Abstract (fragment): 3D image display devices have wide applications in medical and entertainment areas. Binocular (stereoscopic) ... and system crosstalk. In many 3D display systems viewer crosstalk is an important issue for good performance, especially in autostereoscopic displays. (Defense Technical Information Center Compilation Part Notice ADPO 11343.)

  1. 3D touchable holographic light-field display.

    PubMed

    Yamaguchi, Masahiro; Higashida, Ryo

    2016-01-20

    We propose a new type of 3D user interface: interaction with a light field reproduced by a 3D display. The 3D display used in this work reproduces a 3D light field, and a real image can be reproduced in midair between the display and the user. When using a finger to touch the real image, the light field from the display will scatter. Then, the 3D touch sensing is realized by detecting the scattered light by a color camera. In the experiment, the light-field display is constructed with a holographic screen and a projector; thus, a preliminary implementation of a 3D touch is demonstrated.

  2. Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens.

    PubMed

    Shen, Xin; Wang, Yu-Jen; Chen, Hung-Shan; Xiao, Xiao; Lin, Yi-Hsin; Javidi, Bahram

    2015-02-15

    We present a three dimensional (3D) micro integral imaging display system with extended depth of focus by using a polarized bifocal liquid crystal lens. This lens and other optical components are combined as the relay optical element. The focal length of the relay optical element can be controlled to project an elemental image array in multiple positions with various lenslet image planes, by applying different voltages to the liquid crystal lens. The depth of focus of the proposed system can therefore be extended. The feasibility of our proposed system is experimentally demonstrated. In our experiments, the depth of focus of the display system is extended from 3.82 to 109.43 mm.
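    The mechanism described, one elemental-image plane per lens state, follows directly from the thin-lens equation: switching the relay focal length moves the lenslet image plane. The focal lengths and object distance below are hypothetical, chosen only to illustrate the effect; the paper's actual optical prescription is not reproduced here.

```python
def image_distance(f_mm, object_distance_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i (all quantities in millimeters)."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

# Hypothetical relay focal lengths for the two LC-lens voltage states.
d_o = 100.0
plane_a = image_distance(40.0, d_o)  # ~66.7 mm
plane_b = image_distance(45.0, d_o)  # ~81.8 mm
```

    Time-multiplexing the elemental images between the two planes is what lets the system cover the union of the two depth-of-focus ranges rather than a single one.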

  3. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    NASA Astrophysics Data System (ADS)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. In pathless terrain, however, driving becomes challenging without stereoscopic perception. Stereoscopic imaging is a long-established technique with well-understood physical and physiological parameters. Recently, a commercial hype has been observed, especially in display techniques, and the commercial market is already flooded with systems based on goggle-aided 3D-viewing techniques. Their use in military applications is limited, however, since goggles are not accepted by military users for several reasons. The proposed uncooled thermal imaging stereoscopic camera, with a geometrical resolution of 640x480 pixels, fits perfectly to the autostereoscopic display with 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and the right eye of the observer. This allows a stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.
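    Subpixel-to-eye assignment behind a slanted lenticular is commonly computed with a van Berkel-style interleaving formula; the sketch below is a generic version of that mapping, with pitch and slant values chosen for illustration rather than taken from this system.

```python
def view_index(x, y, n_views, pitch_px, slant):
    """Map subpixel column x on panel row y to a view number behind a
    slanted lenticular.  pitch_px is the lens pitch in subpixels and
    slant the horizontal lens offset per row (illustrative values)."""
    phase = (x + y * slant) % pitch_px
    return int(phase / pitch_px * n_views)

# Two-view (stereo) case: with an eye tracker, the left and right thermal
# images are written only to the subpixels whose view index matches the
# tracked position of that eye.
row0 = [view_index(x, 0, n_views=2, pitch_px=2.0, slant=0.5) for x in range(4)]
# row0 == [0, 1, 0, 1]
```

    In an eye-tracked system the phase term is additionally shifted as the observer moves, so that each view's subpixel set follows the eyes.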

  4. Design of monocular multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2001-06-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have developed a 3D HMD system using a monocular stereoscopic display. This paper shows that a 3D vision system using the monocular stereoscopic display and a capturing camera builds a 3D virtual space for telemanipulation from a captured real 3D image. We propose the monocular stereoscopic 3D display and capturing camera for a telemanipulation system, and we describe the results of depth estimation using multi-focus retinal images.

  5. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user-interaction technologies will enable more effective visualization of 3D information, yielding a realistic representation of 3D objects and simplifying our understanding of their complexity and of the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system capable of real-time user interaction. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested, based upon multiple projectors and a horizontally optically anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user-interaction system.

  6. Fabrication of Large-Scale Microlens Arrays Based on Screen Printing for Integral Imaging 3D Display.

    PubMed

    Zhou, Xiongtu; Peng, Yuyan; Peng, Rong; Zeng, Xiangyao; Zhang, Yong-Ai; Guo, Tailiang

    2016-09-14

    The low-cost, large-scale fabrication of microlens arrays (MLAs) with precise alignment, great uniformity of focusing, and good converging performance is of great importance for integral imaging 3D display. In this work, a simple and effective method for fabricating large-scale polymer microlens arrays using screen printing is presented. The results show that the MLAs possess high-quality surface morphology and excellent optical performance. Furthermore, the microlenses' shape and size, i.e., the diameter, the height, and the distance between two adjacent microlenses, can be easily controlled by modifying the reflowing time and the size of the open apertures of the screen. MLAs in which neighboring microlenses are almost tangent can be achieved with a suitable aperture size and reflowing time; such arrays remarkably reduce the color moiré patterns caused by stray light between the blank areas of the MLAs in an integral imaging 3D display system, exhibiting much better reconstruction performance.
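    Since the reflow step sets each lens's diameter and height, the resulting focal length follows from spherical-cap geometry; the relation below (with a typical polymer refractive index assumed, not taken from the paper) shows how the printed geometry controls optical power.

```python
def lenslet_focal_length(diameter_um, height_um, n=1.56):
    """Focal length of a reflowed polymer microlens modeled as a spherical
    cap: R = ((D/2)^2 + h^2) / (2h), f = R / (n - 1).  The refractive
    index 1.56 is a typical polymer value, assumed for illustration."""
    r_curv = ((diameter_um / 2.0) ** 2 + height_um ** 2) / (2.0 * height_um)
    return r_curv / (n - 1.0)

# A 100 um wide, 10 um tall cap gives a radius of curvature of 130 um
# and a focal length of roughly 232 um.
f = lenslet_focal_length(100.0, 10.0)
```

    A longer reflow produces a taller cap for the same aperture, shortening the focal length, which is why reflow time is an effective tuning knob.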

  7. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With this technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view, primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view, and these objects may have very high contrast compared to the background; that is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique.
Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
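    The SVD analysis described can be sketched on a toy imaging operator: a random 15 x 64 matrix stands in for the paper's 15-element detection geometry, and counting singular values above a noise floor bounds how complex an object the system can recover. The matrix and threshold here are illustrative, not the actual system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imaging operator: 15 detector measurements of a 64-voxel object,
# standing in for the paper's sparse 15-element detection scheme.
A = rng.standard_normal((15, 64))

# Singular values, returned in descending order.
s = np.linalg.svd(A, compute_uv=False)

# Count singular values above an (illustrative) relative noise floor.
noise_floor = 0.05 * s[0]
n_measurable = int(np.sum(s > noise_floor))

# With only 15 measurement rows there are at most 15 nonzero singular
# values, which bounds the complexity of recoverable objects and is why
# sparsity-promoting (l1) reconstruction helps for such systems.
```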

  8. A New Display Format Relating Azimuth-Scanning Radar Data and All-Sky Images in 3-D

    NASA Technical Reports Server (NTRS)

    Swartz, Wesley E.; Seker, Ilgin; Mathews, John D.; Aponte, Nestor

    2010-01-01

    Here we correlate features in a sequence of all-sky images of 630 nm airglow with the three-dimensional (3-D) structure of electron densities in the F region above Arecibo. Pairs of 180° azimuth scans (using the Gregorian and line feeds) of the two-beam incoherent scatter radar (ISR) have been plotted in cone pictorials of the line-of-sight electron densities. The plots include projections of the 630 nm airglow onto the ground using the same spatial scaling as for the ISR data. Selected sequential images from the night of 16-17 June 2004 correlate ionospheric plasma features with scales comparable to the ISR density-cone diameter. The entire set of over 100 images spanning about eight hours is available as a movie. The correlation between the airglow and the electron densities is not unexpected, but the new display format shows the 3-D structures better than separate 2-D plots in latitude and longitude for the airglow and in height and time for the electron densities. Furthermore, the animations help separate the bands of airglow from obscuring clouds and the star field.

  9. Interactive 3D display simulator for autostereoscopic smart pad

    NASA Astrophysics Data System (ADS)

    Choe, Yeong-Seon; Lee, Ho-Dong; Park, Min-Chul; Son, Jung-Young; Park, Gwi-Tae

    2012-06-01

    There is growing interest in displaying 3D images on smart pads for entertainment and information services. Designing and realizing various types of 3D display on a smart pad is not easy given cost and time constraints; software simulation can be an alternative that saves cost and shortens development. In this paper, we propose a 3D display simulator for an autostereoscopic smart pad. It simulates the light intensity of each view and the crosstalk for smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic display by changing the parameters required for panel design. Crosstalk, the leakage of one eye's image into the image of the other eye, and light intensity, used for computing the visual comfort zone, are important factors in designing an autostereoscopic display for a smart pad, and interaction enables intuitive design. This paper describes an interactive 3D display simulator for an autostereoscopic smart pad.
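    A crosstalk simulation of the kind described often uses a simple linear mixing model: each eye sees its intended view plus a leaked fraction of the other view. The sketch below shows that common model with made-up luminance values; the simulator's actual optical computation is more detailed.

```python
def perceived(intended, other, crosstalk):
    """Linear crosstalk model frequently used in autostereoscopic display
    simulation: a fraction `crosstalk` of the opposite view leaks into
    this eye's image.  Inputs are luminances; crosstalk is in [0, 1]."""
    return (1.0 - crosstalk) * intended + crosstalk * other

# Hypothetical luminances: 5% leakage pulls the left eye's bright view
# toward the right eye's dark view.
left_eye = perceived(intended=80.0, other=20.0, crosstalk=0.05)  # 77.0
```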

  10. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  11. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  12. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of their complexity and of the spatial relationships among them.

  13. Spectroradiometric characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements have been made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal direction of the display plane. In the study, two models of autostereoscopic 3D display of different sizes and resolutions were used, with measurements made with a spectroradiometer (Photo Research PR-670 SpectraScan). From these measurements, goniometric results were recorded for luminance contrast, and the fundamental hypotheses for the characterization of the displays were evaluated: independence of the RGB channels and their constancy. The results show that the display with the lower angular variability in the contrast-ratio value and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with measurement angle. For both displays, on the parameters evaluated, the 2D mode consistently showed lower angular variability than the 3D mode.
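    The channel-independence hypothesis tested above implies additivity: the tristimulus values measured for white should equal the sum of the values measured for the R, G, and B channels driven alone. A minimal additivity check, with hypothetical tristimulus numbers, might look like:

```python
def additivity_deviation(white_xyz, r_xyz, g_xyz, b_xyz):
    """Maximum relative deviation between the measured white point and the
    sum of the individually measured R, G, B channels (a basic channel-
    additivity test).  Inputs are (X, Y, Z) tristimulus triples."""
    dev = []
    for i in range(3):
        channel_sum = r_xyz[i] + g_xyz[i] + b_xyz[i]
        dev.append(abs(white_xyz[i] - channel_sum) / white_xyz[i])
    return max(dev)

# Hypothetical measurements (not from the paper): white falls slightly
# short of the channel sum, giving a ~2% worst-case deviation.
d = additivity_deviation((95.0, 100.0, 105.0),
                         (40.0, 21.0, 2.0),
                         (35.0, 70.0, 10.0),
                         (18.0, 10.0, 95.0))
```

    Repeating such a check at each measurement angle is one way to quantify how additivity degrades off-axis, as the abstract reports.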

  14. Photorefractive Polymers for Updateable 3D Displays

    DTIC Science & Technology

    2010-02-24

    DTIC report-form fragment (OCR residue). Recoverable content: Final Performance Report, dates covered 01-01-2007 to 11-30-2009, title "Photorefractive Polymers for Updateable 3D ..." Abstract (fragment): during the tenure of this project a large-area updateable 3D color display was developed for the first time using a new co-polymer ... photorefractive polymers have been demonstrated. Moreover, a 6 inch × 6 inch sample was fabricated, demonstrating the feasibility of making large-area 3D

  15. US-CT 3D dual imaging by mutual display of the same sections for depicting minor changes in hepatocellular carcinoma.

    PubMed

    Fukuda, Hiroyuki; Ito, Ryu; Ohto, Masao; Sakamoto, Akio; Otsuka, Masayuki; Togawa, Akira; Miyazaki, Masaru; Yamagata, Hitoshi

    2012-09-01

    The purpose of this study was to evaluate the usefulness of ultrasound-computed tomography (US-CT) 3D dual imaging for the detection of small extranodular growths of hepatocellular carcinoma (HCC). The clinical and pathological profiles of 10 patients with single-nodular-type HCC with extranodular growth who underwent a hepatectomy were evaluated using two-dimensional (2D) ultrasonography (US), three-dimensional (3D) US, 3D computed tomography (CT), and 3D US-CT dual images. Raw 3D data were converted to DICOM (Digital Imaging and Communications in Medicine) data using Echo to CT (Toshiba Medical Systems Corp., Tokyo, Japan), and the 3D DICOM data were transferred directly to the image analysis system (ZioM900, ZIOSOFT Inc., Tokyo, Japan). By inputting the angle values (x, y, z) of the 3D CT volume data into the ZioM900, multiplanar reconstruction (MPR) images of the 3D CT data were displayed so as to resemble the conventional US images. Eleven extranodular growths were detected pathologically in the 10 cases. 2D US depicted only 2 of the 11 extranodular growths, and 3D CT depicted 4 of the 11. On the other hand, 3D US depicted 10 of the 11 extranodular growths, and the 3D US-CT dual images, which enable dual analysis of the CT and US planes, revealed all 11. In conclusion, US-CT 3D dual imaging may be useful for the detection of small extranodular growths.

  16. Research of 3D display using anamorphic optics

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kenji; Honda, Toshio

    1997-05-01

    This paper describes an auto-stereoscopic display that can reconstruct a more realistic and viewer-friendly 3-D image by increasing the number of parallaxes and providing horizontal motion parallax. It is difficult to increase the number of parallaxes to give motion parallax to the 3-D image without reducing the resolution, because the resolution of the display device is insufficient. The magnification and the image-formation position can be selected independently in the horizontal and vertical directions by placing anamorphic optics between the display device and the 3-D image. Anamorphic optics have different magnifications in the horizontal and vertical directions, and consist of a combination of cylindrical lenses with different focal lengths. Using these optics, even with a dynamic display such as a liquid crystal display (LCD), it is possible to display a realistic 3-D image with motion parallax. Motion parallax is obtained by making the width of a single parallax at the viewing position about the same size as the pupil diameter of the viewer. In addition, because the focal depth of the 3-D image is large in this method, the conflict between accommodation and convergence is small, and a natural 3-D image can be displayed.

  17. Rear-cross-lenticular 3D display without eyeglasses

    NASA Astrophysics Data System (ADS)

    Morishima, Hideki; Nose, Hiroyasu; Taniguchi, Naosato; Inoguchi, Kazutaka; Matsumura, Susumu

    1998-04-01

We have developed a prototype 3D display system without any eyeglasses, which we call the `Rear Cross Lenticular 3D Display' (RCL3D); it is very compact and produces high-quality 3D images. The RCL3D consists of an LCD panel, two lenticular lens sheets that run perpendicular to each other, a Checkered Pattern Mask and a backlight panel. On the LCD panel, a composite image consisting of alternately arranged horizontally striped images for the right eye and left eye is displayed. This composite image format is compatible with field-sequential stereoscopic image data. The light from the backlight panel passes through the apertures of the Checkered Pattern Mask, illuminates the horizontal lines of the right-eye and left-eye images on the LCD, and is directed to the right-eye and left-eye positions separately by the two lenticular lens sheets. With this principle, the RCL3D shows a 3D image to an observer without any eyeglasses. We simulated the viewing zone of the RCL3D using random ray tracing and found that the illuminated areas for the right eye and left eye are clearly separated as a series of alternating vertical stripes. We will present the prototype of the RCL3D (14.5-inch, XGA) and the simulation results.

  18. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization, which is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true (x, y, z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° range of viewpoints without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x, y, z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital micromirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen found in previous volumetric display designs.

  19. Integral imaging-based large-scale full-color 3-D display of holographic data by using a commercial LCD panel.

    PubMed

    Dong, Xiao-Bin; Ai, Ling-Yu; Kim, Eun-Soo

    2016-02-22

We propose a new type of integral imaging-based large-scale full-color three-dimensional (3-D) display of holographic data based on direct ray-optical conversion of holographic data into elemental images (EIs). In the proposed system, a 3-D scene is modeled as a collection of depth-sliced object images (DOIs), and three-color hologram patterns for the scene are generated by interfering each color DOI with a reference beam and summing them all based on Fresnel convolution integrals. From these hologram patterns, full-color DOIs are reconstructed and converted into EIs using a ray-mapping-based direct pickup process. These EIs are then optically reconstructed as a full-color 3-D scene with perspective on a depth-priority integral imaging (DPII)-based 3-D display system employing a large-scale LCD panel. Experiments with a test video confirm the feasibility of the proposed system for practical application in large-scale holographic 3-D displays.

  20. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

Wang, Zheng

    2012-07-01

A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates (X, Y, Z) for every pixel. A true 3D image, and especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will not only be able to read a building's location (X, Y), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on how geospatial information is represented, how true 3D ground modeling is performed, and how real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper explains what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make in the geospatial information fields. Finally, the paper presents a list of the benefits of having and using true 3D images and describes the application of true 3D images in a couple of 3D city modeling projects.
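The idea that every pixel carries both radiometry and ground coordinates can be made concrete with a minimal container class (the names, layout, and sample coordinates below are illustrative, not from any GIS product):

```python
import numpy as np

class True3DImage:
    """A geo-referenced 'true 3D' image: radiometric values plus true 3D
    ground coordinates (X, Y, Z) for every pixel (illustrative container)."""
    def __init__(self, rgb, xyz):
        assert rgb.shape[:2] == xyz.shape[:2] and xyz.shape[2] == 3
        self.rgb, self.xyz = rgb, xyz

    def location_at(self, row, col):
        """Planimetric position (X, Y) readable directly from the image."""
        return tuple(self.xyz[row, col, :2])

    def height_at(self, row, col):
        """Elevation (Z) -- the third dimension that orthophotos lack."""
        return float(self.xyz[row, col, 2])

# A 2x2 toy image: one row of ground pixels (Z = 12 m) and one row of
# building-roof pixels (Z = 37 m), with hypothetical UTM-like coordinates.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
xyz = np.array([[[500100.0, 4100200.0, 12.0], [500101.0, 4100200.0, 12.0]],
                [[500100.0, 4100201.0, 37.0], [500101.0, 4100201.0, 37.0]]])
img = True3DImage(rgb, xyz)
```

Reading a building height then becomes a single per-pixel lookup rather than a separate DEM query.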

  1. Development of a 3D pixel module for an ultralarge screen 3D display

    NASA Astrophysics Data System (ADS)

    Hashiba, Toshihiko; Takaki, Yasuhiro

    2004-10-01

A large-screen 2D display used at stadiums and theaters consists of a number of pixel modules, each usually consisting of 8x8 or 16x16 LED pixels. In this study we develop a 3D pixel module in order to construct a large-screen 3D display that is glasses-free and provides motion parallax. This configuration dramatically reduces the complexity of wiring the 3D pixels. The 3D pixel module consists of several LCD panels, several cylindrical lenses, and one small PC. The LCD panels are slanted in order to differentiate the distances from same-color pixels to the axis of the cylindrical lens, so that the rays from same-color pixels are refracted into different horizontal directions by the cylindrical lens. We constructed a prototype 3D pixel module consisting of 8x4 3D pixels. The prototype module is designed to display 300 different patterns into different horizontal directions with a horizontal display angle pitch of 0.099 degrees. The LCD panels are controlled by the small PC, and the 3D image data is transmitted over Gigabit Ethernet.
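With the figures quoted above (300 patterns at a 0.099-degree pitch), each 3D pixel's set of horizontal emission directions can be tabulated directly; centring the fan on the screen normal is an assumption, not stated in the abstract:

```python
# Horizontal ray directions of one 3D pixel: 300 patterns at a
# 0.099-degree pitch (numbers from the abstract), assumed centred
# on the screen normal.
N_PATTERNS = 300
PITCH_DEG = 0.099

angles = [(i - (N_PATTERNS - 1) / 2) * PITCH_DEG for i in range(N_PATTERNS)]
total_fan = (N_PATTERNS - 1) * PITCH_DEG   # full angular fan, in degrees
```

The resulting viewing fan spans roughly 29.6 degrees, which indicates the horizontal zone within which a walking viewer sees smooth motion parallax.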

  2. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

We present a general framework for the modeling and optimization of scalable large-format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, or combinations of the two) without manual adjustment. For the first time, the framework provides a unified paradigm that is agnostic to the particular configuration of projectors yet robustly optimizes the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high-resolution stereoscopic video at real-time interactive frame rates on commodity graphics hardware. Through complementary polarization, the framework creates high-quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  3. 3D display based on parallax barrier with multiview zones.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wang, Jun

    2014-03-01

A 3D display based on a parallax barrier with multiview zones is proposed. This display consists of a 2D display panel and a parallax barrier. The basic element of the parallax barrier has three narrow slits, which show three columns of subpixels on the 2D display panel and form 3D pixels. The parallax barrier provides multiview zones, in which the proposed 3D display can use a small number of views to achieve a high density of views. Therefore, the distance between views is the same as in conventional displays with more views. Because the proposed display has fewer views, more 3D pixels are available in the 3D images, so the resolution and brightness are higher than in conventional displays. A 12-view prototype of the proposed 3D display is developed, and it provides the same density of views as a conventional display with 28 views. Experimental results show that the proposed display has higher resolution and brightness than the conventional one, while crosstalk remains at a low level.
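The routing of panel subpixel columns to views behind a vertical parallax barrier can be sketched with the standard column-interlacing rule; this is a simplification that ignores barrier slant and the multi-zone slit geometry of the proposed design:

```python
import numpy as np

def view_index(col, n_views, offset=0):
    """View to which a subpixel column is routed by a vertical barrier."""
    return (col + offset) % n_views

def interlace(views):
    """Weave N single-view images into one panel image, column by column."""
    n = len(views)
    panel = np.empty_like(views[0])
    for c in range(panel.shape[1]):
        panel[:, c] = views[view_index(c, n)][:, c]
    return panel
```

Fewer views in the modulus means each view keeps more panel columns, which is the resolution argument made in the abstract.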

  4. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  5. Dual side transparent OLED 3D display using Gabor super-lens

    NASA Astrophysics Data System (ADS)

    Chestak, Sergey; Kim, Dae-Sik; Cho, Sung-Woo

    2015-03-01

We devised a dual-side transparent 3D display using a transparent OLED panel and two lenticular arrays. The OLED panel is sandwiched between two parallel confocal lenticular arrays, forming a Gabor super-lens. The display provides dual-side stereoscopic 3D imaging and a floating image of an object placed behind it. The floating image can be superimposed on the displayed 3D image. The displayed autostereoscopic 3D images are composed of four views, each with a resolution of 64x90 pixels.

  6. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

This research focuses on the conversion of stereoscopic video material into an image + depth format suitable for rendering on the multiview autostereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context, the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities through the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved to be an excellent match for our 3D displays.
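The core idea, several matching footprints producing depth candidates per pixel that are then robustly fused, can be illustrated with a toy SSD block matcher; the median fusion below is a stand-in for the paper's surface-filtering stage, and all window sizes are assumptions:

```python
import numpy as np

def multi_footprint_disparity(left, right, row, col, max_d, wins=(1, 2, 4)):
    """Depth (disparity) candidates from several matching-window sizes
    ('footprints'), fused here by a simple median (illustrative sketch).
    Assumes the pixel is far enough from the image border for all windows."""
    candidates = []
    for win in wins:
        costs = []
        for d in range(max_d + 1):
            pl = left[row - win:row + win + 1, col - win:col + win + 1]
            pr = right[row - win:row + win + 1, col - d - win:col - d + win + 1]
            costs.append(np.sum((pl.astype(float) - pr.astype(float)) ** 2))
        candidates.append(int(np.argmin(costs)))   # best SSD for this footprint
    return int(np.median(candidates)), candidates
```

Small footprints localize depth edges well but are noisy; large footprints are stable but blur discontinuities, which is why combining several candidates pays off.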

  7. Real-Time Display Of 3-D Computed Holograms By Scanning The Image Of An Acousto-Optic Modulator

    NASA Astrophysics Data System (ADS)

    Kollin, Joel S.; Benton, Stephen A.; Jepsen, Mary Lou

    1989-10-01

The invention of holography sparked hopes for a three-dimensional electronic imaging system analogous to television. Unfortunately, the extraordinary spatial detail of ordinary holographic recordings requires unattainable bandwidth and display resolution for three-dimensional moving imagery, effectively preventing its commercial development. However, the essential bandwidth of holographic images can be reduced enough to permit their transmission through fiber optic or coaxial cable, and the required resolution, or space-bandwidth product, of the display can be obtained by raster scanning the image of a commercially available acousto-optic modulator. No film recording or other photographic intermediate step is necessary, as the projected modulator image is viewed directly. The design and construction of a working demonstration of the principles involved is presented, along with a discussion of engineering considerations in the system design. Finally, the theoretical and practical limitations of the system are addressed in the context of extending it to real-time transmission of moving holograms synthesized from views of real and computer-generated three-dimensional scenes.

  8. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    PubMed

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor than a rectangular lens array; however, each pixel of an elemental image must be assigned to a single hexagonal lens, so generating the entire EIA requires a very large amount of computation. The proposed method reduces the processing time for the EIAs of a given hexagonal lens array. The proposed image space parallel processing (ISPP) method enhances the processing speed enough to generate a real-time interactive integral-imaging 3D display for a hexagonal lens array. In our experiments, we generated the EIAs for a hexagonal lens array in real time and obtained good processing times for large volume data and multiple lens-array configurations.
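The per-pixel work that makes hexagonal EIA generation expensive is deciding which hexagonal lens owns each pixel. The standard axial-coordinate rounding used on hexagonal grids does exactly this test; the sketch below shows the geometry (pointy-top hexagons assumed), not the paper's OpenCL kernel:

```python
import math

def cube_round(q, r):
    """Round fractional axial coordinates to the nearest hex centre
    (standard cube-coordinate rounding)."""
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def hex_lens_index(x, y, pitch):
    """Axial index (q, r) of the hexagonal lens owning point (x, y) in a
    pointy-top array with centre-to-centre distance `pitch`."""
    size = pitch / math.sqrt(3.0)                 # centre-to-corner distance
    q = (math.sqrt(3.0) / 3.0 * x - y / 3.0) / size
    r = (2.0 / 3.0 * y) / size
    return cube_round(q, r)
```

Because the test is independent per pixel, it parallelizes trivially, which is what an OpenCL work-item-per-pixel mapping exploits.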

  9. Multi-view 3D display using waveguides

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Chang-Kun

    2015-07-01

We propose a multi-projection based multi-view 3D display system using an optical waveguide. The images from the projection units, incident at an angle satisfying the total internal reflection (TIR) condition, enter the waveguide and undergo multiple reflections at the interface by TIR. As a result of the multiple reflections in the waveguide, the projection distance in the horizontal direction is effectively reduced to the thickness of the waveguide, making a compact projection display system possible. By aligning the projector array at the entrance of the waveguide, a multi-view 3D display system based on multiple projectors is realized with a minimized structure. Viewing zones are generated by combining the waveguide projection system, a vertical diffuser, and a Fresnel lens. In the experimental setup, the feasibility of the proposed method is verified and a ten-view 3D display system with a compact projection space is implemented.
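The TIR condition the projected rays must satisfy is just Snell's law at the critical angle. A quick sketch (a glass guide in air is assumed; the bounce-count helper is an illustrative geometric estimate):

```python
import math

def critical_angle_deg(n_core, n_clad=1.0):
    """Smallest internal angle (measured from the surface normal) at which
    total internal reflection occurs at the core/cladding interface."""
    return math.degrees(math.asin(n_clad / n_core))

def tir_bounces(guide_length, thickness, angle_deg):
    """Rough number of reflections a guided ray makes: each bounce
    advances the ray along the guide by thickness * tan(angle)."""
    return int(guide_length // (thickness * math.tan(math.radians(angle_deg))))
```

For typical glass (n ~ 1.5) the critical angle is about 41.8 degrees, so rays steeper than that from the normal stay trapped while the folded path compresses the projection distance.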

  10. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

Multiview three-dimensional (3D) displays are able to provide horizontal parallax to viewers, with high-resolution and full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating the image for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) design, multiple views are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. The single projector is therefore able to generate an equivalent number of multiview images from multiple viewing directions, fulfilling the task of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also avoids the time-consuming procedures of multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
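The feasibility of replacing N projectors with one hinges on a simple rate budget: the single projector must deliver every view within one per-view refresh period. A back-of-the-envelope sketch (the 60 Hz per-view figure is an assumed flicker-free target, not from the article):

```python
def required_projector_rate(n_views, per_view_hz=60.0):
    """Frame rate one time-multiplexed projector needs so that each of
    n_views gets its own per_view_hz refresh."""
    return n_views * per_view_hz

def max_views(projector_hz, per_view_hz=60.0):
    """Largest view count a given high-speed projector can serve."""
    return int(projector_hz // per_view_hz)
```

A 32-view system at 60 Hz per view already demands a 1920 Hz projector, which is why high-speed DMD-class light engines are the natural fit for SPM designs.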

  11. Implementation of active-type Lamina 3D display system.

    PubMed

    Yoon, Sangcheol; Baek, Hogil; Min, Sung-Wook; Park, Soon-Gi; Park, Min-Kyu; Yoo, Seong-Hyeon; Kim, Hak-Rin; Lee, Byoungho

    2015-06-15

The Lamina 3D display is a new type of multi-layer 3D display that utilizes the polarization state as a new dimension for depth information. The Lamina 3D display system has several advantageous properties: it reduces the amount of data needed to represent a 3D image, it can easily be built using conventional projectors, and it has the potential to be applied in many applications. However, the system can be limited in depth range and viewing angle by the properties of the volume components. In this paper, we propose a volume composed of layers of switchable diffusers to implement an active-type Lamina 3D display system. Because the diffusing rate of the layers is independent of the polarization state, a polarizer wheel is applied to the proposed system to synchronize each sectioned image with the diffusing layer at its designated location. The imaging volume of the proposed system consists of five layers of polymer-dispersed liquid crystal, and the total size of the implemented volume is 24 x 18 x 12 mm³. The proposed system achieves improved viewing qualities such as enhanced depth expression and a widened viewing angle.

  12. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perception that otherwise might be lacking. In addition, the third dimension can be used as an additional axis along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste; in the last case, the source of the stereo images has generally been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays to aerospace crew stations for the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, this lab research is necessary to determine where stereo 3-D enhances the display of information and how such displays should be formatted.

  13. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  14. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems, and so on. 3D imaging display systems come in two presentation types: systems that use special glasses and monitor systems that require no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area of the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. Thus a conventional display can show only one screen and cannot enlarge the display area, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to see a virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of image separating techniques for 3D imaging using polarized glasses, a parallax barrier, or a lenticular lens screen. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  15. Analysis of temporal stability of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

An analysis has been made of the stability of the images generated by electronic autostereoscopic 3D displays, studying the time course of the photometric and colorimetric parameters. The measurements were made following the procedure recommended in the European guideline EN 61747-6 for the characterization of electronic liquid-crystal displays (LCDs). The study uses three models of autostereoscopic 3D displays of different sizes and pixel counts, taking the measurements with a spectroradiometer (PR-670 SpectraScan, Photo Research). For each of the displays, the time course of the tristimulus values and the chromaticity coordinates in the CIE 1931 XYZ system is shown, and the time periods required to reach stable values of these parameters are presented. To analyze how the procedure recommended in guideline EN 61747-6 for 2D displays influenced the results, and to adapt the procedure to the characterization of 3D displays, the experimental conditions of the standard procedure were varied, performing the stability analysis in the two ocular channels (RE and LE) of the 3D mode and comparing the results with those corresponding to 2D. The results of our study show that the stabilization time of an autostereoscopic 3D display with parallax-barrier technology depends on the tristimulus value analyzed (X, Y, Z) as well as on the presentation mode (2D, 3D); furthermore, in 3D mode it also depends on the ocular channel evaluated (RE, LE).
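Reading a stabilization time off a warm-up curve, i.e. the first instant after which a measured value stays within a tolerance of its final reading, can be automated. A sketch, under the assumption that the last sample represents the stable value (the 1% tolerance and the synthetic exponential warm-up are illustrative, not from the study):

```python
import numpy as np

def stabilization_time(t, y, rel_tol=0.01):
    """First time after which y stays within rel_tol of its final value."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    outside = np.abs(y - y[-1]) > rel_tol * abs(y[-1])
    if not outside.any():
        return t[0]                                  # stable from the start
    return t[int(np.max(np.nonzero(outside))) + 1]   # first always-inside time

# Synthetic warm-up: luminance rising exponentially toward 100 cd/m^2.
t = np.arange(61.0)                                  # minutes
y = 100.0 * (1.0 - np.exp(-t / 5.0))
```

The same routine applied separately to X, Y, and Z traces (and to each ocular channel) reproduces the kind of per-parameter comparison the study reports.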

  16. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses, which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. When two pinholes equipped with color filters are used per eye, the technique can be applied on a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the eye's natural spatial resolution limit because of the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially for displaying close objects that cannot be displayed and comfortably viewed on regular 3D TV and cinema displays.

  17. 3D Display Calibration by Visual Pattern Analysis.

    PubMed

    Hwang, Hyoseok; Chang, Hyun Sung; Nam, Dongkyung; Kweon, In So

    2017-02-06

Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not perform as intended, and the observed images tend to be distorted. In this paper, we propose a novel display calibration method to fix the situation. In our method, a pattern image is displayed on the panel and a camera takes pictures of it twice, at different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slanted angle, gap or thickness, offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results, and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is accurate, about half an order of magnitude better than prior work; efficient, spending less than 2 s on computation; and robust to noise, working well at SNRs as low as 6 dB.
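The frequency-domain analysis can be illustrated in 1-D: for a periodic calibration pattern, the pitch falls out of the location of the dominant spectral peak. This is a toy version of the idea (the paper's method also recovers slant, gap, and offset, and works in 2-D):

```python
import numpy as np

def estimate_pitch(pattern_row):
    """Estimate the period (in pixels) of a periodic calibration pattern
    from the dominant peak of its amplitude spectrum."""
    spec = np.abs(np.fft.rfft(pattern_row - pattern_row.mean()))
    k = int(np.argmax(spec[1:])) + 1      # skip the DC bin
    return len(pattern_row) / k           # period in pixels per cycle

# Synthetic pattern with a known 8-pixel pitch.
x = np.arange(1024)
row = 0.5 + 0.5 * np.cos(2 * np.pi * x / 8.0)
```

A spectral peak is far less sensitive to per-pixel noise than direct edge detection, which is the robustness argument for working in the frequency domain.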

  18. Computational challenges of emerging novel true 3D holographic displays

    NASA Astrophysics Data System (ADS)

    Cameron, Colin D.; Pain, Douglas A.; Stanley, Maurice; Slinger, Christopher W.

    2000-11-01

A hologram can produce all the 3D depth cues that the human visual system uses to interpret and perceive real 3D objects; as such, it is arguably the ultimate display technology. Computer-generated holography, in which a computer calculates a hologram that is then displayed using a highly complex modulator, combines the qualities of a traditional hologram with the dynamic capabilities of a computer display, producing a true 3D real image floating in space. This technology is set to emerge over the next decade, potentially revolutionizing application areas such as virtual prototyping (CAD-CAM, CAID, etc.), tactical information displays, data visualization and simulation. In this paper we focus on the computational challenges of this technology. We consider different classes of computational algorithms, from true computer-generated holograms (CGH) to holographic stereograms. Each has different characteristics in terms of image quality, computational resources required, total CGH information content, and system performance. Possible trade-offs, including reduced parallax, will be discussed. The software and hardware architectures used to implement the CGH algorithms can take many forms; different schemes, from high-performance computing architectures to graphics-based cluster architectures, will be discussed and compared. Current and future trends will be assessed, looking forward to a practical dynamic CGH based 3D display.
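A sense of why true CGH is computationally hard comes from the simplest algorithm: brute-force superposition of Fresnel point-source contributions, whose cost scales as O(scene points x hologram pixels). The sketch below is a generic reference implementation, not the paper's code; all dimensions are illustrative:

```python
import numpy as np

def point_source_cgh(points, nx, ny, pitch, wavelength):
    """Brute-force Fresnel CGH: accumulate a paraxial spherical wavefront
    on the hologram plane for each scene point (x, y, z, amplitude)."""
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz, amp in points:
        # Fresnel (paraxial) approximation of the point-to-pixel distance
        r = pz + ((X - px) ** 2 + (Y - py) ** 2) / (2.0 * pz)
        field += (amp / pz) * np.exp(1j * k * r)
    return field

# One on-axis point 10 cm behind a small 64x64 hologram patch.
field = point_source_cgh([(0.0, 0.0, 0.1, 1.0)],
                         nx=64, ny=64, pitch=8e-6, wavelength=633e-9)
```

Scaling this loop to display-sized holograms and thousands of scene points is exactly what drives the HPC and GPU-cluster architectures the paper compares.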

  19. Optical characterization of different types of 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

All 3D displays rely on the same intrinsic method to induce depth perception: they provide different images to the left and right eyes of the observer to obtain the stereoscopic effect. The three most common solutions already available on the market are active-glasses, passive-glasses and autostereoscopic 3D displays. These three types of displays are based on different physical principles (polarization, time selection or spatial emission) and consequently require different measurement instruments and techniques. In this paper, we present some of these solutions and the technical characteristics that can be obtained to compare the displays. We show in particular that local and global measurements can be made in all three cases to access different characteristics. We also discuss the new technologies currently under development and their needs in terms of optical characterization.

  20. 3D electrohydrodynamic simulation of electrowetting displays

    NASA Astrophysics Data System (ADS)

    Hsieh, Wan-Lin; Lin, Chi-Hao; Lo, Kuo-Lung; Lee, Kuo-Chang; Cheng, Wei-Yuan; Chen, Kuo-Ching

    2014-12-01

    The fluid dynamic behavior within a pixel of an electrowetting display (EWD) is thoroughly investigated through a 3D simulation. By coupling the electrohydrodynamic (EHD) force deduced from the Maxwell stress tensor with the laminar phase field of the oil-water dual phase, the complete switch processes of an EWD, including the break-up and the electrowetting stages in the switch-on process (with voltage) and the oil spreading in the switch-off process (without voltage), are successfully simulated. By considering the factor of the change in the apparent contact angle at the contact line, the electro-optic performance obtained from the simulation is found to agree well with its corresponding experiment. The proposed model is used to parametrically predict the effect of interfacial (e.g. contact angle of grid) and geometric (e.g. oil thickness and pixel size) properties on the defects of an EWD, such as oil dewetting patterns, oil overflow, and oil non-recovery. With the help of the defect analysis, a highly stable EWD is both experimentally realized and numerically analyzed.
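For reference, the EHD body force used in such simulations is the divergence of the Maxwell stress tensor; neglecting electrostriction, it reduces to the familiar Korteweg-Helmholtz form:

```latex
T_{ij} = \varepsilon E_i E_j - \tfrac{1}{2}\,\varepsilon\,\delta_{ij}\,E_k E_k,
\qquad
f_i = \frac{\partial T_{ij}}{\partial x_j}
\;\;\Longrightarrow\;\;
\mathbf{f} = \rho_f \mathbf{E} - \tfrac{1}{2}\,E^2\,\nabla\varepsilon
```

The second term, driven by the permittivity jump at the oil-water interface, is what pulls the conducting water phase into high-field regions during the break-up stage.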

  1. Reality and Surreality of 3-D Displays: Holodeck and Beyond

    DTIC Science & Technology

    2000-01-01

    Holodeck is the reality that significantly better 3D display systems are possible. Keywords: true 3D displays, multiplexed 2D display (autostereoscopic) ...displays still do not use them in their own offices. Thus, 3D approaches that are autostereoscopic (that is, no head gear is required) are preferred. A...challenges noted throughout the foregoing sections of this paper will be steadily overcome. True 3D, autostereoscopic (no head gear) monitors with usable

  2. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  3. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without resorting to different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays; our paper includes an overview from 1912 up to today. During several years of investigations on swept-volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a regular high school in Germany, we achieved considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare-earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). These crystals are, however, limited to a very small size, which is why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to this group, making it possible to increase both the display volume and the brightness of the images significantly. Although, our display is currently

  4. True 3D displays for avionics and mission crewstations

    NASA Astrophysics Data System (ADS)

    Sholler, Elizabeth A.; Meyer, Frederick M.; Lucente, Mark E.; Hopper, Darrel G.

    1997-07-01

    3D threat projection has been shown to decrease human recognition time for events, especially for a jet fighter pilot or C4I sensor operator, for whom early realization that a hostile threat condition exists is the basis of survival. Decreased threat-recognition time improves the survival rate and results from more effective presentation techniques, including the visual cue of a true 3D (T3D) display. The concept of a 'font' describes the approach adopted here, but whereas a 2D font comprises pixel bitmaps, a T3D font herein comprises a set of hologram bitmaps. The T3D font bitmaps are pre-computed, stored, and retrieved as needed to build images comprising symbols and/or characters. Human performance improvement, hologram generation for a T3D symbol font, projection requirements, and potential hardware implementation schemes are described. The goal is to employ computer-generated holography to create T3D depictions of dynamic threat environments using fieldable hardware.

  5. Stereopsis has the edge in 3-D displays

    NASA Astrophysics Data System (ADS)

    Piantanida, T. P.

    The results of studies conducted at SRI International to explore differences in image requirements for depth and form perception with 3-D displays are presented. Monocular and binocular stabilization of retinal images was used to separate form and depth perception and to eliminate the retinal disparity input to stereopsis. Results suggest that depth perception is dependent upon illumination edges in the retinal image that may be invisible to form perception, and that the perception of motion-in-depth may be inhibited by form perception, and may be influenced by subjective factors such as ocular dominance and learning.

  6. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  7. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  8. 3-D Imagery Cockpit Display Development

    DTIC Science & Technology

    1990-08-01

    display is needed. Good information - (3) Change from pictorial gauges to difficult to interpret. word warnings. Display EGT & OIL indicators at all times...indicator. Popped CBs. Information to be changed: Comments: (5) Nothing needs to be changed. Great format. (2) Standardize colors. Display is good. Use all ...sense? Any suggestions for changes? 6 Pilots: Good. 5 Pilots: Great! Don't change the format. 1 Pilot: Stores part great. 1 Pilot: Provides all the necessary

  9. Format for Interchange and Display of 3D Terrain Data

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Powell, Mark; Vona, Marsette; Norris, Jeffrey; Morrison, Jack

    2004-01-01

    Visible Scalable Terrain (ViSTa) is a software format for production, interchange, and display of three-dimensional (3D) terrain data acquired by stereoscopic cameras of robotic vision systems. ViSTa is designed to support scalability of data, accuracy of displayed terrain images, and optimal utilization of computational resources. In a ViSTa file, an area of terrain is represented, at one or more levels of detail, by coordinates of isolated points and/or vertices of triangles derived from a texture map that, in turn, is derived from original terrain images. Unlike prior terrain-image software formats, ViSTa includes provisions to ensure accuracy of texture coordinates. Whereas many such formats are based on 2.5-dimensional terrain models and impose additional regularity constraints on data, ViSTa is based on a 3D model without regularity constraints. Whereas many prior formats require external data for specifying image-data coordinate systems, ViSTa provides for the inclusion of coordinate-system data within data files. ViSTa supports high-speed loading and display within a Java program. ViSTa is designed to minimize file sizes and maximize compressibility and to support straightforward reduction of resolution to reduce file size for Internet-based distribution.

  10. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberrations severely affect the display performance of the auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution, and the display depth is obtained through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

  11. 3D head mount display with single panel

    NASA Astrophysics Data System (ADS)

    Wang, Yuchang; Huang, Junejei

    2014-09-01

    A head mount display for entertainment usually requires only light weight, but professional applications impose more requirements: image quality, field of view (FOV), color gamut, response time, and lifetime must also be considered. A head mount display based on a single-chip TI DMD spatial light modulator is proposed. The multiple light sources and the image-splitting relay system are the major design tasks. The relay system images the object (the DMD) onto two image planes to create binocular vision. A 0.65-inch 1080p DMD is adopted. The relay performs well and includes a doublet to reduce chromatic aberration. Space is reserved for the mirror and an adjustment mechanism. The mirror splits the rays toward the left and right image planes; these planes serve as the eyepiece objects and are imaged to the eyes. An adjustable mechanism provides variable interpupillary distance (IPD). The folded optical path keeps the HMD's center of gravity close to the head and prevents an uncomfortable downward force on the head or orbit. Two RGB LED assemblies illuminate the DMD at different angles. The light is highly collimated: the divergence angle is small enough that each LED's rays enter only the correct eyepiece. The switching is electronically controlled; with no moving parts to produce vibration, fast switching is possible. The two LEDs synchronize with the 3D video sync through a driving board that also controls the DMD. When the left-eye image is displayed on the DMD, the LED for the left optical path turns on, and vice versa for the right image, and the 3D scene is accomplished.

  12. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one kind of advanced display, and many TV manufacturers have been attempting to commercialize 3D TV. We have been developing 3D TV based on LCD with an LED BLU (backlight unit) since Samsung launched the world's first 3D TV based on PDP. However, the data scanning of the panel and the LC response characteristics of LCD TVs cause interference among frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk by LCD driving and backlight control of an FHD 3D LCD TV.

  13. Real-time hardware for a new 3D display

    NASA Astrophysics Data System (ADS)

    Kaufmann, B.; Akil, M.

    2006-02-01

    We describe in this article a new multi-view auto-stereoscopic display system with a real-time architecture to generate images of n different points of view of a 3D scene. This architecture generates all the points of view in a single generation process: the different pictures are not generated independently but all at the same time. The architecture builds a frame buffer that contains all the voxels with their three coordinates and regenerates the different pictures on demand from this frame buffer. The memory requirement is decreased because there is no redundant information in the buffer.
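    The single-buffer idea can be sketched as follows, assuming a hypothetical `render_view` function and a simple shear projection; this is an illustration of the concept (each voxel stored once, every view derived on demand), not the authors' hardware architecture:

```python
import numpy as np

def render_view(voxels, view_index, n_views, width, height,
                baseline=4.0, z_ref=10.0):
    """Project a shared voxel buffer (N x 3 array of x, y, z) into one of
    n_views images by applying a view-dependent horizontal parallax shift.
    The buffer holds each voxel once; views contain no redundant copies."""
    # camera offset in [-baseline/2, +baseline/2]
    offset = baseline * (view_index / (n_views - 1) - 0.5)
    img = np.zeros((height, width), dtype=np.uint8)
    x, y, z = voxels[:, 0], voxels[:, 1], voxels[:, 2]
    # simple shear projection: nearer voxels shift more between views
    xs = np.round(x + offset * (z_ref - z) / z_ref).astype(int)
    ys = np.round(y).astype(int)
    keep = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
    img[ys[keep], xs[keep]] = 255
    return img

voxels = np.array([[8.0, 4.0, 2.0], [8.0, 6.0, 8.0]])  # two depths
left = render_view(voxels, 0, 8, 16, 9)
right = render_view(voxels, 7, 8, 16, 9)
# The near voxel (z=2) moves farther between views than the deep one (z=8).
```

    The same buffer serves all n views; only the per-view shift changes.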

  14. Visual discomfort caused by color asymmetry in 3D displays

    NASA Astrophysics Data System (ADS)

    Chen, Zaiqing; Huang, Xiaoqiao; Tai, Yonghan; Shi, Junsheng; Yun, Lijun

    2016-10-01

    Color asymmetry is a common phenomenon in 3D displays which can cause serious visual discomfort. To ensure safe and comfortable stereo viewing, the color difference between the left and right eyes should not exceed a threshold value, named the comfortable color difference limit (CCDL). In this paper, we have experimentally measured the CCDL for five sample color points selected from the 1976 CIE u'v' chromaticity diagram. A psychophysical experiment was conducted in which human observers viewed brief presentations of color-asymmetric image pairs. In these pairs, left and right circular patches were horizontally shifted on image pixels with five levels of disparity (0, ±60, ±120 arc minutes) along six color directions. The experimental results showed that the CCDL for each sample point varied with the level of disparity and the color direction. The minimum CCDL is 0.019 Δu'v', and the maximum is 0.133 Δu'v'. The database collected in this study may help 3D system design and 3D content creation.
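    The Δu'v' quantity in which the CCDLs are expressed follows the standard CIE 1976 chromaticity formulas, computed directly below (the threshold values such as 0.019 come from the abstract, not from this sketch):

```python
def xyz_to_uv_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity coordinates from tristimulus XYZ:
    u' = 4X / (X + 15Y + 3Z),  v' = 9Y / (X + 15Y + 3Z)."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def delta_uv_prime(xyz_left, xyz_right):
    """Euclidean color difference in the u'v' diagram between the
    stimuli shown to the left and right eyes."""
    ul, vl = xyz_to_uv_prime(*xyz_left)
    ur, vr = xyz_to_uv_prime(*xyz_right)
    return ((ul - ur) ** 2 + (vl - vr) ** 2) ** 0.5

# D65 white against itself: zero inter-ocular color difference.
d65 = (95.047, 100.0, 108.883)
print(delta_uv_prime(d65, d65))  # 0.0
```

    A measured Δu'v' between the two eyes' images would then be compared against the per-color, per-disparity CCDL values reported in the study.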

  15. Will true 3d display devices aid geologic interpretation. [Mirage

    SciTech Connect

    Nelson, H.R. Jr.

    1982-04-01

    A description is given of true 3D display devices and techniques that are being evaluated in various research laboratories around the world. These advances are closely tied to the expected application of 3D display devices as interpretational tools for explorationists. 34 refs.

  16. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15 and 30 cm behind it under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  17. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, subjective testing may bring a more correct evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would be beneficial to the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques to assign different crosstalk values between image pairs. It can be seen from the literature that the structure of a scene has a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. The structural changes of the left view and right view are then computed respectively and combined into an overall distortion map. Under the 3D viewing condition, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
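    The crosstalk simulation plus structural comparison can be illustrated with a first-order leakage model and a simplified single-window SSIM (the real SSIM operates on local windows, and the function names here are hypothetical):

```python
import numpy as np

def add_crosstalk(left, right, c):
    """Simulate system crosstalk: each eye's image leaks a fraction c
    of the other eye's image (a common first-order crosstalk model)."""
    return (1 - c) * left + c * right, (1 - c) * right + c * left

def global_ssim(a, b, L=1.0):
    """Single-window SSIM over the whole image. Simplified for
    illustration: the standard SSIM averages local windowed scores."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(0)
view_l = rng.random((32, 32))
view_r = np.roll(view_l, 2, axis=1)      # horizontally shifted stereo pair
dist_l, _ = add_crosstalk(view_l, view_r, 0.15)
s = global_ssim(view_l, dist_l)          # structural change due to leakage
s0 = global_ssim(view_l, view_l)         # identical images score ~1
```

    In the paper the per-pixel SSIM distortion maps of both views are further combined with a depth map to weight pop-out regions.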

  18. 30-view projection 3D display

    NASA Astrophysics Data System (ADS)

    Huang, Junejei; Wang, Yuchang

    2015-03-01

    A 30-view auto-stereoscopic display using an angle-magnifying screen is proposed. The small incident angle of lamp scanning from the exit pupil of the projection lens is magnified into a large field of view on the observing side. The lamp scanning is realized by the vibration of a Galvano-mirror that synchronizes with the frame rate of the DMD and reflects the laser illumination to the scanning angles. To achieve 15 views, a 3-chip DLP projector with a frame rate of 720 Hz is used. For one cycle of vibration of the Galvano-mirror, steps 0, 2, 4, 6, 8, 10, 12, 14 are reflected on the going path and steps 13, 11, 9, 7, 5, 3, 1 on the returning path. A frame is divided into two half parts of odd lines and even lines for two views. For each view, 48 half frames per second are provided. A projection lens with an aperture-relay module is used to double the lens aperture, separating the frame into the two half parts of even and odd lines. After going through the Philips prism and the three panels, the 15 scanning spots are doubled to 30 spots and emerge from the exit pupil of the projection lens. The 30 exiting light spots are projected to 30 viewing zones by the angle-magnifying screen. A rear-projection cabinet with two folding mirrors is used because a projection lens with a long throw distance is required.
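    The Galvano-mirror step sequence quoted above (even steps on the going path, odd steps on the return) can be generated and checked directly:

```python
def galvo_steps():
    """One vibration cycle of the Galvano-mirror as described in the
    abstract: even steps going, odd steps returning."""
    going = list(range(0, 15, 2))        # 0, 2, 4, 6, 8, 10, 12, 14
    returning = list(range(13, 0, -2))   # 13, 11, 9, 7, 5, 3, 1
    return going + returning

steps = galvo_steps()
# All 15 scan positions are visited exactly once per cycle.
print(sorted(steps) == list(range(15)))  # True
```

    Interleaving even and odd steps across the two half-cycles is what lets a single sweep direction pair cover all 15 positions without revisiting any.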

  19. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with slightly different perspectives, in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges in such optical devices is crosstalk between the two channels. Crosstalk occurs when the optical device does not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye sees a little of the left image; this results in eyestrain and headaches. A pair of interference filters worn as eyewear can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" means that the passband regions of one filter do not overlap with those of the other; instead, the regions are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) whose spectral bands coincide with the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of one filter are seen only by the eye that has the matching multiband filter. The inherent characteristics of the interference filters allow little or no transmission of the wrong side of the stereoscopic images.
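    The "conjugated" condition (non-overlapping, interdigitated passbands) can be checked mechanically; the band edges below are purely illustrative, not the filter design from the record:

```python
def conjugated(passbands_a, passbands_b):
    """Check that two multiband filters are 'conjugated': their passbands
    (lists of (lo, hi) wavelength intervals in nm) never overlap and
    alternate (interdigitate) along the spectrum."""
    tagged = sorted([(lo, hi, 'A') for lo, hi in passbands_a] +
                    [(lo, hi, 'B') for lo, hi in passbands_b])
    for (lo1, hi1, f1), (lo2, hi2, f2) in zip(tagged, tagged[1:]):
        if hi1 > lo2:        # spectral overlap -> inter-channel crosstalk
            return False
        if f1 == f2:         # two consecutive bands from the same filter
            return False     # -> not interdigitated
    return True

# Hypothetical interference-filter combs (values illustrative only):
left_eye = [(430, 450), (510, 530), (600, 620)]
right_eye = [(460, 480), (545, 565), (630, 650)]
print(conjugated(left_eye, right_eye))  # True
```

    Each eye's primaries must then be synthesized inside its own comb, so any light reaching the wrong eye falls into that filter's stopbands.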

  20. 3-D imaging of the CNS.

    PubMed

    Runge, V M; Gelblum, D Y; Wood, M L

    1990-01-01

    3-D gradient echo techniques, and in particular FLASH, represent a significant advance in MR imaging strategy, allowing thin-section, high-resolution imaging through a large region of interest. Anatomical areas of application include the brain, spine, and extremities, although the majority of work to date has been performed in the brain. Superior T1 contrast, and thus sensitivity to the presence of Gd-DTPA, is achieved with 3-D FLASH when compared to the 2-D spin echo technique. There is marked arterial and venous enhancement following Gd-DTPA administration on 3-D FLASH, a less common finding with 2-D spin echo. Enhancement of the falx and tentorium is also more prominent. From a single data acquisition, requiring less than 11 min of scan time, high-resolution reformatted sagittal, coronal, and axial images can be obtained in addition to sections in any arbitrary plane. Tissue segmentation techniques can be applied and lesions displayed in three dimensions. These results may lead to the replacement of 2-D spin echo with 3-D FLASH for high-resolution T1-weighted MR imaging of the CNS, particularly in the study of mass lesions and structural anomalies. The application of similar T2-weighted gradient echo techniques may follow; however, the signal-to-noise ratio which can be achieved remains a potential limitation.

  1. Auto-stereoscopic 3D displays with reduced crosstalk.

    PubMed

    Lee, Chulhee; Seo, Guiwon; Lee, Jonghwa; Han, Tae-hwan; Park, Jong Geun

    2011-11-21

    In this paper, we propose new auto-stereoscopic 3D displays that substantially reduce crosstalk. In general, it is difficult to eliminate crosstalk in auto-stereoscopic 3D displays. Ideally, the parallax barrier can eliminate crosstalk for a single viewer at the ideal position. However, due to variations in the viewing distance and the interpupillary distance, crosstalk is a problem in parallax barrier displays. We therefore propose 3-dimensional barriers, which can significantly reduce crosstalk.
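    The sensitivity to viewing distance mentioned above follows from first-order parallax-barrier geometry; a textbook sketch of a conventional two-view design (not the proposed 3-dimensional barrier) is:

```python
def barrier_geometry(pixel_pitch, viewing_distance, eye_separation):
    """First-order two-view parallax-barrier design from similar triangles:
      gap    g = p * D / e          (barrier-to-pixel gap)
      pitch  b = 2p * D / (D + g)   (slit pitch, slightly under 2p)
    All quantities in millimetres. Values below are illustrative only."""
    p, D, e = pixel_pitch, viewing_distance, eye_separation
    g = p * D / e
    b = 2.0 * p * D / (D + g)
    return g, b

g, b = barrier_geometry(pixel_pitch=0.1, viewing_distance=600.0,
                        eye_separation=65.0)
# The slit pitch is a hair less than twice the pixel pitch; that small
# difference is tuned to one design distance, which is why off-design
# viewing distances or interpupillary distances reintroduce crosstalk.
```

    The proposed 3-dimensional barriers are aimed precisely at relaxing this single-design-point dependence.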

  2. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. Moreover, with the momentum of the 3D era, stereoscopic 3D techniques have been combined with curved displays. However, despite the increased need for 3D function in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It is generally known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distance from the eyes of viewers to both edges of the screen are more natural on curved displays than on flat panels. With flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to the difference between the viewing distance from the center of the screen to the eyes and that from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  3. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully; and perhaps used only as is necessary to ensure good performance.

  4. 3D dynamic holographic display by modulating complex amplitude experimentally.

    PubMed

    Li, Xin; Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-09-09

    A complex amplitude modulation method is presented theoretically and performed experimentally for three-dimensional (3D) dynamic holographic display with reduced speckle, using a single phase-only spatial light modulator. The determination of essential factors is discussed based on the basic principle and theory. Numerical simulations and optical experiments are performed, in which static and animated objects without refinement on the surfaces and without random initial phases are reconstructed successfully. The results indicate that this method can reduce the speckle in reconstructed images effectively; furthermore, it does not cause internal structure in the reconstructed pixels. Since the complex amplitude modulation is based on the principle of the phase-only hologram, it does not need stringent alignment of pixels. This method can be used for high-resolution imaging or measurement in various optical areas.
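    One standard way to put a complex amplitude onto a phase-only SLM is double-phase decomposition; the abstract does not state that this is the authors' exact encoding, so the following is a generic sketch of the idea:

```python
import numpy as np

def double_phase(amplitude, phase):
    """Decompose a complex field A*exp(i*phi) (A in [0, 1]) into two
    pure-phase components whose average reproduces it:
        A*e^{i*phi} = (e^{i*(phi+d)} + e^{i*(phi-d)}) / 2,  d = arccos(A).
    A phase-only SLM can then display the two phase patterns on
    interleaved pixels to synthesize the complex modulation."""
    d = np.arccos(np.clip(amplitude, 0.0, 1.0))
    return phase + d, phase - d

rng = np.random.default_rng(1)
A = rng.random((8, 8))                       # target amplitude
phi = rng.uniform(-np.pi, np.pi, (8, 8))     # target phase
th1, th2 = double_phase(A, phi)
recon = 0.5 * (np.exp(1j * th1) + np.exp(1j * th2))
print(np.allclose(recon, A * np.exp(1j * phi)))  # True
```

    Because both components are unit-magnitude phasors, no amplitude mask is needed, which is what allows a single phase-only device to carry the full complex field.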

  5. Autostereoscopic 3D flat panel display using an LCD-pixel-associated parallax barrier

    NASA Astrophysics Data System (ADS)

    Chen, En-guo; Guo, Tai-liang

    2014-05-01

    This letter reports an autostereoscopic three-dimensional (3D) flat panel display system employing a newly designed LCD-pixel-associated parallax barrier (LPB). The barrier's parameters can be conveniently determined by the LCD pixels and can help to greatly simplify the conventional design. The optical system of the proposed 3D display is built and simulated to verify the design. For further experimental demonstration, a 508-mm autostereoscopic 3D display prototype is developed and it presents good stereoscopic images. Experimental results agree well with the simulation, which reveals a strong potential for 3D display applications.

  6. Depth-fused 3D imagery on an immaterial display.

    PubMed

    Lee, Cha; Diverdi, Stephen; Höllerer, Tobias

    2009-01-01

    We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, examining the potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype system in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors.
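    The depth-fused 3D principle referred to above splits one pixel's luminance between a front and a rear screen, with the ratio setting the perceived depth between the two planes; a minimal sketch of that weighting (function name hypothetical):

```python
def dfd_split(luminance, depth):
    """Depth-fused 3D: divide a pixel's luminance between a front and a
    rear screen. The luminance ratio sets the perceived depth between
    the two planes (depth = 0 -> all on front screen, 1 -> all on rear).
    Total luminance is preserved, which is what makes the two planes
    fuse into a single apparent surface."""
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must lie between the two screens (0..1)")
    front = (1.0 - depth) * luminance
    rear = depth * luminance
    return front, rear

# A pixel meant to appear midway between the screens:
f, r = dfd_split(200.0, 0.5)
print(f, r)  # 100.0 100.0
```

    The paper generalizes this two-plane weighting to screens in arbitrary configurations viewed from arbitrary viewpoints.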

  7. Special subpixel arrangement-based 3D display with high horizontal resolution.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wu, Fei

    2014-11-01

    A special subpixel arrangement-based 3D display is proposed. This display consists of a 2D display panel and a parallax barrier. On the 2D display panel, subpixels have a special arrangement, so they can redefine the formation of color pixels. This subpixel arrangement can bring about triple horizontal resolution for a conventional 2D display panel. Therefore, when these pixels are modulated by the parallax barrier, the 3D images formed also have triple horizontal resolution. A prototype of this display is developed. Experimental results show that this display with triple horizontal resolution can produce a better display effect than the conventional one.
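    The resolution argument can be illustrated with a toy model in which each R, G, B subpixel becomes an independently addressable column (a conceptual sketch of the tripling, not the proposed optical design):

```python
import numpy as np

def subpixels_as_columns(rgb_panel):
    """Treat every R, G, B subpixel of a 2D panel as its own addressable
    column, tripling horizontal addressability: an (H, W, 3) pixel array
    becomes an (H, 3*W) subpixel array in panel stripe order."""
    h, w, c = rgb_panel.shape
    assert c == 3, "expects an RGB panel"
    return rgb_panel.reshape(h, w * 3)

panel = np.zeros((2, 4, 3), dtype=np.uint8)   # 2 x 4 pixels, RGB stripes
flat = subpixels_as_columns(panel)
print(flat.shape)  # (2, 12) -> 12 horizontally addressable subpixel columns
```

    When the parallax barrier then samples these redefined columns per view, each view inherits the tripled horizontal sampling rather than one third of the native pixel count.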

  8. LED projection architectures for stereoscopic and multiview 3D displays

    NASA Astrophysics Data System (ADS)

    Meuret, Youri; Bogaert, Lawrence; Roelandt, Stijn; Vanderheijden, Jana; Avci, Aykut; De Smet, Herbert; Thienpont, Hugo

    2010-04-01

    LED-based projection systems have several interesting features: an extended color gamut, long lifetime, robustness and a fast turn-on time. However, the possibility of developing compact projectors remains the most important driving force behind LED projection. This is related to the limited light output of LED projectors, a consequence of the relatively low luminance of LEDs compared to high-intensity discharge lamps. We have investigated several LED projection architectures for the development of new 3D visualization displays. Polarization-based stereoscopic projection displays are often implemented using two identical projectors with passive polarizers at the output of their projection lenses. We have designed and built a prototype of a stereoscopic projection system that incorporates the functionality of both projectors. The system uses high-resolution liquid-crystal-on-silicon light valves and an illumination system with LEDs. The possibility of adding an extra LED illumination channel was also investigated for this optical configuration. Multiview projection displays allow the visualization of 3D images for multiple viewers without the need to wear special eyeglasses. Systems with a large number of viewing zones have already been demonstrated; such systems often use multiple projection engines. We have investigated a projection architecture that uses only one digital micromirror device and an LED-based illumination system to create multiple viewing zones. The system is based on time-sequential modulation of the different images for each viewing zone and a special projection screen with micro-optical features. We analyze the limitations of LED-based illumination for the investigated stereoscopic and multiview projection systems and discuss the potential of laser-based illumination.

  9. Comparison of 2D and 3D Displays and Sensor Fusion for Threat Detection, Surveillance, and Telepresence

    DTIC Science & Technology

    2003-05-19

Comparison of 2D and 3D displays and sensor fusion for threat detection, surveillance, and telepresence T. Meitzler, Ph.D., D. Bednarz, Ph.D., K...camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D...technologies that take advantage of 3D and sensor fusion will be discussed. 1. INTRODUCTION Computer driven interactive 3D imaging has made

  10. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high-quality homographies between pairs of imagers to develop a globally optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
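The pairwise homographies this calibration builds on can be estimated from point correspondences with the standard direct linear transform (DLT). A minimal numpy sketch of that general technique (an illustration, not the authors' implementation):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H, dividing out the projective scale."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice one would normalize the point coordinates before building `A` (Hartley normalization) and use robust estimation against outliers; this sketch shows only the core linear step.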

  11. Long-range 3D display using a collimated multi-layer display.

    PubMed

    Park, Soon-Gi; Yamaguchi, Yuta; Nakamura, Junya; Lee, Byoungho; Takaki, Yasuhiro

    2016-10-03

We propose a long-range three-dimensional (3D) display using collimated optics with a multi-plane configuration. By using a spherical screen and a collimating lens, users observe a collimated image of the spherical screen, which simulates an image plane located at optical infinity. By combining and modulating overlapped multi-plane images, the observed image is located at the desired depth within the volume spanned by the multiple planes. The feasibility of the system is demonstrated with an experimental setup composed of a planar and a spherical screen with a collimating lens. In addition, the accommodation properties of the proposed system are demonstrated according to the depth-modulation method.
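Multi-plane depth modulation of this kind is commonly realized by splitting each pixel's luminance between two planes so the fused image appears at an intermediate depth. A sketch of that standard depth-fused weighting rule (the paper's exact modulation method may differ):

```python
def depth_fuse(luminance, depth, d_front, d_rear):
    """Split a pixel's luminance between front and rear image planes.

    The perceived depth of the fused pixel moves from d_front toward
    d_rear as more of the luminance is assigned to the rear plane;
    total luminance is conserved. Depths are clipped to the plane span.
    """
    d = min(max(depth, d_front), d_rear)
    w_rear = (d - d_front) / (d_rear - d_front)  # 0 at front, 1 at rear
    return luminance * (1.0 - w_rear), luminance * w_rear
```

A pixel targeted midway between the planes receives half its luminance on each plane, which is what produces the depth-fused percept.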

  12. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  13. Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array.

    PubMed

    Choi, Heejin; Park, Jae-Hyeung; Kim, Joohwan; Cho, Seong-Woo; Lee, Byoungho

    2005-10-17

A wide-viewing-angle 3D/2D convertible display system with a thin structure is proposed that can display both three-dimensional and two-dimensional images. With a transparent display device placed in front of a conventional integral imaging system, it is possible to display planar images using the conventional system as a backlight source. The proposed method is verified experimentally and compared with the conventional one.

  14. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

SYNOPSIS Significant progress has been made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future of clinical use. With effective flow-suppression techniques, a choice of different contrast-weighted acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging-plane and view-angle analysis, large coverage, multiple-vascular-bed capability, and can even be used for fast screening. PMID:26610656

  15. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  16. Generation of flat viewing zone in DFVZ autostereoscopic multiview 3D display by weighting factor

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ky-Hyuk

    2013-05-01

A new method is introduced to reduce crosstalk problems and brightness variation in the 3D image by means of the dynamic fusion of viewing zones (DFVZ) using a weighting factor. The new method effectively generates a flat viewing zone at the center of the viewing zone. The new type of autostereoscopic 3D display exhibits less brightness variation of the 3D image when the observer moves.

  17. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  18. User benefits of visualization with 3-D stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Wichansky, Anna M.

    1991-08-01

The power of today's supercomputers promises tremendous benefits to users in terms of productivity, creativity, and excitement in computing. A study of a stereoscopic display system for computer workstations was conducted with 20 users and third-party software developers to determine whether 3-D stereo displays were perceived as better than flat, 2-1/2D displays. Users perceived more benefits of 3-D stereo in applications such as molecular modeling and cell biology, which involve viewing complex, abstract, amorphous objects. Users typically mentioned clearer visualization and better understanding of data, easier recognition of form and pattern, and more fun and excitement at work as the chief benefits of stereo displays. Human factors issues affecting the usefulness of stereo included the use of 3-D glasses over regular eyeglasses, difficulties in group viewing, lack of portability, and the need for better input devices. The future marketability of 3-D stereo displays would be improved by eliminating the need for users to wear equipment, reducing cost, and identifying markets where the abstract display value can be maximized.

  19. Application of a 3D volumetric display for radiation therapy treatment planning I: quality assurance procedures.

    PubMed

    Gong, Xing; Kirk, Michael Collins; Napoli, Josh; Stutsman, Sandy; Zusag, Tom; Khelashvili, Gocha; Chu, James

    2009-07-17

To design and implement a set of quality assurance tests for an innovative 3D volumetric display for radiation treatment planning applications. A genuine 3D display (Perspecta Spatial 3D, Actuality-Systems Inc., Bedford, MA) has been integrated with the Pinnacle TPS (Philips Medical Systems, Madison, WI) for treatment planning. The Perspecta 3D display renders a 25 cm diameter volume, viewable from any side, floating within a translucent dome. In addition to displaying all 3D data exported from Pinnacle, the system provides a 3D mouse to define beam angles and apertures and to measure distance. The focus of this work is the design and implementation of a quality assurance program for 3D displays and specific 3D planning issues as guided by AAPM Task Group Report 53. A series of acceptance and quality assurance tests has been designed to evaluate the accuracy of CT images, contours, beams, and dose distributions as displayed on Perspecta. Three-dimensional matrices, rulers, and phantoms with known spatial dimensions were used to check Perspecta's absolute spatial accuracy. In addition, a system of tests was designed to confirm Perspecta's ability to import and display Pinnacle data consistently. CT scans of phantoms were used to confirm beam field size, divergence, and gantry and couch angular accuracy as displayed on Perspecta. Beam angles were verified through Cartesian coordinate measurements and by CT scans of phantoms rotated at known angles. Beams designed on Perspecta were exported to Pinnacle and checked for accuracy. Doses at sampled points were checked for consistency with Pinnacle and agreed within 1% or 1 mm. All data exported from Pinnacle to Perspecta were displayed consistently. The 3D spatial display of images, contours, and dose distributions was consistent with the Pinnacle display. When measured with the 3D ruler, distances between any two points calculated using Perspecta agreed with Pinnacle within the measurement error.
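The acceptance criterion quoted above (agreement within 1% or 1 mm) is the kind of check a QA script automates; an illustrative helper, not the authors' code:

```python
def within_tolerance(measured_mm, reference_mm, rel_tol=0.01, abs_tol=1.0):
    """Pass if two values agree within 1% (relative) OR 1 mm (absolute).

    measured_mm, reference_mm: distances (or doses, with abs_tol in the
    matching unit) to compare. Either criterion alone is sufficient,
    which keeps the check meaningful near zero.
    """
    diff = abs(measured_mm - reference_mm)
    return diff <= abs_tol or diff <= rel_tol * abs(reference_mm)
```

Using an "either/or" tolerance avoids failing small reference values on the relative test while still bounding large values tightly.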

  20. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    PubMed

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into its real and imaginary distributions, and two amplitude-only spatial light modulators (A-SLMs) are then employed to reconstruct the 3D images in real time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. Optical experiments further demonstrated that the proposed system has continuous and wide depth cues, which frees the observer from eye-fatigue problems. The dynamic display ability was also tested in the experiments, and the results showed the possibility of true 3D interactive display.
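The division of the wavefront into real and imaginary distributions can be sketched numerically. Since an amplitude-only SLM cannot display negative values, this hypothetical sketch shifts both parts by a common bias; the bias handling is an assumption of the sketch, not necessarily the paper's method:

```python
import numpy as np

def split_for_amplitude_slms(field):
    """Split a complex wavefront into two non-negative amplitude patterns.

    field: complex ndarray (the object wavefront).
    Returns (re_pattern, im_pattern, bias): the real and imaginary parts
    shifted by a common bias so both are non-negative, as an
    amplitude-only SLM requires. The bias adds a uniform background
    that the optical system is assumed to remove (sketch assumption).
    """
    re, im = field.real, field.imag
    bias = max(0.0, -min(re.min(), im.min()))
    return re + bias, im + bias, bias
```

Recombining the two patterns (with the imaginary channel optically delayed by a quarter wave) would reconstruct the original complex field up to the uniform bias term.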

  1. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  2. Front and rear projection autostereoscopic 3D displays based on lenticular sheets

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Zang, Shang-Fei; Qi, Lin

    2015-03-01

A front-projection autostereoscopic display is proposed. The display is composed of eight projectors and a 3D-image-guided screen comprising a lenticular sheet and a retro-reflective diffusion screen. Based on optical multiplexing and de-multiplexing, the optical functions of the 3D-image-guided screen are parallax-image interlacing and view separation, which make it capable of reconstructing 3D images without quality degradation from the front direction. The operating principle, optical design equations, and the correction method for the parallax images are given. A prototype of the front-projection autostereoscopic display is developed, which enhances brightness and 3D perception and improves space efficiency. The performance of this prototype is evaluated by measuring the luminance and crosstalk distributions along the horizontal direction at the optimum viewing distance. We also propose a rear-projection autostereoscopic display consisting of eight projectors, a projection screen, and two lenticular sheets. The operating principle and design equations are described in detail, and the parallax images are corrected by means of homography. A prototype of the rear-projection autostereoscopic display is developed, and the measured normalized luminance distributions of the viewing zones are given. The results agree well with the designed values. The prototype presents high-resolution, high-brightness 3D images. The research has potential applications in commercial entertainment and movies for realistic 3D perception.
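Parallax-image interlacing for a lenticular sheet assigns panel columns cyclically to the N parallax views. A minimal column-based sketch, ignoring the slanted-lens and sub-pixel refinements used in practice:

```python
import numpy as np

def interlace_views(views):
    """Interleave N parallax images column-by-column for a lenticular sheet.

    views: array of shape (N, H, W). The output has shape (H, N*W);
    panel column c is taken from view (c % N), using that view's
    column (c // N), so each lenticule covers one column per view.
    """
    n, h, w = views.shape
    out = np.empty((h, n * w), dtype=views.dtype)
    for c in range(n * w):
        out[:, c] = views[c % n, :, c // n]
    return out
```

The lenticular sheet then directs each column toward its viewing zone, performing the optical de-multiplexing of the interlaced pattern.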

  3. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

Three-dimensional (3D) vision has become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focus on one based on ray reproduction. This method needs many viewpoint images to achieve full parallax, because it displays a different image depending on the viewpoint. We propose to reduce wasted rays by limiting the projector's rays to the vicinity of the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array whose elements have different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the locus of the horizontal movement of the rays. In addition, we confirmed the switching of viewpoints and the convergence performance of rays in the vertical direction. We therefore confirmed that full parallax can be realized.

  4. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping that optimizes visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Toward this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global-and-local depth optimization solution to the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
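The global stage of such a remapping can be illustrated as a linear rescale of the source depth range into a target comfort range; a simplified sketch (the paper's actual two-stage optimization is more elaborate):

```python
def remap_depth(d, src_near, src_far, dst_near, dst_far):
    """Linearly remap depth d from [src_near, src_far] to [dst_near, dst_far].

    dst_near/dst_far would be chosen around the adjusted zero-disparity
    plane so the remapped scene stays inside the comfort zone; out-of-range
    input depths are clamped to the source interval first.
    """
    t = (d - src_near) / (src_far - src_near)
    t = min(max(t, 0.0), 1.0)
    return dst_near + t * (dst_far - dst_near)
```

Applying this per pixel to the depth map, then re-rendering the stereo pair from the remapped depths, yields the comfort-adjusted S3D output.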

  5. Data acquirement and remodeling on volumetric 3D emissive display system

    NASA Astrophysics Data System (ADS)

    Yao, Yi; Liu, Xu; Lin, Yuanfang; Zhang, Huangzhu; Zhang, Xiaojie; Liu, Xiangdong

    2005-01-01

Since present display technology projects 3D onto 2D, the eye is deprived of spatial information, so developing a true 3D display device represents a major advance for human vision. The monitor is based on an emissive panel with a 64×256 LED array. When rotated at a frequency of 10 Hz, it shows real 3D images with pixels at their exact spatial positions. This article presents a procedure by which the software processes a 3D object and converts it into volumetric 3D-formatted data for this system. For simulating the phenomenon on a PC, it also presents a program that remodels the object using OpenGL. An algorithm for faster processing and optimized rendering speed is also given. The monitor provides real 3D scenes with a free viewing angle. It can be expected that this development will have a strong impact on modern monitors and lead to a new world of display technology.
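Conversion to volumetric 3D-formatted data amounts to quantizing each point's cylindrical coordinates into an angular slice and an LED row/column. A sketch assuming a panel that sweeps the azimuth in uniform slices (panel geometry and slice count are illustrative, not taken from the paper):

```python
import math

def voxel_to_panel(x, y, z, n_slices=256, n_rows=64, n_cols=256,
                   radius=1.0, height=1.0):
    """Map a Cartesian point to (slice, row, col) on a spinning LED panel.

    Assumes the panel's columns span radius [0, radius], its rows span
    height [0, height], and a revolution is divided into n_slices
    angular positions (all hypothetical values for illustration).
    """
    theta = math.atan2(y, x) % (2 * math.pi)     # azimuth of the point
    r = math.hypot(x, y)                          # distance from axis
    s = int(theta / (2 * math.pi) * n_slices) % n_slices
    row = min(n_rows - 1, max(0, int(z / height * n_rows)))
    col = min(n_cols - 1, max(0, int(r / radius * n_cols)))
    return s, row, col
```

Running this over all voxels of a model produces, per angular slice, the 64×256 bitmap the spinning panel lights at that rotation angle.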

  6. Optimal 3D Viewing with Adaptive Stereo Displays for Advanced Telemanipulation

    NASA Technical Reports Server (NTRS)

    Lee, S.; Lakshmanan, S.; Ro, S.; Park, J.; Lee, C.

    1996-01-01

    A method of optimal 3D viewing based on adaptive displays of stereo images is presented for advanced telemanipulation. The method provides the viewer with the capability of accurately observing a virtual 3D object or local scene of his/her choice with minimum distortion.

  7. A novel time-multiplexed autostereoscopic multiview full resolution 3D display

    NASA Astrophysics Data System (ADS)

    Liou, Jian-Chiun; Chen, Fu-Hao

    2012-03-01

Many people believe that autostereoscopic 3D displays will become a mainstream display type in the future. Achieving higher-quality 3D images requires both higher panel resolution and more viewing zones; consequently, 3D display systems involve enormous amounts of data transfer. We propose and experimentally demonstrate a novel time-multiplexed autostereoscopic multi-view full-resolution 3D display based on a lenticular lens array in association with an actively controlled dynamic LED backlight. The lenticular lenses of the array receive the light and deflect it into each viewing zone in a time sequence. The crosstalk under different observation scanning angles is shown, including the case of 4-view field scanning. The crosstalk of each viewing zone is about 5%; these results are better than those of other 3D display types.

  8. Controllable liquid crystal gratings for an adaptive 2D/3D auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Zhang, Y. A.; Jin, T.; He, L. C.; Chu, Z. H.; Guo, T. L.; Zhou, X. T.; Lin, Z. X.

    2017-02-01

2D/3D switchable, viewpoint-controllable, and 2D/3D localizable autostereoscopic displays based on controllable liquid crystal gratings are proposed in this work. Using a dual-layer staggered electrode structure on the top and bottom substrates of a liquid crystal cell, the ratio between the transmitting region and the shielding region can be selectively controlled by the corresponding driving circuit, which means that 2D/3D switching and 3D video sources with different disparity images can be shown on the same autostereoscopic display system. Furthermore, a controlled region of the liquid crystal gratings can present the 3D mode while other regions maintain the 2D mode in the same autostereoscopic display. This work demonstrates that controllable liquid crystal gratings have potential applications in the field of autostereoscopic display.

  9. Lamina 3D display: projection-type depth-fused display using polarization-encoded depth information.

    PubMed

    Park, Soon-gi; Yoon, Sangcheol; Yeom, Jiwoon; Baek, Hogil; Min, Sung-Wook; Lee, Byoungho

    2014-10-20

In order to realize three-dimensional (3D) displays, various multiplexing methods have been proposed to add the depth dimension to two-dimensional scenes. However, most of these methods face challenges such as degradation of viewing quality, the requirement of complicated equipment, and large amounts of data. In this paper, we further develop our previous concept, the polarization-distributed depth map, to propose the Lamina 3D display as a method for encoding and reconstructing depth information using polarization status. By adopting projection optics in the depth-encoding system, reconstructed 3D images can be scaled like the images of 2D projection displays. The 3D reconstruction characteristics of the polarization-encoded images are analyzed in simulation and experiment. An experimental system is also demonstrated to show the feasibility of the proposed method.

  10. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
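Diverging-wave transmits of this kind are obtained by delaying each element of the matrix array according to its distance from the virtual source behind the probe. A simplified sketch of that delay law (geometry and values assumed for illustration, not taken from the paper):

```python
import math

def diverging_wave_delays(element_positions, virtual_source, c=1540.0):
    """Per-element transmit delays (seconds) for a diverging wave.

    element_positions: list of (x, y, z) element coordinates in metres,
    with the array at z = 0. virtual_source: (x, y, z) with z < 0,
    i.e. behind the probe. c: assumed speed of sound in m/s.
    Elements farther from the virtual source fire later, so the
    emitted wavefront appears to emanate from the source point.
    """
    dists = [math.dist(p, virtual_source) for p in element_positions]
    d_min = min(dists)
    return [(d - d_min) / c for d in dists]
```

Plane-wave transmits are the limiting case of this law as the virtual source recedes to infinity, where the delays become linear across the aperture.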

  11. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  12. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
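The red/green anaglyph described above takes only a few lines to reproduce: the left view drives the red channel and the right view the green channel, so red/green glasses route each view to one eye. A minimal sketch:

```python
import numpy as np

def red_green_anaglyph(left_gray, right_gray):
    """Combine two grayscale views (H, W) into an (H, W, 3) RGB anaglyph.

    Left-eye image -> red channel, right-eye image -> green channel,
    blue channel left empty. Viewed through red/green glasses, each
    eye sees only its own view, producing the stereo depth cue.
    """
    h, w = left_gray.shape
    rgb = np.zeros((h, w, 3), dtype=left_gray.dtype)
    rgb[..., 0] = left_gray  # left view in red
    rgb[..., 1] = right_gray  # right view in green
    return rgb
```

For a scattergram, the two views would be the same point cloud projected from two horizontally offset camera positions before being rasterized into `left_gray` and `right_gray`.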

  13. 3-D display and transmission technologies for telemedicine applications: a review.

    PubMed

    Liu, Qiang; Sclabassi, Robert J; Favalora, Gregg E; Sun, Mingui

    2008-03-01

    Three-dimensional (3-D) visualization technologies have been widely commercialized. These technologies have great potential in a number of telemedicine applications, such as teleconsultation, telesurgery, and remote patient monitoring. This work presents an overview of the state-of-the-art 3-D display devices and related 3-D image/video transmission technologies with the goal of enhancing their utilization in medical applications.

  14. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-06

Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from 360 degrees around with correct occlusion effects. A high-frame-rate color projector and a flat light-field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate display performance. A prototype was built, and real 3D color animated images were presented vividly. The experimental results verified the viability of this method.

  15. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

We examined pilots' ability to estimate traffic location with an Integrated Hazard Display, and how such estimations should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside-world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split-screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  16. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  17. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  18. Fast-response liquid-crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Liu, Yifan; Ren, Hongwen; Xu, Su; Li, Yan; Wu, Shin-Tson

    2014-02-01

Three-dimensional (3D) display has become an increasingly important technology trend for information display applications. Dozens of different 3D display solutions have been proposed. Autostereoscopic 3D display based on a lenticular microlens array is a promising approach, and a fast-switching microlens array enables such a system to display both 3D and conventional 2D images. Here we report two different fast-response microlens array designs. The first is a blue-phase liquid crystal (BPLC) lens driven by PEDOT:PSS resistive film electrodes. This BPLC lens exhibits several attractive features, such as polarization insensitivity, fast response time, a simple driving scheme, and relatively low driving voltage compared to other BPLC lens designs. The second lens design has a double-layered structure: the first layer is a polarization-dependent polymer microlens array, and the second is a thin twisted-nematic (TN) liquid crystal cell. When the TN cell is switched on or off, light traversing the polymeric lens array is either focused or defocused, so that 2D or 3D images are displayed correspondingly. This lens design has low driving voltage, fast response time, and a simple driving scheme. Simulation and experiment demonstrate that the performance of both switchable lenses meets the requirements of 3D display system design.

  19. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, standards should be based not only on the mechanisms of 3D displays but also on the human factors of stereopsis. However, there is no common understanding of what a 3D display should be, and this situation makes developing standards difficult. In this paper, to understand the mechanism and human factors, we focus on the double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, we consider that whether it is unwanted depends on the situation, and that some double images are allowable. We tried to classify double images into unwanted and allowable in terms of the display mechanism and the visual ergonomics of stereopsis. The issues associated with the double image are closely related to performance characteristics of the autostereoscopic display. We also propose performance characteristics and measurement and analysis methods to represent interocular crosstalk and motion parallax.

  20. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  1. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic depictions of the proposed designs. This paper will present the basic display tools and applications, the 3D modeling techniques PB is using to produce interactive stereoscopic content, and several architectural and engineering design visualizations we have produced.

  2. Irregular Grid Generation and Rapid 3D Color Display Algorithm

    SciTech Connect

    Wilson D. Chin, Ph.D.

    2000-05-10

Computationally efficient and fast methods for irregular grid generation are developed to accurately characterize wellbore and fracture boundaries, and farfield reservoir boundaries, in oil and gas petroleum fields. Advanced reservoir simulation techniques are developed for oilfields described by such "boundary conforming" mesh systems. Very rapid, three-dimensional color display algorithms are also developed that allow users to "interrogate" 3D earth cubes using "slice, rotate, and zoom" functions. Based on expert system ideas, the new methods operate much faster than existing display methodologies and do not require sophisticated computer hardware or software. They are designed to operate with PC-based applications.

  3. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

This paper describes the investigation of a new 3D capture method for acquiring, and subsequently analyzing forensically, bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate the potential of a 3D system to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimizes the amount of angular distortion, so such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least intra-operator error. A second set tested and demonstrated which method of image capture creates the least inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  4. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics, and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  5. Instrument for 3D characterization of autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Prévoteau, J.; Chalençon-Piotin, S.; Debons, D.; Lucas, L.; Remion, Y.

    2011-03-01

Numerous autostereoscopic displays are now available, and characterizing them is essential: characterization allows their performance to be optimized and enables meaningful comparisons between them. Standards are therefore needed, and for standards we must be able to quantify the quality of the viewer's perception. The purpose of the present paper is twofold: we first present a new instrument for characterizing 3D perception on a given autostereoscopic display; we then propose a new experimental protocol for obtaining a full characterization. This instrument will allow efficient comparison of different autostereoscopic displays, and it will also validate in practice the adequacy between the shooting and rendering geometries. To this end, we match a perceived scene with the virtual scene. It is hardly possible to determine directly the scene perceived by a viewer placed in front of an autostereoscopic display: while this may be feasible for pop-out effects, it is impossible for depth effects, where the virtual scene lies behind the screen. We therefore use an optical illusion based on the deflection of light by a mirror to determine the positions at which the viewer perceives points of the virtual scene on an autostereoscopic display.

  6. Volumetric 3D display with multi-layered active screens for enhanced depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various perceptual factors such as accommodation, binocular parallax, convergence, and motion parallax are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this approach causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Holographic and volumetric displays are therefore expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays can represent images using voxels, which occupy physical volume; however, a large amount of data is required to represent the depth information on voxels. To encode 3D information simply, a compact type of depth-fused 3D (DFD) display is introduced, which creates a polarization-distributed depth map (PDDM) image carrying both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. To introduce the PDDM image, the polarization state of light passing through a spatial light modulator (SLM) was analyzed via the Stokes parameters as a function of gray level. Based on this analysis, a polarization controller was designed to convert PDDM images into sectioned depth images. After synchronizing the PDDM images with the active screens, a reconstructed 3D image can be realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.
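The Stokes-parameter analysis of the SLM output mentioned above can be carried out from six polarimetric intensity measurements; a minimal sketch of the textbook construction (the measurement names are our assumption, not the authors' setup):

```python
def stokes_from_intensities(i0, i90, i45, i135, ircp, ilcp):
    """Stokes vector (S0..S3) from intensities measured through linear
    polarizers at 0/90/45/135 degrees and right/left circular analyzers."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # horizontal vs vertical linear component
    s2 = i45 - i135      # +45 vs -45 degree linear component
    s3 = ircp - ilcp     # right vs left circular component
    return s0, s1, s2, s3

# Fully horizontally polarized light leaving the SLM:
print(stokes_from_intensities(1.0, 0.0, 0.5, 0.5, 0.5, 0.5))
# (1.0, 1.0, 0.0, 0.0)
```

Repeating this measurement at each gray level yields the gray-level-dependent polarization curve that the paper's polarization controller is designed against.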

  7. Progresses in 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Navarro, Héctor; Pons, Amparo; Javidi, Bahram

    2008-11-01

Integral imaging is a promising technique for the acquisition and autostereoscopic display of 3D scenes with full parallax and without the need for additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects of integral imaging systems and their interaction with the observer.

  8. Crosstalk reduction in auto-stereoscopic projection 3D display system.

    PubMed

    Lee, Kwang-Hoon; Park, Youngsik; Lee, Hyoung; Yoon, Seon Kyu; Kim, Sung-Kyu

    2012-08-27

In autostereoscopic multi-view 3D display systems, crosstalk and low resolution make it difficult to obtain a clear depth image with sufficient motion parallax. To solve these problems, we propose a projection-type autostereoscopic multi-view 3D display system that combines a hybrid optical system, consisting of a lenticular lens and a parallax barrier, with multiple projectors. The core concept of this proposal is condensing the width of the projected unit-pixel image within the lenslet by means of the hybrid optics. As a result, point crosstalk is improved by 53% and resolution is increased up to five times.

  9. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  10. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to display to the pilot a comprehensive image of the surrounding world without misleading or cluttering information. 3D data that can be attributed, i.e. classified, to terrain or predefined obstacle classes is depicted differently from data belonging to elevated objects that could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures, or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, allowing, on the one hand, a cohesive structure to be displayed and, on the other, moving objects to be displayed correctly. Color coding or texturing can also be applied based on known terrain features such as land use.

  11. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  12. Optimal projector configuration design for 300-Mpixel multi-projection 3D display.

    PubMed

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Choi, Seo Young; Park, Du-Sik; Kim, Chang Yeong

    2013-11-04

To achieve an immersive, natural 3D experience on a large screen, a 300-Mpixel multi-projection 3D display with a 100-inch screen and a 40° viewing angle has been developed. To increase the number of rays emanating from each pixel to 300 in the horizontal direction, three hundred projectors were used. Because the projector configuration is an important issue in generating a high-quality 3D image, the luminance characteristics were analyzed and the design was optimized to minimize the variation in the brightness of the projected images. The rows of the projector arrays were changed repeatedly according to a predetermined row interval, and the projectors were arranged at an equi-angular pitch toward a constant central point. As a result, we acquired very smooth motion-parallax images without discontinuity. There is no limit on viewing distance, so natural 3D images can be viewed from 2 m to over 20 m.

  13. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

Several studies have reported gaze-tracking techniques using monocular or stereo cameras. The most widely used gaze estimation techniques are based on PCCR (Pupil Center and Corneal Reflection). These techniques track gaze on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in a 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in a 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glasses-free stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach to estimating gaze direction and gaze depth, and show experimental results.

  14. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    PubMed

    Liao, Hongen; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without using special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate IP/IV elemental images. The images can be viewed from each viewpoint within a referential viewing area, and the elemental images are reconstructed from rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen that is placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images with an image depth of several meters in front of and behind the display that appear three-dimensional even when viewed from a distance.

  15. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures.
These overlays demonstrated improved utility and situational awareness for

  16. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
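The 3D wavelet decomposition at the heart of ICER-3D can be illustrated with a single analysis level of the simplest wavelet, the Haar transform, applied along all three axes of a hyperspectral cube (a sketch only; ICER-3D itself uses different filters, multiple decomposition levels, and the context-modeling and entropy-coding stages described above):

```python
import numpy as np

def haar_1d(a, axis):
    """One Haar analysis step along one axis (axis length must be even):
    pairwise averages (low-pass) followed by pairwise differences (high-pass)."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_3d(cube):
    """One 3D decomposition level: filter along the spectral axis and
    both spatial axes, exploiting correlation in all three dimensions."""
    for axis in range(3):
        cube = haar_1d(cube, axis)
    return cube

cube = np.arange(8.0).reshape(2, 2, 2)   # toy 2x2x2 "hyperspectral" cube
coeffs = haar_3d(cube)
print(coeffs[0, 0, 0])   # DC coefficient = mean of the cube: 3.5
```

In a progressive coder, the low-pass corner of `coeffs` is transmitted first, so a truncated stream still reconstructs a coarse version of the cube, consistent with the fidelity-versus-volume behavior described in the abstract.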

  17. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images by computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  18. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and period of each nano-grating pixel. However, such 3D display screens have been restricted to a limited size because of the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other, 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence along the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite-Difference Time-Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for the 9-view 3D images with horizontal parallax. In the other prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for the 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
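The period of each nano-grating pixel determines which direction its view zone is steered toward, via the first-order grating equation; a minimal sketch of that design relation (our illustration under the assumptions of normal incidence and first diffraction order; the function name is ours):

```python
import math

def grating_period_nm(wavelength_nm: float, deflection_deg: float) -> float:
    """First-order grating period for a desired deflection angle at
    normal incidence: sin(theta) = lambda / period."""
    return wavelength_nm / math.sin(math.radians(deflection_deg))

# Steering 532 nm light by 20 degrees requires a period of about 1555 nm,
# i.e. a pitch on the micron scale that E-beam or laser lithography must write:
print(round(grating_period_nm(532.0, 20.0)))
```

The in-plane orientation of the grating lines (the other free parameter of each pixel) sets the azimuth of the deflection, which together with the period yields the arbitrarily distributed view zones described above.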

  19. Future of photorefractive based holographic 3D display

    NASA Astrophysics Data System (ADS)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008 [1]. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full-color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video rate.

  20. Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2016-10-20

A method for continuous optical rotation compensation in a time-division-based holographic three-dimensional (3D) display with a rotating mirror is presented. Since the coordinate system of the wavefronts after mirror reflection rotates about the optical axis along with the rotation angle, compensation or cancellation is absolutely necessary to fix the reconstructed 3D object. In this study, we address this problem by introducing an optical image rotator based on a right-angle prism that rotates synchronously with the rotating mirror. The optical and continuous compensation reduces the occurrence of duplicate images, which improves the quality of the reconstructed images. The effect of the optical rotation compensation is verified experimentally, and a demonstration of a holographic 3D display with optical rotation compensation is presented.

  1. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing a computer-generated hologram (CGH) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is extracted efficiently in double-circle and four-circle shapes to enhance the utilization of the projection spectra. The spectral information from all projection images is then encoded into a computer-generated hologram by a Fourier transform using conjugate-symmetric extension; the hologram thus includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with reference light from a laser source, the amplitude and phase information included in the CGH is reconstructed through the diffraction of the light modulated by the LCD.
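    The conjugate-symmetric extension mentioned above exploits a basic DFT property: a spectrum with Hermitian symmetry inverse-transforms to a purely real hologram. A minimal 1D sketch of that encoding step (the paper works in 2D; the array sizes and the handling of the DC term here are illustrative assumptions):

    ```python
    import numpy as np

    def encode_real_hologram(spectrum):
        """Place a complex 1D spectrum into a conjugate-symmetric DFT array
        so that its inverse FFT -- the hologram -- is purely real.
        (Illustrative 1D analogue of the paper's 2D encoding.)"""
        n = len(spectrum)
        N = 2 * n + 1
        ext = np.zeros(N, dtype=complex)
        ext[1:n + 1] = spectrum                  # positive frequencies
        ext[N - n:] = np.conj(spectrum[::-1])    # mirrored conjugates
        hologram = np.fft.ifft(ext)
        return hologram.real                     # imaginary part is ~0

    def decode(hologram, n):
        """Recover the spectrum from the real hologram by a forward FFT."""
        return np.fft.fft(hologram)[1:n + 1]

    rng = np.random.default_rng(0)
    spec = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    holo = encode_real_hologram(spec)
    assert np.allclose(decode(holo, 8), spec)
    ```

    The real-valued hologram is what makes display on an amplitude- or phase-addressed LCD panel practical while still carrying complex spectral data.
    
    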

  2. Active segmentation of 3D axonal images.

    PubMed

    Muralidhar, Gautam S; Gopinath, Ajay; Bovik, Alan C; Ben-Yakar, Adela

    2012-01-01

    We present an active contour framework for segmenting neuronal axons in 3D confocal microscopy data. Our work is motivated by the need to conduct high-throughput experiments involving microfluidic devices and femtosecond lasers to study the genetic mechanisms behind nerve regeneration and repair. While most applications of active contours have focused on segmenting closed regions in 2D medical and natural images, few have focused on segmenting open-ended curvilinear structures in 2D or higher dimensions. The active contour framework we present here ties a well-known 2D active contour model [5] together with the physics of projection imaging geometry to yield a segmented axon in 3D. Qualitative results illustrate the promise of our approach for segmenting neuronal axons in 3D confocal microscopy data.

  3. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.

  4. 3D brain MR angiography displayed by a multi-autostereoscopic screen

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Ribeiro, Fádua H.; Lima, Fabrício O.; Serra, Rolando L.; Moreno, Alfredo B.; Li, Li M.

    2012-02-01

    Magnetic resonance angiography (MRA) can be used to examine blood vessels in key areas of the body, including the brain. In MRA, a powerful magnetic field, radio waves and a computer produce the detailed images. Physicians use the procedure on brain images mainly to detect atherosclerotic disease in the carotid artery of the neck, which may limit blood flow to the brain and cause a stroke, and to identify small aneurysms or arteriovenous malformations inside the brain. Multi-autostereoscopic displays provide multiple views of the same scene, rather than just two as in autostereoscopic systems. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left and right in front of the display and see the correct view from any position. The use of 3D imaging in the medical field has proven to be a benefit to doctors when diagnosing patients. For different medical domains, a stereoscopic display could be advantageous in terms of better spatial understanding of anatomical structures, better perception of ambiguous anatomical structures, better performance of tasks that require a high level of dexterity, increased learning performance, and improved communication with patients or between doctors. In this work we describe a multi-autostereoscopic system and how to produce 3D MRA images to be displayed with it. We show results for brain MR angiography images and discuss how 3D visualization can help physicians reach a better diagnosis.

  5. Walker Ranch 3D seismic images

    SciTech Connect

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  6. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization, and angle sampling over 2π steradians (the upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process, and the resulting image.

  7. Tunable nonuniform sampling method for fast calculation and intensity modulation in 3D dynamic holographic display.

    PubMed

    Zhang, Zhao; Liu, Juan; Jia, Jia; Li, Xin; Han, Jian; Hu, Bin; Wang, Yongtian

    2013-08-01

    The heavy computational load of computer-generated holograms (CGHs) and the imprecise intensity modulation of 3D images are crucial problems in dynamic holographic display. A nonuniform sampling method is proposed to speed up CGH generation and precisely modulate the reconstructed intensities of phase-only CGHs. The proposed method can properly eliminate redundant information; a 70% reduction in storage can be reached when it is combined with the novel look-up table method. Multi-grayscale modulation of reconstructed 3D images is achieved successfully. Numerical simulations and optical experiments are performed, and both are in good agreement. It is believed that the proposed method can be used in 3D dynamic holographic display.

  8. Principle and characteristics of 3D display based on random source constructive interference.

    PubMed

    Li, Zhiyang

    2014-07-14

    The paper discusses the principle and characteristics of 3D display based on random source constructive interference (RSCI). The voxels of discrete 3D images are formed in the air via constructive interference of spherical light waves emitted by point light sources (PLSs) that are arranged at random positions to suppress high-order diffraction. The PLSs might be created by two liquid crystal panels sandwiched between two micro-lens arrays. The point spread function of the system reveals that it is able to reconstruct voxels with diffraction-limited resolution over a large field width and depth. The high resolution was confirmed by experiments. Theoretical analysis also shows that the system could provide 3D image contrast and gray levels no less than those of the liquid crystal panels. Compared with 2D display, it needs only additional depth information, which increases the data volume by only about 30%.
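    The constructive-interference principle can be illustrated numerically: if each point light source is assigned a phase that cancels its path delay to a chosen voxel, the spherical waves add in phase there and average out elsewhere. A small sketch under assumed dimensions (the aperture, wavelength and source count are illustrative, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    wavelength = 0.5e-6                            # 500 nm, assumed
    k = 2 * np.pi / wavelength
    sources = rng.uniform(-5e-3, 5e-3, (200, 2))   # 200 random PLSs, 1 cm aperture
    z = 0.1                                        # source plane to voxel plane, 10 cm

    def field_at(point, phases):
        """Sum spherical waves exp(i(kr + phase))/r from every source at `point`."""
        r = np.sqrt((point[0] - sources[:, 0])**2
                    + (point[1] - sources[:, 1])**2 + z**2)
        return np.sum(np.exp(1j * (k * r + phases)) / r)

    # choose each source phase to cancel its path delay -> constructive at target
    target = np.array([0.0, 0.0])
    r_t = np.sqrt(sources[:, 0]**2 + sources[:, 1]**2 + z**2)
    phases = -k * r_t

    on = abs(field_at(target, phases))**2              # intensity at the voxel
    off = abs(field_at(np.array([1e-3, 0.0]), phases))**2   # 1 mm away
    assert on > 10 * off   # the voxel is far brighter than its surroundings
    ```

    Randomizing the source positions is what breaks the periodicity that would otherwise produce bright high-order diffraction copies of the voxel.
    
    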

  9. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D whole-body scanners, which output digitized 3D images of a complete human body, are now commercially available. This paper describes a software package called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool that allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of it. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing, and can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying it in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted-plane feature with other tools in 3DM.

  10. Multiplexing encoding method for full-color dynamic 3D holographic display.

    PubMed

    Xue, Gaolei; Liu, Juan; Li, Xin; Jia, Jia; Zhang, Zhao; Hu, Bin; Wang, Yongtian

    2014-07-28

    A multiplexing encoding method is proposed and demonstrated for accurately reconstructing colorful images using a single phase-only spatial light modulator (SLM). It encodes light waves of different wavelengths into one pure-phase hologram simultaneously, based on analytic formulas. Three-dimensional (3D) images are reconstructed clearly when light waves of the different wavelengths illuminate the encoded hologram. Numerical simulations and optical experiments for 2D and 3D colorful images are performed. The results show that colorful reconstructed images of high quality are achieved successfully. The proposed multiplexing method is a simple and fast encoding approach, and the system is small and compact. It is expected to be used for realizing full-color 3D holographic display in the future.

  11. Compact multi-projection 3D display system with light-guide projection.

    PubMed

    Lee, Chang-Kun; Park, Soon-gi; Moon, Seokil; Hong, Jong-Young; Lee, Byoungho

    2015-11-02

    We propose a compact multi-projection based multi-view 3D display system using an optical light-guide, and perform an analysis of the characteristics of the image for distortion compensation via an optically equivalent model of the light-guide. The projected image traveling through the light-guide experiences multiple total internal reflections at the interface. As a result, the projection distance in the horizontal direction is effectively reduced to the thickness of the light-guide, and the projection part of the multi-projection based multi-view 3D display system is minimized. In addition, we deduce an equivalent model of such a light-guide to simplify the analysis of the image distortion in the light-guide. From the equivalent model, the focus of the image is adjusted, and pre-distorted images for each projection unit are calculated by two-step image rectification in air and the material. The distortion-compensated view images are represented on the exit surface of the light-guide when the light-guide is located in the intended position. Viewing zones are generated by combining the light-guide projection system, a vertical diffuser, and a Fresnel lens. The feasibility of the proposed method is experimentally verified and a ten-view 3D display system with a minimized structure is implemented.

  12. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel autostereoscopic three-dimensional (3D) projection display system based on a frontal-projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. Densely spaced viewing points provide smooth motion parallax and large image depth without blur. The basic principle of stereoscopic display is described first; then the design architecture, including hardware and software, is presented. The system consists of a frontal-projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal-projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one is used as the main controller, while the other five client PCs process 30 channels of signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal-projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, and distortion correction. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.

  13. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with external electronic devices. The steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems because of its easy detection and high information transfer rate. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the general public, making our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good presentation quality, varied stimuli, and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show a significant difference (p < 0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. Based on the results of 3D perception and SSVEP responses (SNR), 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications. Furthermore, we can infer the 3D perception of users from their SSVEP responses and, in the future, automatically adjust the disparity of 3D images accordingly.
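    The SNR used to compare disparity conditions is commonly computed as the power in the stimulus-frequency bin of the EEG spectrum relative to its neighbouring bins. A sketch with synthetic data (the sampling rate, stimulus frequency and neighbour count are illustrative, not the study's settings):

    ```python
    import numpy as np

    fs, secs, f_stim = 250, 4, 15          # sampling rate (Hz), duration (s), stimulus (Hz)
    t = np.arange(fs * secs) / fs
    rng = np.random.default_rng(2)
    # synthetic EEG: a 15 Hz SSVEP component buried in white noise
    eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.standard_normal(t.size)

    def ssvep_snr(signal, fs, f_stim, n_side=5):
        """SNR = power in the stimulus-frequency bin over the mean power of
        `n_side` neighbouring bins on each side (a common SSVEP definition)."""
        psd = np.abs(np.fft.rfft(signal))**2
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        i = np.argmin(np.abs(freqs - f_stim))
        neighbours = np.r_[psd[i - n_side:i], psd[i + 1:i + 1 + n_side]]
        return psd[i] / neighbours.mean()

    assert ssvep_snr(eeg, fs, f_stim) > 10   # a strong SSVEP stands out clearly
    ```

    Comparing this ratio across disparity or crosstalk conditions is how the per-condition SNR differences reported above would be quantified.
    
    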

  14. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualize in color and quantify temperature changes in the skin surface. The spectrum of colors indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  15. Feasibility of 3D harmonic contrast imaging.

    PubMed

    Voormolen, M M; Bouakaz, A; Krenning, B J; Lancée, C T; ten Cate, F J; de Jong, N

    2004-04-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suitable for contrast imaging. In this study the feasibility of 3D harmonic contrast imaging is evaluated in vitro. A commercially available tissue mimicking flow phantom was used in combination with Sonovue. Backscatter power spectra from a tissue and contrast region of interest were calculated from recorded radio frequency data. The spectra and the extracted contrast to tissue ratio from these spectra were used to optimize the excitation frequency, the pulse length and the receive filter settings of the transducer. Frequencies ranging from 1.66 to 2.35 MHz and pulse lengths of 1.5, 2 and 2.5 cycles were explored. An increase of more than 15 dB in the contrast to tissue ratio was found around the second harmonic compared with the fundamental level at an optimal excitation frequency of 1.74 MHz and a pulse length of 2.5 cycles. Using the optimal settings for 3D harmonic contrast recordings volume measurements of a left ventricular shaped agar phantom were performed. Without contrast the extracted volume data resulted in a volume error of 1.5%, with contrast an accuracy of 3.8% was achieved. The results show the feasibility of accurate volume measurements from 3D harmonic contrast images. Further investigations will include the clinical evaluation of the presented technique for improved assessment of the heart.
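    The contrast-to-tissue ratio extracted from the backscatter power spectra above can be sketched as a band-limited power ratio between the contrast and tissue regions of interest. The spectra below are synthetic stand-ins shaped around the reported 1.74 MHz fundamental and its second harmonic; the Gaussian widths and the 30x contrast enhancement are assumptions for illustration:

    ```python
    import numpy as np

    def contrast_to_tissue_ratio_db(p_contrast, p_tissue, freqs, band):
        """CTR in dB: mean backscattered power from the contrast ROI over
        that from the tissue ROI, inside a frequency band (e.g. around the
        second harmonic). Inputs are power spectra on a shared axis."""
        sel = (freqs >= band[0]) & (freqs <= band[1])
        return 10 * np.log10(p_contrast[sel].mean() / p_tissue[sel].mean())

    # synthetic spectra: contrast scatters strongly near 2 x 1.74 = 3.48 MHz
    freqs = np.linspace(1e6, 5e6, 400)
    tissue = np.exp(-((freqs - 1.74e6) / 0.5e6)**2)           # mostly fundamental
    contrast = tissue + 30 * np.exp(-((freqs - 3.48e6) / 0.4e6)**2)

    ctr = contrast_to_tissue_ratio_db(contrast, tissue, freqs, (3.2e6, 3.8e6))
    assert ctr > 15   # consistent with the >15 dB improvement reported
    ```

    Sweeping the excitation frequency and pulse length while maximizing this ratio is, in essence, the optimization procedure the study describes.
    
    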

  16. Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Wilhelm, Dirk; Reiser, Silvano; Kohn, Nils; Witte, Michael; Leiner, Ulrich; Mühlbach, Lothar; Ruschin, Detlef; Reiner, Wolfgang; Feussner, Hubertus

    2014-03-01

    Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations such as eye strain, diplopia, and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study on 48 test subjects was conducted to investigate whether surgeons can benefit from the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display, and a mirror-based, theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert group with high experience in laparoscopic procedures. Each had to perform a well-comparable laparoscopic suturing task. Multiple performance parameters, such as task completion time and stitching precision, were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity, and economy. The NASA task load index was used to assess mental workload. Subjective ratings were added to assess the usability, comfort, and image quality of each display. Almost all performance parameters were superior for the glasses-based 3D display compared to the 2D and autostereoscopic displays, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence of additional potential for 3D visualization systems with higher resolution and motion-parallax presentation.

  17. Stereoscopic 3D display with color interlacing improves perceived depth.

    PubMed

    Kim, Joohwan; Johnson, Paul V; Banks, Martin S

    2014-12-29

    Temporal interlacing is a method for presenting stereoscopic 3D content whereby the two eyes' views are presented at different times and optical filtering selectively delivers the appropriate view to each eye. This approach is prone to distortions in perceived depth because the visual system can interpret the temporal delay between binocular views as spatial disparity. We propose a novel color-interlacing display protocol that reverses the order of binocular presentation for the green primary but maintains the order for the red and blue primaries: During the first sub-frame, the left eye sees the green component of the left-eye view and the right eye sees the red and blue components of the right-eye view, and vice versa during the second sub-frame. The proposed method distributes the luminance of each eye's view more evenly over time. Because disparity estimation is based primarily on luminance information, a more even distribution of luminance over time should reduce depth distortion. We conducted a psychophysical experiment to test these expectations and indeed found that less depth distortion occurs with color interlacing than temporal interlacing.
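    The sub-frame arrangement of the proposed protocol can be expressed directly as channel routing. A minimal NumPy sketch (the image shapes and RGB channel order are assumptions):

    ```python
    import numpy as np

    def color_interlaced_subframes(left, right):
        """Split RGB left/right views into two sub-frames: sub-frame 1 shows
        the left eye's green with the right eye's red and blue; sub-frame 2
        swaps the roles, as in the protocol described above."""
        sub1_left = np.zeros_like(left);   sub1_left[..., 1] = left[..., 1]
        sub1_right = right.copy();         sub1_right[..., 1] = 0
        sub2_left = left.copy();           sub2_left[..., 1] = 0
        sub2_right = np.zeros_like(right); sub2_right[..., 1] = right[..., 1]
        return (sub1_left, sub1_right), (sub2_left, sub2_right)

    rng = np.random.default_rng(3)
    L, R = rng.random((4, 4, 3)), rng.random((4, 4, 3))
    (s1L, s1R), (s2L, s2R) = color_interlaced_subframes(L, R)

    # over the two sub-frames each eye still receives its full-color view
    assert np.allclose(s1L + s2L, L) and np.allclose(s1R + s2R, R)
    # the green (dominant-luminance) channel is split across sub-frames
    assert s1L[..., 1].sum() > 0 and s2R[..., 1].sum() > 0
    ```

    Because green carries most of the luminance, splitting it across the two sub-frames is what evens out each eye's luminance over time and reduces the depth distortion.
    
    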

  18. 3D imaging system for biometric applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

    2010-04-01

    There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information about both people and objects, for applications ranging from identification to game inputs, does not require high degrees of calibration or resolution in the tens-of-microns range, but does require a means to collect data quickly and robustly in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurement, but because they rely on a triangulation angle, and thus a separated second viewpoint, they may not be practical for viewing a subject 10 meters away. Even when working close to a subject, such as when capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper describes methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper describes the optical methods considered, variations on these methods, and experimental data obtained with the approach.

  19. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle over 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors, and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents one that passes through a corresponding point on a virtual object's surface and travels toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all viewers around the table perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  20. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    NASA Astrophysics Data System (ADS)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

    This paper describes the design and construction of a low-cost, practical autostereoscopic display that does not require special glasses and uses eye tracking to give a large degree of freedom to viewer movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an autostereoscopic display. The stereo image pair is shown on an ordinary liquid crystal display simultaneously, but in different columns of pixels. Controlling the display at the red-green-blue subpixel level increases the accuracy of the light-projection direction to less than 2 degrees without losing too much of the LCD's resolution; an eye-tracking system determines the correct angle to project the images along the viewer's eye pupils, and an image-processing system places the 3D image data in the correct R-G-B subpixels. A light-direction control of 1.6 degrees was achieved in practice. The 3D monitor is made simply by applying some simple optical materials to an ordinary LCD with normal resolution.
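    The quoted sub-2-degree steering accuracy follows from simple parallax-barrier geometry: shifting the displayed content by one sub-pixel behind the barrier deflects the view direction by roughly the sub-pixel pitch over the barrier gap. A sketch with assumed dimensions (the prototype's actual pitch and gap are not given in the abstract):

    ```python
    import math

    def steering_step_deg(subpixel_pitch_mm, barrier_gap_mm):
        """Angular step obtained by shifting the image one RGB sub-pixel
        behind a parallax barrier at distance `barrier_gap_mm`
        (small-angle geometry; parameter values are illustrative)."""
        return math.degrees(math.atan(subpixel_pitch_mm / barrier_gap_mm))

    # a ~0.09 mm sub-pixel (0.27 mm pixel pitch / 3) behind a ~3 mm barrier gap
    step = steering_step_deg(0.09, 3.0)
    assert step < 2.0   # consistent with the <2 degree accuracy quoted
    ```

    Addressing sub-pixels rather than whole pixels triples the number of steering steps, which is why the sub-pixel scheme reaches finer angular control without a finer panel.
    
    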

  1. Focus-tunable multi-view holographic 3D display using a 4k LCD panel

    NASA Astrophysics Data System (ADS)

    Lin, Qiaojuan; Sang, Xinzhu; Chen, Zhidong; Yan, Binbin; Yu, Chongxiu; Wang, Peng; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    A focus-tunable multi-view holographic three-dimensional (3D) display system with a 10.1-inch 4K liquid crystal display (LCD) panel is presented. In the proposed synthesizing method, the computer-generated hologram (CGH) does not require calculation of light diffraction. When multiple rays pass through one point of a 3D image and enter the pupil simultaneously, the eyes can focus on the point according to the depth cue. Benefiting from the holograms, dense multiple perspective viewpoints of the 3D object are recorded and combined into the CGH in a dense-super-view way, which makes two or more rays emitted from the same point of the reconstructed light field enter the pupil simultaneously. In general, a wavefront is converged to a viewpoint with the amplitude distribution of the multi-view images on the hologram plane and the phase distribution of a spherical wave converging to that viewpoint. The wavefronts are calculated for all the multi-view images and then summed to obtain the object wave on the hologram plane. Moreover, reference light (converging light) is adopted to converge the central diffraction wave from the LCD into a common area at a short viewing distance. Experimental results show that the proposed holographic display can regenerate 3D objects with focus cues: accommodation and retinal blur.

  2. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    A computer-generated hologram (CGH) should be obtained with high accuracy and high speed for 3D holographic display, and most research focuses on the speed. In this paper, a simple and effective computation method for CGHs is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage than the split look-up table and compressed look-up table methods, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data are required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.
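    Look-up-table CGH methods in general precompute the Fresnel zone pattern of a point for each depth layer and reuse it, shifted, for every object point; the phase of the summed field gives the phase-only hologram. A minimal sketch of that generic idea (not the AC-LUT compression itself; the grid size, pixel pitch and depths are arbitrary assumptions):

    ```python
    import numpy as np

    # hologram grid (tiny, illustrative)
    N, pitch, wavelength = 64, 8e-6, 532e-9
    k = 2 * np.pi / wavelength
    x = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(x, x)

    def zone_table(z):
        """Look-up table entry: paraxial Fresnel zone pattern of a point at
        depth z, centred on the grid; shifting it reuses the same table
        for any lateral point position."""
        return np.exp(1j * k * (X**2 + Y**2) / (2 * z))

    def cgh_from_points(points, tables):
        """Sum pre-computed zone patterns shifted to each object point,
        then keep only the phase (phase-only CGH), as LUT methods do."""
        field = np.zeros((N, N), dtype=complex)
        for (ix, iy, z) in points:       # point position in pixel indices
            field += np.roll(tables[z], (iy, ix), axis=(0, 1))
        return np.angle(field)

    tables = {0.1: zone_table(0.1), 0.2: zone_table(0.2)}  # two depth layers
    phase = cgh_from_points([(5, -3, 0.1), (-8, 2, 0.2)], tables)
    assert phase.shape == (N, N) and np.all(np.abs(phase) <= np.pi)
    ```

    The accuracy/memory trade-off that AC-LUT addresses lies in how these per-depth tables are stored and compressed; the summation structure above is common to the whole family of methods.
    
    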

  3. Full-color autostereoscopic 3D display system using color-dispersion-compensated synthetic phase holograms.

    PubMed

    Choi, Kyongsik; Kim, Hwi; Lee, Byoungho

    2004-10-18

    A novel full-color autostereoscopic three-dimensional (3D) display system has been developed using color-dispersion-compensated (CDC) synthetic phase holograms (SPHs) on a phase-type spatial light modulator. To design the CDC phase holograms, we used a modified iterative Fourier transform algorithm with scaling constants and phase quantization level constraints. We obtained a high diffraction efficiency (~90.04%), a large signal-to-noise ratio (~9.57 dB), and a low reconstruction error (~0.0011) in our simulation results. Each optimized phase hologram was synthesized with a CDC directional hologram for the red, green, and blue wavelengths for full-color autostereoscopic 3D display. The CDC SPHs were composed and modulated by only one phase-type spatial light modulator. We have demonstrated experimentally that the designed CDC SPHs can generate full-color autostereoscopic 3D images and video frames very well, without any use of glasses.
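    The unmodified core of an iterative Fourier transform algorithm alternates between the hologram plane (unit amplitude, free phase) and the image plane (target amplitude, free phase). A basic Gerchberg-Saxton-style sketch; the paper's scaling constants and quantization constraints are deliberately omitted:

    ```python
    import numpy as np

    def ifta_phase_hologram(target_amp, iters=50, seed=0):
        """Basic iterative Fourier transform algorithm: constrain the
        hologram plane to unit amplitude (phase-only SLM) and the image
        plane to the target amplitude; iterate until the phase settles."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
        for _ in range(iters):
            img = np.fft.fft2(np.exp(1j * phase))             # propagate
            img = target_amp * np.exp(1j * np.angle(img))     # impose target
            phase = np.angle(np.fft.ifft2(img))               # phase-only
        return phase

    target = np.zeros((32, 32)); target[8:24, 8:24] = 1.0     # square spot
    holo = ifta_phase_hologram(target)
    recon = np.abs(np.fft.fft2(np.exp(1j * holo)))
    recon /= recon.max()
    # reconstructed energy concentrates inside the target region
    assert recon[8:24, 8:24].mean() > 2 * recon[target == 0].mean()
    ```

    Running one such optimization per wavelength, then combining the results into a single phase pattern, mirrors the overall structure of the CDC synthesis described above.
    
    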

  4. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The algorithm re-triangulates part of a triangle mesh and embeds the secret information into the positions of the newly added vertices. Up to nine bits of secret data can be embedded per triangle without causing any perceptible change in the visual quality or the geometric properties of the cover model. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation, and scaling. The performance of the method is also compared with other existing 3D steganography algorithms.
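
    As an illustration of the general idea only (not the authors' exact re-triangulation scheme, and with a lower capacity of 6 bits rather than 9), the sketch below hides bits in the quantized barycentric position of a vertex inserted into a triangle and recovers them by solving for those coordinates.

```python
import numpy as np

BITS = 3                       # bits per free barycentric coordinate
LEVELS = 2 ** BITS

def embed_bits(v0, v1, v2, bits6):
    """Insert a vertex inside triangle (v0, v1, v2) whose position
    encodes a 6-bit string. Illustrative only; the paper's scheme
    reaches up to 9 bits per triangle."""
    b1 = int(bits6[:3], 2)
    b2 = int(bits6[3:], 2)
    # quantize two barycentric weights into (0, 0.5) so the third stays positive
    w1 = (b1 + 0.5) / (2 * LEVELS)
    w2 = (b2 + 0.5) / (2 * LEVELS)
    w0 = 1.0 - w1 - w2
    return w0 * v0 + w1 * v1 + w2 * v2

def extract_bits(p, v0, v1, v2):
    """Recover the 6 bits by solving for the barycentric coordinates
    of the embedded vertex p."""
    T = np.column_stack((v1 - v0, v2 - v0))
    w1, w2 = np.linalg.lstsq(T, p - v0, rcond=None)[0]
    b1 = int(w1 * 2 * LEVELS)
    b2 = int(w2 * 2 * LEVELS)
    return format(b1, '03b') + format(b2, '03b')
```

    Because the weights are quantized to bin centers, extraction is exact as long as the mesh is not distorted; robustness to affine transformations would require ratio-based coordinates as in the paper.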

  5. Depth cues in human visual perception and their realization in 3D displays

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Häussler, Ralf; Fütterer, Gerald; Leister, Norbert

    2010-04-01

    Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been demonstrated and refined, among them stereoscopic, multi-view, integral-imaging, volumetric, and holographic types. Most current approaches utilize the conventional stereoscopic principle, but they all suffer from an inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only simulated by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eyes. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception relevant to both image quality and visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare the visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.

  6. Characteristics measurement methodology of the large-size autostereoscopic 3D LED display

    NASA Astrophysics Data System (ADS)

    An, Pengli; Su, Ping; Zhang, Changjie; Cao, Cong; Ma, Jianshe; Cao, Liangcai; Jin, Guofan

    2014-11-01

    Large-size autostereoscopic 3D LED displays are commonly used outdoors or in large indoor spaces, and have the properties of long viewing distance and relatively low light intensity at that distance. The instruments used to measure the characteristics of such displays (crosstalk, inconsistency, chromatic dispersion, etc.) should therefore have a long working distance and high sensitivity. In this paper, we propose a characteristics measurement methodology based on a distribution photometer with a working distance of 5.76 m and an illumination sensitivity of 0.001 mlx. A display panel holder is fabricated and attached to the turning stage of the distribution photometer. Specific test images are loaded on the display separately, and the luminance data at 5.76 m from the panel are measured. The data are then transformed into the light intensity at the optimum viewing distance. According to the definitions of the characteristics of 3D displays, the crosstalk, inconsistency, and chromatic dispersion can be calculated. Test results and an analysis of the characteristics of an autostereoscopic 3D LED display are presented.
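
    The conversion from a reading at the photometer's working distance to the value at the optimum viewing distance can be sketched with the point-source inverse-square law; the function and example numbers below are illustrative, not taken from the paper.

```python
def illuminance_at(E_measured, d_measured, d_target):
    """Convert an illuminance reading taken at the photometer distance
    into the value expected at another distance, treating the emitting
    area as a point source (inverse-square law).

    E_measured : illuminance at d_measured, lux
    Returns illuminance at d_target, lux.
    """
    # luminous intensity I = E * d^2 under the point-source approximation
    I = E_measured * d_measured ** 2
    return I / d_target ** 2

# e.g. 0.004 lx measured at 5.76 m, hypothetical viewing distance of 10 m
E_view = illuminance_at(0.004, 5.76, 10.0)
```

    The point-source approximation holds only when the distance is large compared with the panel, which is the regime these long-working-distance measurements operate in.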

  7. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpreting geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required to extract each type of geologic surface. I propose methods to automatically extract all fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images, and I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and the faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use them as constraints to more accurately estimate seismic normal vectors, which are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  8. Split image optical display

    DOEpatents

    Veligdan, James T.

    2005-05-31

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  9. Split image optical display

    DOEpatents

    Veligdan, James T.

    2007-05-29

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  10. XVD Image Display Program

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul

    2009-01-01

    The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.

  11. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, the saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and the repair cost. Also, if the internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within the logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from 3D cylindrical volume data were useful in understanding the extent of defects inside the log.
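
    A basic step in interpreting the radargrams above is converting a reflection's two-way travel time into depth; the helper below uses the standard textbook relation, with an assumed relative permittivity for wood (which in practice varies with moisture content).

```python
C = 3e8  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_ns, rel_permittivity):
    """Depth of a reflector from its two-way GPR travel time.

    The wave speed in the medium is c / sqrt(eps_r); the depth is half
    the one-way distance travelled in the two-way time. eps_r ~ 4 is an
    assumed, illustrative value for wood.
    """
    v = C / rel_permittivity ** 0.5        # wave speed in the medium, m/s
    return v * (two_way_time_ns * 1e-9) / 2.0

# a reflection at 2 ns two-way time in wood with assumed eps_r = 4
depth_m = gpr_depth(2.0, 4.0)
```

    With eps_r = 4 the wave travels at half the vacuum speed of light, so a 2 ns two-way time corresponds to a reflector 0.15 m deep.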

  12. Development of high-frame-rate LED panel and its applications for stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Tsutsumi, M.; Yamamoto, R.; Kajimoto, K.; Suyama, S.

    2011-03-01

    In this paper, we report the development of a high-frame-rate LED display. Full-color images are refreshed at 480 frames per second. In order to transmit such a high-frame-rate signal via conventional 120-Hz DVI, we have introduced a spatiotemporal mapping of the image signal. A processor of the LED image signal and the FPGAs in the LED modules have been reprogrammed so that four adjacent pixels in the input image are converted into four successive fields. The pitch of the LED panel is 20 mm. The developed 480-fps LED display is utilized for stereoscopic 3D display by use of a parallax barrier. The horizontal resolution of the viewed image is halved by the parallax barrier. This degradation is critical for LED displays because their pitch is tens of times larger than that of other flat-panel displays. We have conducted experiments to improve the quality of the image viewed through the parallax barrier. The improvement is based on interpolation by afterimages, and it is shown that the HFR LED provides detailed afterimages. Furthermore, the HFR LED has been utilized for unconscious imaging, which provides a sensation of discovering conscious visual information from unconscious images.
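
    The spatiotemporal mapping described above (four adjacent pixels of a 120-Hz input frame becoming four successive 480-fps fields) can be sketched as follows; the exact pixel-to-field assignment used in the hardware may differ from this illustrative one.

```python
import numpy as np

def unpack_fields(frame):
    """Split one 120-Hz input frame into four successive 480-fps fields.

    Each 2x2 block of the input carries the four time samples of one
    LED pixel. The assignment of block positions to field indices here
    is an assumption for illustration.

    frame: (2H, 2W) array  ->  (4, H, W) array of fields.
    """
    f0 = frame[0::2, 0::2]   # field 0: top-left pixel of each block
    f1 = frame[0::2, 1::2]   # field 1: top-right
    f2 = frame[1::2, 0::2]   # field 2: bottom-left
    f3 = frame[1::2, 1::2]   # field 3: bottom-right
    return np.stack([f0, f1, f2, f3])
```

    The inverse mapping (packing four fields into one frame) is what the signal processor would apply before the DVI link.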

  13. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on our prototype of a real-time 3D holographic display developed last year, we have developed a new concept for an auto-stereoscopic multi-view display: a 64-view, wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an acousto-optic deflector (AOD) driven by a piezoelectric transducer, which generates a variable standing acoustic wave in the crystal that acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different viewing angle. A holographic screen at the proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the right dimensions. VHDL firmware that renders 64 views (16-bit 4:2:2) of a CAD model (obj, dxf, or 3ds) and depth-map-encoded video images in real time (16 ms) was developed on the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  14. Field lens multiplexing in holographic 3D displays by using Bragg diffraction based volume gratings

    NASA Astrophysics Data System (ADS)

    Fütterer, G.

    2016-11-01

    Applications that can profit from holographic 3D displays include the visualization of 3D data, computer-integrated manufacturing, 3D teleconferencing, and mobile infotainment. However, one problem of holographic 3D displays, which are e.g. based on space-bandwidth-limited reconstruction of wave segments, is realizing a small form factor. Another is providing a reasonably large volume for user placement, i.e. an acceptable freedom of movement. Both problems should be solved without decreasing the image quality of the virtual and real object points generated within the 3D display volume. A diffractive optical design using thick hologram gratings, which can be referred to as Bragg diffraction based volume gratings, can provide a small form factor and a high-definition, natural viewing experience of 3D objects. A large collimated wave can be provided by an anamorphic backlight unit. The complex-valued spatial light modulator adds local curvatures to the wave field it is illuminated with. The modulated wave field is focused onto the user plane by a volume-grating-based field lens. Active liquid crystal gratings provide 1D fine tracking of approximately +/-8°. Diffractive multiplexing has to be implemented for each color and for a set of focus functions providing coarse tracking. Boundary conditions of the diffractive multiplexing are explained with regard to the display layout and by using coupled wave theory (CWT). Aspects of diffractive crosstalk and its suppression are discussed, including longitudinally apodized volume gratings.

  15. Polymeric-lens-embedded 2D/3D switchable display with dramatically reduced crosstalk.

    PubMed

    Zhu, Ruidong; Xu, Su; Hong, Qi; Wu, Shin-Tson; Lee, Chiayu; Yang, Chih-Ming; Lo, Chang-Cheng; Lien, Alan

    2014-03-01

    A two-dimensional/three-dimensional (2D/3D) display system is presented based on a twisted-nematic cell integrated polymeric microlens array. This device structure has the advantages of fast response time and low operation voltage. The crosstalk of the system is analyzed in detail and two approaches are proposed to reduce the crosstalk: a double lens system and the prism approach. Illuminance distribution analysis proves these two approaches can dramatically reduce crosstalk, thus improving image quality.

  16. Practical resolution requirements of measurement instruments for precise characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Collomb-Patton, Véronique; Bignon, Thibault

    2014-03-01

    Different ways to evaluate the optical performance of auto-stereoscopic 3D displays are reviewed. Special attention is paid to crosstalk measurements, which can be performed by measuring either the precise angular emission at one or a few locations on the display surface, or the full display surface emission from very specific locations in front of the display. Using measurements made in both ways with different instruments on different auto-stereoscopic displays, we show that measurement instruments need to match the resolution of the human eye to obtain reliable results in either case. Practical requirements in terms of angular resolution for viewing-angle measurement instruments and in terms of spatial resolution for imaging instruments are derived and verified on practical examples.
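
    The requirement that instruments match the resolution of the human eye can be made concrete with the classic ~1 arcmin acuity limit; the helper and numbers below are illustrative, not taken from the paper.

```python
import math

ARCMIN = 1.0 / 60.0   # degrees; approximate acuity limit of the human eye

def eye_resolvable_pitch_mm(viewing_distance_mm):
    """Smallest detail (mm) the eye resolves at a given distance under
    the ~1 arcmin acuity limit. An imaging instrument characterizing a
    3D display from the same position should resolve at least this
    finely on the panel surface."""
    return viewing_distance_mm * math.tan(math.radians(ARCMIN))

# at a 500 mm viewing distance the eye resolves roughly 0.15 mm
pitch = eye_resolvable_pitch_mm(500.0)
```

    Because the relation is linear in distance, doubling the viewing distance doubles the coarsest acceptable instrument resolution on the panel.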

  17. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two optical paths are possible according to the polarization state of the image. The optical path of the image can therefore be switched, shifting the viewing zone in a lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. To realize full-color images in each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device from a liquid crystal (LC) display. In experiments, a prototype of a ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  18. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigue induced by active shutter stereoscopic 3D (S3D) displays has been reported using event-related brain potentials (ERP). An important question, answered here, is whether such effects can also be found with passive polarized S3D displays. Sixty-eight healthy participants were divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos on a passive polarized display or to a 2D display. The age and fluid intelligence ability of the participants were controlled between the groups. The ERP results do not show any significant differences between the S3D and 2D groups in terms of visual and mental fatigue aftereffects. Hence, we conclude that passive polarized S3D display technology may not induce the visual and/or mental fatigue that increases cognitive load and suppresses ERP components.

  19. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high-quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders of CG movies: how best to map a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide results relating to both 3D image quality and rendering performance.
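
    A minimal sketch of the DIBR idea mentioned above: warp one rendered view into a second eye's view by shifting pixels horizontally according to the depth buffer. Real drivers must also handle occlusion ordering and inpaint the disoccluded holes, which this sketch leaves unfilled; the disparity convention here is an assumption.

```python
import numpy as np

def dibr_shift(image, depth, max_disparity=8):
    """Warp a rendered view into one eye's view using its depth buffer
    (Depth-Image-Based Rendering, forward-warping sketch).

    image: (H, W) array; depth: (H, W) in [0, 1] with 0 = near.
    Nearer pixels receive larger horizontal disparity. Returns the
    warped image and a mask of pixels that were actually written
    (unwritten pixels are disocclusion holes).
    """
    H, W = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((H, W), dtype=bool)
    disparity = (max_disparity * (1.0 - depth)).astype(int)
    for y in range(H):
        for x in range(W):
            xs = x + disparity[y, x]
            if 0 <= xs < W:
                out[y, xs] = image[y, x]   # note: no near-over-far ordering here
                filled[y, xs] = True
    return out, filled
```

    The appeal for game drivers is that this warp costs far less than a second full render of the scene, at the price of hole-filling artifacts near depth discontinuities.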

  20. Key factors in the design of a LED volumetric 3D display system

    NASA Astrophysics Data System (ADS)

    Lin, Yuanfang; Liu, Xu; Yao, Yi; Zhang, Xiaojie; Liu, Xiangdong; Lin, Fengchun

    2005-01-01

    Through careful consideration of the key factors that affect voxel attributes and image quality, a volumetric three-dimensional (3D) display system employing the rotation of a two-dimensional (2D) thin active panel was developed. It was designed as a lower-cost 3D visualization platform for experimentation and demonstration. Light-emitting diodes (LEDs) were arranged into a 256×64 dot matrix on a single surface of the panel, which was positioned symmetrically about the axis of rotation; the motor and the necessary supporting structures were located below the panel. LEDs with a 500 ns response time, external dimensions of 1.6 mm×0.8 mm×0.6 mm, and horizontal and vertical spacings of 0.38 mm and 0.43 mm were adopted. The system is functional, providing 512×256×64, i.e. over 8 million, addressable voxels within a 292 mm×165 mm cylindrical volume at a refresh frequency in excess of 16 Hz. Due to persistence of vision, momentarily addressed voxels are perceived and fused into a 3D image. Many static and dynamic 3D scenes were displayed, which can be viewed directly from any position with few occlusion zones and dead zones. Important depth cues such as binocular disparity and motion parallax are satisfied naturally.
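
    The quoted voxel count and refresh rate imply the per-slice timing budget sketched below; the arithmetic is illustrative, with the 512 angular positions per revolution taken from the stated voxel dimensions.

```python
# Check the voxel budget and derive the per-slice timing it implies.
leds_radial, leds_vertical = 256, 64          # LED matrix on the panel
angular_positions = 512                        # azimuthal slices per revolution
voxels = angular_positions * leds_radial * leds_vertical   # 512 x 256 x 64

refresh_hz = 16                                # revolutions per second
# every LED must be re-addressed once per angular slice
led_update_rate = angular_positions * refresh_hz   # slice updates per second
slice_period_us = 1e6 / led_update_rate            # time available per slice
```

    The resulting slice period of roughly 122 µs is comfortably longer than the LEDs' 500 ns response time, which is why the chosen LEDs can keep up with the rotation.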

  1. Full parallax viewing-angle enhanced computer-generated holographic 3D display system using integral lens array.

    PubMed

    Choi, Kyongsik; Kim, Joohwan; Lim, Yongjun; Lee, Byoungho

    2005-12-26

    A novel full-parallax, viewing-angle-enhanced computer-generated holographic (CGH) three-dimensional (3D) display system is proposed and implemented by combining an integral lens array and colorized synthetic phase holograms displayed on a phase-type spatial light modulator. To analyze the viewing-angle limitations of our CGH 3D display system, we provide some theoretical background and introduce a simple ray-tracing method for 3D image reconstruction. With this method we can obtain continuously varying full-parallax 3D images with a viewing angle of about ±6°. To design the colorized phase holograms, we used a modified iterative Fourier transform algorithm and obtained a high diffraction efficiency (~92.5%) and a large signal-to-noise ratio (~11 dB) in our simulation results. Finally we show experimental results that verify our concept and demonstrate the full-parallax viewing-angle-enhanced color CGH display system.

  2. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented bundle of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  3. Investigation of a 3D head-mounted projection display using retro-reflective screen.

    PubMed

    Héricz, Dalma; Sarkadi, Tamás; Lucza, Viktor; Kovács, Viktor; Koppa, Pál

    2014-07-28

    We propose a compact head-worn 3D display which provides glasses-free full motion parallax. Two picoprojectors placed on the viewer's head project images on a retro-reflective screen that reflects left and right images to the appropriate eyes of the viewer. The properties of different retro-reflective screen materials have been investigated, and the key parameters of the projection - brightness and cross-talk - have been calculated. A demonstration system comprising two projectors, a screen tracking system and a commercial retro-reflective screen has been developed to test the visual quality of the proposed approach.

  4. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  5. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  6. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to move freely about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software designs of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  7. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.

  8. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays.

    PubMed

    Kim, Chi Jung; Park, Sangin; Won, Myeung Ju; Whang, Mincheol; Lee, Eui Chul

    2013-09-26

    Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations in heart rate, indicating little difference between the groups. In contrast, galvanic skin response and skin temperature showed weak positive correlations, with an average difference between viewing 2D and 3D. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.

  9. Realization of an aerial 3D image that occludes the background scenery.

    PubMed

    Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

    2014-10-06

    In this paper we describe an aerial 3D image that occludes far background scenery, based on coarse integral volumetric imaging (CIVI) technology. Many volumetric display devices present floating 3D images, but most have not reproduced visual occlusion. CIVI is a kind of multilayered integral imaging that realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. Conventional CIVI, however, cannot show a deep space, because the number of layered panels is limited by the low transmittance of each panel. To overcome this problem, we propose a novel optical design to attain an aerial 3D image that occludes far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image, and the system alternates between an aerial image mode and a background image mode. In the aerial image mode, the elemental images are shown on the CIVI display and the inserted translucent display is uniformly translucent. In the background image mode, black shadows of the elemental images on a white background are shown on the CIVI display and the background scenery is displayed on the inserted translucent panel. By alternating these two modes at 120 Hz, an aerial 3D image that visually occludes the far background scenery is perceived by the viewer.

  10. A time-sequential autostereoscopic 3D display using a vertical line dithering for utilizing the side lobes

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jin; Park, Minyoung

    2014-11-01

    In spite of the development of various autostereoscopic three-dimensional (3D) technologies, the inferior resolution of the realized 3D image remains a severe problem. To address it, time-sequential 3D displays providing 3D images with higher resolution have been developed and have attracted much attention. Among them, a method using a directional backlight unit (DBLU) is an effective choice for liquid crystal displays (LCDs) with higher frame rates such as 120 Hz. In a conventional time-sequential system, however, the insufficient frame rate results in a flicker problem, i.e. a perceptible fluctuation of image brightness. A dot dithering method can reduce that problem, but it made 3D images unobservable in the side lobes because the image data and the directivity of the light rays from the DBLU do not match there. In this paper, we propose a new vertical line dithering method that expands the area of 3D image observation by utilizing the side lobes. Since the side lobes are located to the left and right of the central lobe, the image data on the LCD panel and the directivity of the light rays from the DBLU need to be arranged with continuity in the horizontal direction. Although the 3D images observed in the side lobes are flipped, utilizing the side lobes can increase the number of observers in the horizontal direction.

  11. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties.

    PubMed

    Johnson, Paul V; Kim, Joohwan; Banks, Martin S

    2015-04-06

    Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks.
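
    The row-to-eye assignment of the spatiotemporal-interlacing protocol can be sketched in one line: even and odd rows go to opposite eyes, and the assignment swaps every frame, so over any two frames each eye receives every row. (The choice of which eye gets even rows on frame 0 is arbitrary here.)

```python
def eye_for_row(row, frame):
    """Which eye a pixel row feeds under spatiotemporal interlacing:
    even rows go to one eye and odd rows to the other, and the
    assignment alternates with each frame."""
    return 'L' if (row + frame) % 2 == 0 else 'R'
```

    Pure spatial interlacing would ignore `frame` (each eye permanently loses half the rows), and pure temporal interlacing would ignore `row` (each eye gets whole frames alternately); this protocol interpolates between the two.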

  12. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties

    PubMed Central

    Johnson, Paul V.; Kim, Joohwan; Banks, Martin S.

    2015-01-01

    Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks. PMID:25968758

  13. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at near range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on this optical-sensor system, we propose four methods supporting different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED whether it is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator enables interaction with auto-stereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  14. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with the new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique that uses advanced imaging features and custom Windows-based software, built on the DirectX 9 API, to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  15. Depth-of-Focus Affects 3D Perception in Stereoscopic Displays.

    PubMed

    Vienne, Cyril; Blondé, Laurent; Mamassian, Pascal

    2015-01-01

    Stereoscopic systems present binocular images on a planar surface at a fixed distance. They induce cues to flatness, indicating that images are presented on a single surface and specifying the relative depth of that surface. This study focuses on a second problem, which arises when a 3D object's distance differs from the display distance. Because binocular disparity must be scaled using an estimate of viewing distance, object depth can be affected through disparity scaling. Two previous experiments revealed that stereoscopic displays can affect depth perception due to conflicting accommodation and vergence cues at near distances. In this study, depth perception is evaluated at farther accommodation and vergence distances using a commercially available 3D TV. In Experiment 1, we evaluated depth perception of 3D stimuli at different vergence distances for a large pool of participants. We observed a strong effect of vergence distance that was bigger for younger than for older participants, suggesting that the effect of accommodation was reduced in participants with emerging presbyopia. In Experiment 2, we extended the 3D estimations by varying both the accommodation and vergence distances. We also tested the hypothesis that setting accommodation open-loop by constricting pupil size could decrease the contribution of focus cues to perceived distance. We found that depth constancy was affected by accommodation and vergence distances and that the accommodation-distance effect was reduced with a larger depth-of-focus. We discuss these results with regard to the effectiveness of focus cues as a distance signal. Overall, these results highlight the importance of appropriate focus cues in stereoscopic displays at intermediate viewing distances.

  16. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends its location information (GPS, odometry, or star tracking), and locate the vehicle

  17. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    analysis. (c.) Real-time detection & analysis of human gait: using a video camera we capture the walking human silhouette for pattern modeling and gait analysis. Fig. 5 shows the scanning result that is fed into a Geomagic software tool for 3D meshing. Fig. 5: 3D scanning result. In

  18. Technical solutions for a full-resolution autostereoscopic 2D/3D display technology

    NASA Astrophysics Data System (ADS)

    Stolle, Hagen; Olaya, Jean-Christophe; Buschbeck, Steffen; Sahm, Hagen; Schwerdtner, Armin

    2008-02-01

    Auto-stereoscopic 3D displays capable of high quality, full-resolution images for multiple users can only be created with time-sequential systems incorporating eye tracking and a dedicated optical design. The availability of high speed displays with 120Hz and faster eliminated one of the major hurdles for commercial solutions. Results of alternative display solutions from SeeReal show the impact of optical design on system performance and product features. Depending on the manufacturer's capabilities, system complexity can be shifted from optics to SLM with an impact on viewing angle, number of users and energy efficiency, but also on manufacturing processes. A proprietary solution for eye tracking from SeeReal demonstrates that the required key features can be achieved and implemented in commercial systems in a reasonably short time.

  19. Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eyeglasses, but crosstalk reduces the 3D effect and induces dizziness. Crosstalk-related problems degrade the 3D effect, clearness, and realism of the 3D image. A novel method of reducing crosstalk is designed and tested; the method is based on fusion of the viewing zones and real-time eye position tracking. It is shown experimentally that crosstalk is effectively reduced at any position around the optimal viewing distance.

  20. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of the view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which are of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light from display panel pixels is numerically simulated through the PB slits to the viewing zone. The simulation results are then compared with the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view-image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 offers maximized brightness of the view images, while one corresponding to a Fresnel number of 0.4∼0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
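
As a rough illustration of the Fresnel-number criterion reported above, a minimal sketch (the numerical parameter values below are hypothetical examples, not taken from the paper):

```python
def fresnel_number(slit_width_m, wavelength_m, distance_m):
    """Fresnel number N = a**2 / (L * wavelength), where a is the slit
    half-width and L the propagation distance."""
    a = slit_width_m / 2.0
    return a * a / (distance_m * wavelength_m)

# Hypothetical parameters: a 1 mm barrier slit, 550 nm (green) light,
# 1 m of propagation from the barrier to the viewing zone.
n = fresnel_number(1e-3, 550e-9, 1.0)   # ~0.45
in_reported_band = 0.4 <= n <= 0.7      # inside the paper's 0.4-0.7 window
```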

  1. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  2. Multi-user 3D display using a head tracker and RGB laser illumination source

    NASA Astrophysics Data System (ADS)

    Surman, Phil; Sexton, Ian; Hopf, Klaus; Bates, Richard; Lee, Wing Kai; Buckley, Edward

    2007-05-01

    A glasses-free (auto-stereoscopic) 3D display that will serve several viewers who have freedom of movement over a large viewing region is described. This operates on the principle of employing head position tracking to provide regions referred to as exit pupils that follow the positions of the viewers' eyes in order for appropriate left and right images to be seen. A non-intrusive multi-user head tracker controls the light sources of a specially designed backlight that illuminates a direct-view LCD.

  3. Compact multi-projection 3D display using a wedge prism

    NASA Astrophysics Data System (ADS)

    Park, Soon-gi; Lee, Chang-Kun; Lee, Byoungho

    2015-03-01

    We propose a compact multi-projection system based on the integral floating method with waveguide projection. Waveguide projection can reduce the projection distance by multiple folding of the optical path inside the waveguide. The proposed system is composed of a wedge prism, which is used as a waveguide, multiple projection units, and an anisotropic screen made of a floating lens combined with a vertical diffuser. As the projected image propagates through the wedge prism, it is reflected at the surfaces of the prism by total internal reflection, and the final view image is created by the floating lens at the viewpoints. The position of each viewpoint is determined by the lens equation, and the viewpoint interval is calculated from the magnification of the collimating lens and the interval of the projection units. We believe that the proposed method can be useful for implementing a large-scale autostereoscopic 3D system with high-quality 3D images using projection optics. In addition, the reduced volume of the system will alleviate installation constraints and widen the applications of multi-projection 3D displays.
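
The viewpoint geometry described above follows from the thin-lens equation; a minimal sketch (the focal length, distances, and projection-unit spacing below are hypothetical examples, not the paper's values):

```python
def image_distance(focal_m, object_dist_m):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i (the viewpoint plane)."""
    return 1.0 / (1.0 / focal_m - 1.0 / object_dist_m)

def viewpoint_interval(focal_m, object_dist_m, unit_interval_m):
    """Viewpoint spacing = lateral magnification x projection-unit spacing."""
    m = image_distance(focal_m, object_dist_m) / object_dist_m
    return m * unit_interval_m

# Hypothetical numbers: 0.2 m floating lens, projection units imaged
# from 0.25 m away, units spaced 30 mm apart.
d_view = image_distance(0.2, 0.25)         # 1.0 m to the viewpoint plane
gap = viewpoint_interval(0.2, 0.25, 0.03)  # 0.12 m between viewpoints
```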

  4. Design and Perception Testing of a Novel 3-D Autostereoscopic Holographic Display System

    DTIC Science & Technology

    1999-01-01

    U.S. Army Tank-Automotive Command (TACOM) researchers are in the early stages of developing an autostereoscopic, 3D holographic visual display system. The current holographic system is being used to conduct 3D visual perception studies... (Grace M. Bochenek, Thomas J. Meitzler, Paul Muench...; TACOM, Warren, MI 48397-5000)

  5. A guide for human factors research with stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Pinkus, Alan R.

    2015-05-01

    In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
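
As an example of the disparity calculations the guide discusses, a minimal sketch of on-screen parallax from similar-triangle geometry (the function name and numeric values are ours, chosen for illustration):

```python
def screen_parallax(ipd_m, screen_dist_m, point_dist_m):
    """On-screen separation between a point's left- and right-eye
    projections: p = IPD * (z - D) / z for a point at distance z viewed
    with the screen at distance D (similar triangles). Positive =
    uncrossed disparity (point behind the screen), negative = crossed
    (in front). The small angular disparity is roughly p / D radians."""
    return ipd_m * (point_dist_m - screen_dist_m) / point_dist_m

# Typical IPD 63 mm, screen at 0.7 m: a point at 1.4 m needs 31.5 mm
# of uncrossed parallax; a point at the screen plane needs none.
p_behind = screen_parallax(0.063, 0.7, 1.4)   # 0.0315 m
p_on = screen_parallax(0.063, 0.7, 0.7)       # 0.0
```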

  6. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS), and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with a field of view of 7 cm for each eye and a focal length of 25 cm, showing images produced with the system. We also describe the multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of an MRI or CT image, and showing results for a 3D angioresonance image.

  7. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance.

    PubMed

    Qiu, Jimmy; Hope, Andrew J; Cho, B C John; Sharpe, Michael B; Dickie, Colleen I; DaCosta, Ralph S; Jaffray, David A; Weersink, Robert A

    2012-10-21

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ∼2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. 
With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue

  8. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  9. Monocular accommodation condition in 3D display types through geometrical optics

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in the 3D display environment is a significant problem for 3D display commercialization. Several 3D display types (eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays) are analyzed in detail with respect to how well they satisfy monocular accommodation, using geometrical-optics calculations. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high level of accommodation satisfaction under the MF display condition. Additionally, the possibility of monocular depth perception (a 3D effect) with a monocular MF display is discussed.

  10. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround, combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  11. New approach on calculating multiview 3D crosstalk for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Jung, Sung-Min; Lee, Kyeong-Jin; Kang, Ji-Na; Lee, Seung-Chul; Lim, Kyoung-Moon

    2012-03-01

    In this study, we suggest a new concept of 3D crosstalk for auto-stereoscopic displays and obtain 3D crosstalk values of several multi-view systems based on the suggested definition. First, we measure the angular dependence of luminance for auto-stereoscopic displays under various test patterns corresponding to each view of a multi-view system, and then calculate the 3D crosstalk from the measured luminance profiles according to our new definition. Our new approach gives a single 3D crosstalk value per device without any ambiguity, and yields values of a similar order to those of conventional stereoscopic displays. These results are compared with the conventional 3D crosstalk values of selected auto-stereoscopic displays, such as 4-view and 9-view systems. From the results, we believe that this new approach is very useful for controlling 3D crosstalk in 3D display manufacturing and for benchmarking 3D performance among various auto-stereoscopic displays.
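
A sketch of the conventional, measurement-based crosstalk definition that a single-value metric like the one above builds on (the function names and toy luminance values are ours; the paper's exact definition differs in detail):

```python
def view_crosstalk(luminances, intended, black_level=0.0):
    """Crosstalk at one viewing position: luminance leaking from all
    unintended views divided by the intended view's luminance, with the
    black level subtracted. luminances[i] is the luminance measured at
    this position when only view i displays white."""
    signal = luminances[intended] - black_level
    leak = sum(l - black_level
               for i, l in enumerate(luminances) if i != intended)
    return leak / signal

def device_crosstalk(profiles):
    """Average over all design viewing positions -> one value per device."""
    return sum(view_crosstalk(p, i)
               for i, p in enumerate(profiles)) / len(profiles)

# Toy 3-view example: profiles[i][j] = luminance of view j measured at
# viewing position i; the intended view dominates at each position.
profiles = [[100.0, 4.0, 1.0],
            [3.0, 100.0, 3.0],
            [1.0, 4.0, 100.0]]
# view_crosstalk(profiles[0], 0) -> 0.05 (5% leakage at position 0)
```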

  12. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  13. Research of range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Yang, Haitao; Zhao, Hongli; Youchen, Fan

    2016-10-01

    Laser-image-data-based target recognition technology is one of the key technologies of laser active imaging systems. This paper discusses the status of 3-D imaging development at home and abroad, analyzes the current technological bottlenecks, and describes a prototype range-gated system built to obtain a set of range-gated slice images, from which 3-D images of the target were constructed by the binary method and the centroid method, respectively; by constructing different numbers of slice images, we explored the relationship between the number of images and reconstruction accuracy in the 3-D image reconstruction process. The experiment analyzed the impact of the two algorithms, the binary method and the centroid method, on the results of 3-D image reconstruction. For the binary method, a comparative analysis was made of the impact of different threshold values on the reconstruction results, with threshold values of 0.1, 0.2, 0.3 and an adaptive threshold selected for 3-D reconstruction of the slice images. For the centroid method, 15, 10, 6, 3, and 2 images, respectively, were used for 3-D reconstruction. Experimental results showed that with the same number of slice images, the accuracy of the centroid method was higher than that of the binary method, and the binary method depended strongly on the threshold selection; as the number of slice images dwindled, the accuracy of images reconstructed by the centroid method continued to decrease, and at least three slice images were required to obtain one 3-D image.
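
The two reconstruction rules compared in the experiment can be sketched as follows (a minimal reading with our own function names; `binary_range` is one plausible interpretation of the paper's binary method, not its confirmed implementation):

```python
def centroid_range(slices, gate_ranges):
    """Per-pixel range by intensity-weighted centroid over the gated
    slice images: z(x, y) = sum_k I_k(x, y) * z_k / sum_k I_k(x, y)."""
    rows, cols = len(slices[0]), len(slices[0][0])
    depth = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            weights = [s[y][x] for s in slices]
            total = sum(weights)
            if total:
                depth[y][x] = sum(w * z for w, z in
                                  zip(weights, gate_ranges)) / total
    return depth

def binary_range(slices, gate_ranges, threshold):
    """Binarized variant: weight each slice 1 if its intensity exceeds
    the threshold, else 0, then take the centroid of surviving gates."""
    binarized = [[[1.0 if v > threshold else 0.0 for v in row]
                  for row in s] for s in slices]
    return centroid_range(binarized, gate_ranges)
```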

  14. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry and provide insights and perspectives on generating 3D mass spectral data, along with a discussion of the steps necessary to generate a 3D image volume. PMID:22276611

  15. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
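
For reference, the standard (symmetric) mutual information that the paper's asymmetric measure modifies can be computed from a joint histogram; a minimal pure-Python sketch over quantized intensity sequences:

```python
from collections import Counter
from math import log

def mutual_information(a, b):
    """Mutual information of two aligned, quantized intensity sequences,
    computed from the joint histogram:
    I(A;B) = sum_xy p(x,y) * log(p(x,y) / (p(x) * p(y))).
    This is the textbook symmetric MI; the paper's asymmetric variant
    adapts it to tolerate the low quality of the reconstructed image."""
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    return sum((c / n) * log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

# Perfectly correlated binary images give MI = ln 2; independent ones give 0.
```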

  16. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structure of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive-index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier-optics and information-transfer point of view, we use 3D transfer-function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut', compared to the case of object rotation, where a 'diabolo' is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high-numerical-aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case.
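
    The straight-ray premise behind CT can be checked numerically via the Fourier projection-slice theorem: the 1D DFT of a parallel projection of a 2D object equals the central slice of the object's 2D DFT. A minimal pure-Python sketch with a toy 8x8 object and naive DFTs (illustrative only; not the 3D CTF computation described above):

```python
import cmath

def dft1(v):
    """Naive 1D discrete Fourier transform."""
    N = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2(f):
    """Naive 2D DFT: transform rows (over y), then columns (over x)."""
    N = len(f)
    rows = [dft1(row) for row in f]
    cols = [dft1([rows[x][l] for x in range(N)]) for l in range(N)]
    return [[cols[l][k] for l in range(N)] for k in range(N)]  # F[k][l]

# toy 2D "object"
f = [[(3 * x + y) % 5 for y in range(8)] for x in range(8)]
projection = [sum(row) for row in f]         # straight-ray projection along y
slice_from_projection = dft1(projection)     # 1D spectrum of the projection
F = dft2(f)
central_slice = [F[k][0] for k in range(8)]  # the l = 0 line of the 2D spectrum

err = max(abs(a - b) for a, b in zip(slice_from_projection, central_slice))
print(err < 1e-6)  # True: projection spectrum equals the central Fourier slice
```

    Diffraction tomography replaces these flat Fourier slices with curved Ewald-sphere caps, which is what the 3D transfer function analysis above quantifies.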

  17. Design of extended viewing zone at autostereoscopic 3D display based on diffusing optical element

    NASA Astrophysics Data System (ADS)

    Kim, Min Chang; Hwang, Yong Seok; Hong, Suk-Pyo; Kim, Eun Soo

    2012-03-01

    In this paper, as a next step beyond current glasses-type 3D displays, a viewing zone is designed for a glasses-free 3D display using a diffusing optical element (DOE). The viewing zone of the proposed method is larger than that of current parallax-barrier or lenticular methods. The proposed method is shown to enable expansion and adjustment of the viewing-zone area according to viewing distance.

  18. Color Flat Panel Displays: 3D Autostereoscopic Brassboard and Field Sequential Illumination Technology.

    DTIC Science & Technology

    1997-06-01

    DTI has advanced autostereoscopic and field sequential color (FSC) illumination technologies for flat panel displays. Using a patented backlight...technology, DTI has developed a prototype 3D flat panel color display that provides stereoscopic viewing without the need for special glasses or other... autostereoscopic viewing. Discussions of system architecture, critical component specifications and resultant display characteristics are provided. Also

  19. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights on optical 3D imagery. We explore the advantages of laser imagery to form a three-dimensional image of a scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from a limited number of available 2D data. The 2D laser data used in this paper come either from simulations based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with Maximum Intensity Projection can generate 3D views of the considered scene, from which we can extract the 3D concealed object in real time. With different original numerical and experimental examples, we investigate the effects of the input contrasts and show the robustness and stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
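
    Independently of the patented reflective-tomography pipeline, the Maximum Intensity Projection (MIP) step can be sketched on its own: each ray through the reconstructed volume keeps only its brightest voxel, so a strongly reflecting concealed object stands out against a weaker background. A hypothetical pure-Python toy (4x4x4 volume; not the authors' code):

```python
# toy 4x4x4 volume: weak background with one bright "concealed" voxel
N = 4
volume = [[[0.1 for _ in range(N)] for _ in range(N)] for _ in range(N)]
volume[2][1][3] = 1.0  # the concealed object

def mip(vol):
    """Maximum Intensity Projection along z: keep the brightest voxel on each (x, y) ray."""
    return [[max(vol[x][y]) for y in range(len(vol[x]))] for x in range(len(vol))]

view = mip(volume)
print(view[2][1])  # 1.0 -> the bright voxel survives projection
print(view[0][0])  # 0.1 -> background elsewhere
```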

  20. 3D-Holoscopic Imaging: A New Dimension to Enhance Imaging in Minimally Invasive Therapy in Urologic Oncology

    PubMed Central

    Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar

    2013-01-01

    Abstract Background and Purpose Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine data taken from CT and MRI to produce 3D-holographic representations of anatomy, without special eyewear, in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time and viewed by multiple viewers independently of their position, without 3D eyewear. Methods We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results The results demonstrate that medical 3D-holoscopic content can be displayed on a commercially available multiview autostereoscopic display. Conclusion The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging. PMID:23216303

  1. Multiview holographic 3D dynamic display by combining a nano-grating patterned phase plate and LCD.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Ye, Yan; Chen, Xiangyu; Chen, Linsen

    2017-01-23

    Limited by the refreshable data volume of commercial spatial light modulators (SLMs), electronic holography can hardly provide satisfactory 3D live video. Here we propose a holography-based multiview 3D display that separates the phase information of a light field from the amplitude information. The phase information was recorded by a 5.5-inch 4-view phase plate with full coverage of pixelated nano-grating arrays. Because only the amplitude information needs to be updated, the refreshing data volume in a 3D video display is significantly reduced. A 5.5-inch TFT-LCD with a pixel size of 95 μm was used to modulate the amplitude information of the light field at a rate of 20 frames per second. To avoid crosstalk between viewing points, the spatial frequency and orientation of each nano-grating in the phase plate were fine-tuned so that the transmitted light converged to the viewing points. The angular divergence was measured to be 1.02 degrees (FWHM) on average, slightly larger than the diffraction limit of 0.94 degrees. By refreshing the LCD, a series of animated sequential 3D images was dynamically presented at the 4 viewing points. The resolution of each view was 640 × 360. Images for each viewing point were well separated and no ghost images were observed. The resolution of the images and the refresh rate of the 3D dynamic display can be improved by employing another SLM. The recorded 3D videos show the great potential of the proposed holographic 3D display for use in mobile electronics.

  2. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images in order to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "shift and subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images for displaying a 3D image. Because the number of viewpoint images is limited, the viewing area that allows stereoscopic viewing becomes narrow. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple viewpoint images using the depth map, so that motion parallax can be generated at the same time.
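
    The paper does not spell out the update rule, but the flavor of an alternating "shift and subtraction" decomposition can be sketched with an assumed 1D additive two-layer model, where the left/right views see the rear layer shifted by +/-1 pixel (circular shifts keep the toy self-contained; real layered displays are clamped and often multiplicative, which this sketch ignores):

```python
# Illustrative sketch: recover two additive layers from two shifted views by
# alternately subtracting each layer's contribution and averaging the estimates.
N = 8
front_true = [0, 1, 2, 3, 3, 2, 1, 0]
rear_true  = [1, 0, 0, 2, 2, 0, 0, 1]
v_left  = [front_true[i] + rear_true[(i - 1) % N] for i in range(N)]
v_right = [front_true[i] + rear_true[(i + 1) % N] for i in range(N)]

front = [0.0] * N
rear  = [0.0] * N
for _ in range(60):
    # subtract the rear layer's shifted contribution from each view, then average
    front = [0.5 * ((v_left[i] - rear[(i - 1) % N]) + (v_right[i] - rear[(i + 1) % N]))
             for i in range(N)]
    # subtract the updated front layer to re-estimate the rear layer
    rear = [0.5 * ((v_left[(i + 1) % N] - front[(i + 1) % N]) +
                   (v_right[(i - 1) % N] - front[(i - 1) % N]))
            for i in range(N)]

residual = max(
    max(abs(front[i] + rear[(i - 1) % N] - v_left[i]) for i in range(N)),
    max(abs(front[i] + rear[(i + 1) % N] - v_right[i]) for i in range(N)),
)
print(residual < 1e-6)  # True: the two recovered layers reproduce both views
```

    The recovered layers need not equal the originals (the additive model has a null space), but together they reproduce both viewpoint images, which is what the display requires.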

  3. Subsampling models and anti-alias filters for 3-D automultiscopic displays.

    PubMed

    Konrad, Janusz; Agniel, Philippe

    2006-01-01

    A new type of three-dimensional (3-D) display recently introduced on the market holds great promise for the future of 3-D visualization, communication, and entertainment. This so-called automultiscopic display can deliver multiple views without glasses, thus allowing a limited "look-around" (correct motion parallax). Central to this technology is the process of multiplexing several views into a single viewable image. This multiplexing is a complex process involving irregular subsampling of the original views. If not preceded by low-pass filtering, it results in aliasing that leads to texture as well as depth distortions. In order to eliminate this aliasing, we propose to model the multiplexing process with lattices, find their parameters, and then design optimal anti-alias filters. To this effect, we use multidimensional sampling theory and basic optimization tools. We derive optimal anti-alias filters for a specific automultiscopic monitor using three models: an orthogonal lattice, a nonorthogonal lattice, and a union of shifted lattices. In the first case, the resulting separable low-pass filter offers significant aliasing reduction, which is further improved by a hexagonal-passband low-pass filter for the nonorthogonal lattice model. A more accurate model is obtained using a union of shifted lattices, but due to the complex nature of the repeated spectra, practical filters designed in this case offer no additional improvement. We also describe a practical method to design finite-precision, low-complexity filters that can be implemented using modern graphics cards.
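
    The role of anti-alias filtering is already visible in 1D: a Nyquist-rate component aliases onto DC when every other sample is kept, while even a crude two-tap averaging low-pass (a stand-in for the optimized lattice filters above, not their design) removes it before subsampling:

```python
N = 16
x = [(-1) ** n for n in range(N)]   # highest representable frequency
subsampled = x[::2]                 # keep every other sample
print(subsampled)                   # all 1s: the component aliased onto DC

# two-tap averaging filter (crude anti-alias low-pass), then subsample
y = [0.5 * (x[n] + x[(n + 1) % N]) for n in range(N)]
subsampled_filtered = y[::2]
print(subsampled_filtered)          # all 0.0: the alias is suppressed
```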

  4. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and three-dimensional motion vectors, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  5. Reproducibility of crosstalk measurements on active glasses 3D LCD displays based on temporal characterization

    NASA Astrophysics Data System (ADS)

    Tourancheau, Sylvain; Wang, Kun; Bułat, Jarosław; Cousseau, Romain; Janowski, Lucjan; Brunnström, Kjell; Barkowsky, Marcus

    2012-03-01

    Crosstalk is one of the main display-related perceptual factors degrading image quality and causing visual discomfort on 3D displays. It causes visual artifacts such as ghosting, blurring, and a lack of color fidelity, which are considerably annoying and can make it difficult to fuse stereoscopic images. On stereoscopic LCDs with shutter glasses, crosstalk is mainly due to dynamic temporal effects: imprecise target luminance (highly dependent on the combination of left-view and right-view pixel color values in disparity regions) and synchronization issues between the shutter glasses and the LCD. These factors largely influence the reproducibility of crosstalk measurements across laboratories, and need to be evaluated at several different locations under similar and differing conditions. In this paper we propose a fast and reproducible measurement procedure for crosstalk based on high-frequency temporal measurements of both display and shutter responses. It permits full characterization of crosstalk for any right/left color combination and at any spatial position on the screen. Such a reliable objective crosstalk measurement method at several spatial positions is considered a mandatory prerequisite for evaluating the perceptual influence of crosstalk in further subjective studies.

  6. Analysis of multiple recording methods for full resolution multi-view autostereoscopic 3D display system incorporating VHOE

    NASA Astrophysics Data System (ADS)

    Hwang, Yong Seok; Cho, Kyu Ha; Kim, Eun Soo

    2014-03-01

    In this paper, we propose a multiple-recording process of photopolymer for a full-color multi-view autostereoscopic 3D display system based on a volume holographic optical element (VHOE). To overcome problems of conventional glasses-free 3D displays, such as low resolution and a limited viewing zone, we designed multiple recording conditions of the VHOE for multi-view display. It is verified that the VHOE can be fabricated optically by angle-multiplexed recording of the pre-designed multiple viewing zones, recorded uniformly through an optimized exposure-time scheduling scheme. A VHOE-based backlight system for a 4-view stereoscopic display is implemented, in which the output beams from the light guide plate (LGP), which serve as reference beams, are sequentially synchronized with the respective stereo images displayed on the LCD panel.

  7. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  8. See-through integral imaging display with background occlusion capability.

    PubMed

    Yamaguchi, Yuta; Takaki, Yasuhiro

    2016-01-20

    Background occlusion capability is provided to a flat-panel-type integral imaging display that has a transparent screen and can superimpose three-dimensional (3D) images on real scenes. A symmetric integral imaging system comprising two integral imaging systems connected by an additional lens array is proposed. Elemental images are displayed on the flat-panel display of one integral imaging system to generate 3D images, and occlusion mask patterns are displayed on the flat-panel display of the other integral imaging system to selectively block rays from background scenes. The proposed system was constructed and experimentally verified.

  9. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display lets users enjoy stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be simultaneously displayed without degrading resolution or increasing display cost unacceptably. An alternative to multiple-view presentation is to measure the position of the observer with a viewer-tracking sensor; viewer tracking is a critical component for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to project the content optically onto the left and right eyes of the user accurately, this study develops a real-time viewer-tracking technique that allows the user to move around freely when watching the autostereoscopic display. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. An Edge Orientation Histogram (EOH) feature with Real AdaBoost is also applied to improve on the original AdaBoost algorithm, and the AdaBoost algorithm with Haar features from Intel's OpenCV library is used to detect human faces, with rotated images enhancing detection accuracy. The frame rate of the viewer-tracking process reaches up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still influenced by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are evaluated in this study.
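
    The block-matching part of such a tracker can be sketched with a plain sum-of-absolute-differences (SAD) search: find the offset in the next frame that best matches the reference block. This is an illustrative sketch only; the tracker above estimates four motion parameters and uses eigenspace face detection, which are not reproduced here:

```python
def sad(block, frame, x, y):
    """Sum of absolute differences between a block and a frame region at (x, y)."""
    return sum(abs(block[i][j] - frame[x + i][y + j])
               for i in range(len(block)) for j in range(len(block[0])))

def best_match(block, frame):
    """Exhaustive SAD search: the candidate offset with the smallest difference."""
    h, w = len(block), len(block[0])
    candidates = [(x, y) for x in range(len(frame) - h + 1)
                  for y in range(len(frame[0]) - w + 1)]
    return min(candidates, key=lambda p: sad(block, frame, p[0], p[1]))

# 6x6 "next frame" with a distinctive 2x2 pattern (the tracked region) at (3, 2)
frame = [[0] * 6 for _ in range(6)]
pattern = [[9, 7], [5, 8]]
for i in range(2):
    for j in range(2):
        frame[3 + i][2 + j] = pattern[i][j]

print(best_match(pattern, frame))  # (3, 2)
```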

  10. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

    Integral imaging is a technique that has the capability of providing not only the spatial but also the angular information of three-dimensional (3D) scenes. Important applications include 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes into account the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure whose period depends directly on the axial position of the object. By considering different periods of the IRF, we recover the depth information of the 3D scene by deconvolution. An advantage of our method is that nonconventional reconstructions can be obtained by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
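
    The link between IRF period and depth can be illustrated without the full 2D deconvolution. In a hypothetical 1D toy, an axial position is encoded as the replica period of a point's signature across elemental images, and scoring candidate periods against the recorded signal recovers it (a stand-in for the paper's period-matched IRF deconvolution, not its implementation):

```python
# 1D toy: depth is encoded as the replica period of a point's signature.
N = 60
true_period = 6

signal = [0.0] * N
for k in range(0, N, true_period):  # replicas across elemental images
    signal[k] = 1.0

def period_score(sig, p):
    """Average signal value when sampling at period p (matched-period score)."""
    samples = sig[::p]
    return sum(samples) / len(samples)

candidates = [4, 5, 6, 7, 8]
best = max(candidates, key=lambda p: period_score(signal, p))
print(best)  # 6: the matched period wins
```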

  11. Recent research results in stereo 3-D pictorial displays at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.

    1990-01-01

    Recent results from a NASA-Langley program which addressed stereo 3D pictorial displays from a comprehensive standpoint are reviewed. The program dealt with human factors issues and display technology aspects, as well as flight display applications. The human factors findings include addressing a fundamental issue challenging the application of stereoscopic displays in head-down flight applications, with the determination that stereoacuity is unaffected by the short-term use of stereo 3D displays. While stereoacuity has been a traditional measurement of depth perception abilities, it is a measure of relative depth, rather than actual depth (absolute depth). Therefore, depth perception effects based on size and distance judgments and long-term stereo exposure remain issues to be investigated. The applications of stereo 3D to pictorial flight displays within the program have repeatedly demonstrated increases in pilot situational awareness and task performance improvements. Moreover, these improvements have been obtained within the constraints of the limited viewing volume available with conventional stereo displays. A number of stereo 3D pictorial display applications are described, including recovery from flight-path offset, helicopter hover, and emulated helmet-mounted display.

  12. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and their dependence on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  13. Multivalent 3D Display of Glycopolymer Chains for Enhanced Lectin Interaction.

    PubMed

    Lin, Kenneth; Kasko, Andrea M

    2015-08-19

    Synthetic glycoprotein conjugates were synthesized through the polymerization of glycomonomers (mannose and/or galactose acrylate) directly from a protein macroinitiator. This design combines the multivalency of polymer structures with a 3D display of saccharides randomly arranged around a central protein structure. The conjugates were tested for their interaction with mannose-binding lectin (MBL), a key protein of immune complement. Increasing mannose number (controlled through polymer chain length) and density (controlled through the comonomer feed ratio of mannose versus galactose) results in greater interaction with MBL. Most significantly, mannose glycopolymers displayed in a multivalent, 3D configuration from the protein exhibit dramatically enhanced interaction with MBL compared to linear glycopolymer chains with similar total valency but lacking 3D display. These findings demonstrate the importance of the 3D presentation of ligand structures for designing biomimetic materials.

  14. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan follows a spiral pattern whose radius varies from 24 to 96 mm, allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target composed of 1-mm dots printed on clear plastic. The absorption coefficient of each dot was approximately the same as that of a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast sizes from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: The CNR, lateral field of view, and penetration depth of our dedicated PAM scanning system are sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
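
    The resolution figure comes from the full width at half-maximum (FWHM) of a line profile across the carbon-fiber image; for a Gaussian profile, FWHM = 2*sqrt(2*ln 2)*sigma, about 2.355*sigma. A small sketch estimating FWHM from samples by linear interpolation, using a synthetic profile (the sigma value is illustrative, not measured data):

```python
import math

sigma = 0.18  # mm, synthetic profile width (illustrative, not measured data)
xs = [i * 0.001 - 1.0 for i in range(2001)]  # positions from -1 to 1 mm
profile = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]

half = max(profile) / 2.0
# locate the two half-maximum crossings by linear interpolation between samples
crossings = []
for i in range(len(profile) - 1):
    a, b = profile[i], profile[i + 1]
    if (a - half) * (b - half) < 0:
        t = (half - a) / (b - a)
        crossings.append(xs[i] + t * (xs[i + 1] - xs[i]))

fwhm = crossings[-1] - crossings[0]
expected = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma
print(abs(fwhm - expected) < 1e-3)  # True: matches the analytic 2.355*sigma
```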

  15. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at each level. The current flux field patterns are sensed, and a density pattern of the bed at each level is determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  16. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  17. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands that seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruit fly brain in minutes, with about a 17-fold improvement in reliability and a tenfold saving in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruit fly brain. V3D can be easily extended through a simple-to-use and comprehensive plug-in interface.

  18. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low-light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon-starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging.
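
    For reference, the linear Stokes parameters behind such a pipeline follow from four analyzer-oriented intensity measurements: S0 = I0 + I90, S1 = I0 - I90, S2 = I45 - I135, with degree of linear polarization sqrt(S1^2 + S2^2)/S0. A minimal deterministic sketch (the paper's photon-counting MLE and total-variation denoising steps are not reproduced):

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization
    from intensities behind four analyzer orientations."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs. vertical preference
    s2 = i45 - i135          # +45 vs. -45 degree preference
    dolp = math.sqrt(s1 * s1 + s2 * s2) / s0
    return s0, s1, s2, dolp

# fully horizontally polarized light: I0 = 1, I90 = 0, I45 = I135 = 0.5
print(linear_stokes(1.0, 0.5, 0.0, 0.5))   # (1.0, 1.0, 0.0, 1.0) -> DoLP = 1
# unpolarized light: equal intensity at every analyzer angle
print(linear_stokes(0.5, 0.5, 0.5, 0.5))   # (1.0, 0.0, 0.0, 0.0) -> DoLP = 0
```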

  19. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
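
    The cueing stage reduces to finding unambiguous local maxima in a degree-of-match map and ranking thumbnails by the scores at those maxima. A small pure-Python sketch with a toy score map, simplifying "unambiguous" to "strictly greater than all 8 neighbors":

```python
# toy degree-of-match map (rows x cols); higher = better match
score = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.6],
    [0.0, 0.1, 0.2, 0.1],
]

def local_maxima(m):
    """Pixels strictly greater than all 8 neighbors (a simplified 'unambiguous' test)."""
    peaks = []
    for r in range(len(m)):
        for c in range(len(m[0])):
            neigh = [m[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < len(m) and 0 <= c + dc < len(m[0])]
            if all(m[r][c] > v for v in neigh):
                peaks.append((m[r][c], r, c))
    return peaks

# cueing: sort candidate locations in descending figure-of-merit for analyst review
cues = sorted(local_maxima(score), reverse=True)
print(cues)  # [(0.9, 1, 1), (0.6, 2, 3)]
```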

  20. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data limited in number and with low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process, such as resolution, camouflage scenario, noise impact, and degree of lacunarity.

  1. Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display.

    PubMed

    Zhong, Qing; Peng, Yifan; Li, Haifeng; Su, Chen; Shen, Weidong; Liu, Xu

    2013-07-01

    Both multiview and light-field reconstructions are proposed for a multiple-projector 3D display system. To compare the performance of the reconstruction algorithms in the same system, an optimized multiview reconstruction algorithm with sub-view-zones (SVZs) is proposed. The algorithm divides the conventional view zones of a multiview display into several SVZs and allocates more view images to them. The optimized reconstruction algorithm unifies the conventional multiview and light-field reconstruction algorithms, and can thus indicate the difference in performance as multiview reconstruction is changed to light-field reconstruction. A prototype consisting of 60 projectors with an arc diffuser as its screen was constructed to verify the algorithms. Comparison of different configurations of SVZs shows that light-field reconstruction provides large-scale 3D images with the smoothest motion parallax; thus it may provide better overall performance for large-scale 360° display than multiview reconstruction.

  2. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently, three imaging spectrometer architectures (tunable filter, dispersive, and Fourier transform) are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated, both for instruments having ideal performance and for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

  3. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  4. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863
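The level-set machinery underlying such reconstructions can be illustrated with a minimal 2D front evolution. This is a generic toy sketch, not the authors' Jacobian-based algorithm: it evolves an implicit interface under a constant speed, using central differences where a production code would use upwind differencing.

```python
import numpy as np

def evolve_level_set(phi, speed=1.0, dt=0.1, steps=50):
    """Evolve the interface {phi = 0} under  phi_t + speed * |grad phi| = 0.

    Central differences stand in for a proper upwind scheme, which is
    adequate only for smooth signed-distance inputs like the one below.
    """
    for _ in range(steps):
        gx = np.gradient(phi, axis=1)
        gy = np.gradient(phi, axis=0)
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi


# Signed distance to a circle of radius 5; positive speed expands the front.
yy, xx = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2) - 5.0
phi1 = evolve_level_set(phi0, speed=1.0)
```

The enclosed region (where `phi < 0`) grows as the front advances, which is the mechanism that lets level-set reconstructions preserve sharp dielectric boundaries while the region morphs.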

  5. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional electrodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array material to find the best match in power, sensitivity and cost and settled on PVDF sheet arrays and 3-1 composite material.

  6. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
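A bare-bones version of probabilistic point-set registration with a Gaussian mixture can be sketched in 2D. This is an illustrative toy under stated assumptions, not the paper's method: it omits the orientation terms and bifurcation weighting, uses an isotropic kernel of fixed width `sigma`, and all function names are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, src, dst, sigma):
    """Score target points `dst` under a GMM whose components sit on the
    rigidly transformed source points `src` (rotation theta, shift tx, ty)."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    moved = src @ R.T + np.array([tx, ty])
    # Squared distances from every dst point to every mixture centre.
    d2 = ((dst[:, None, :] - moved[None, :, :]) ** 2).sum(-1)
    mix = np.exp(-d2 / (2 * sigma**2)).mean(axis=1) + 1e-12
    return -np.log(mix).sum()

def register_rigid_gmm(src, dst, sigma=0.5):
    """Recover (theta, tx, ty) by maximizing the mixture likelihood."""
    res = minimize(neg_log_likelihood, x0=np.zeros(3),
                   args=(src, dst, sigma), method="Nelder-Mead")
    return res.x
```

Optimizing the likelihood rather than hard point correspondences is what makes this family of methods robust to noise and missing branches, which matters when one point set comes from CTA centerlines and the other from XA reconstructions.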

  7. The optimizations of CGH generation algorithms based on multiple GPUs for 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Yang, Dan; Liu, Juan; Zhang, Yingxi; Li, Xin; Wang, Yongtian

    2016-10-01

    Holographic display has been considered a promising display technology. Currently, the low-speed generation of holograms with large holographic data volumes is one of the crucial bottlenecks for three-dimensional (3D) dynamic holographic display. To solve this problem, an acceleration method based on the look-up-table point-source method is presented for a multiple-GPU computation platform. Computer-generated hologram (CGH) acquisition is sped up by offline file loading and inline calculation optimization, where a pure-phase CGH with gigabytes of data is encoded to record an object with 10 MB of sampling data. Both numerical simulation and optical experiment demonstrate that CGHs with 1920×1080 resolution generated by the proposed method can successfully reconstruct 3D objects with high quality. It is believed that CGHs with huge data volumes can be generated at high speed by the proposed method for 3D dynamic holographic display in the near future.
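The point-source idea behind such CGH generation can be sketched as follows. This is a drastically simplified, hypothetical illustration (all names are assumptions): each object point contributes a spherical-wave fringe exp(i·k·r) to every hologram pixel, and a precomputed table over quantized distances replaces repeated complex-exponential evaluation. A practical LUT needs wavelength-scale distance resolution or per-depth precomputed fringe patterns; the coarse table here only shows the mechanism.

```python
import numpy as np

def cgh_lut(points, nx, ny, pitch=8e-6, wavelength=532e-9, nbins=4096):
    """Toy look-up-table point-source CGH.

    points: (N, 3) array of object-point coordinates in metres (z > 0).
    Returns a pure-phase hologram of shape (ny, nx).
    """
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    # Precompute the fringe table once; every pixel reuses it.
    rmax = np.sqrt(x.max()**2 + y.max()**2) + points[:, 2].max() + 1.0
    table_r = np.linspace(0.0, rmax, nbins)
    table = np.exp(1j * k * table_r)
    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz in points:
        r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
        idx = np.minimum((r / rmax * (nbins - 1)).astype(int), nbins - 1)
        field += table[idx]          # table lookup instead of exp()
    return np.angle(field)           # encode as a pure-phase CGH
```

The same structure is what maps well onto GPUs: the per-pixel distance computation and table lookup are embarrassingly parallel, and the table is shared read-only data.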

  8. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    DTIC Science & Technology

    2014-05-01

    Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization. David N. Ford, 2014. Research context: additive manufacturing (3D printing). Problem: learning-curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  9. Morphometrics, 3D Imaging, and Craniofacial Development

    PubMed Central

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  10. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  11. Full-parallax 3D display from single-shot Kinect capture

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Dorado, Adrián; Saavedra, Genaro; Martínez-Corral, Manuel; Shin, Donghak; Lee, Byung-Gook

    2015-05-01

    We propose the fusion of two concepts that are very successful in the area of 3D imaging and sensing. Kinect technology permits the registration, in real time but with low resolution, of accurate depth maps of large, opaque, diffusing 3D scenes. Our proposal consists of transforming the sampled depth map provided by the Kinect technology into an array of microimages whose position, pitch, and resolution are in good accordance with the characteristics of an integral-imaging monitor. By projecting this information onto such a monitor we are able to produce 3D images with continuous perspective and full parallax.
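The depth-map-to-microimage transformation can be sketched with a toy pinhole-array renderer. This is a hypothetical illustration, not the authors' pipeline: each lenslet is modeled as a pinhole at a grid position, the depth map is back-projected to a point cloud, and each point is splatted into each lenslet's microimage (storing depth as the pixel value; a real system would carry color and match the monitor's pitch).

```python
import numpy as np

def microimages_from_depth(depth, n_lens=8, mi=32, f=1.0, pitch=1.0):
    """Render an n_lens x n_lens array of mi x mi microimages from a
    depth map, via per-lenslet pinhole projection. Depths must be > 0."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project the depth map to a point cloud centred on the array.
    pts = np.stack([xs.ravel() - w / 2, ys.ravel() - h / 2,
                    depth.ravel()], axis=1)
    out = np.zeros((n_lens * mi, n_lens * mi))
    for i in range(n_lens):
        for j in range(n_lens):
            cx = (j - n_lens / 2) * pitch   # this lenslet's pinhole position
            cy = (i - n_lens / 2) * pitch
            # Pinhole projection onto this lenslet's microimage plane.
            u = f * (pts[:, 0] - cx) / pts[:, 2] + mi / 2
            v = f * (pts[:, 1] - cy) / pts[:, 2] + mi / 2
            ok = (u >= 0) & (u < mi) & (v >= 0) & (v < mi)
            out[i * mi + v[ok].astype(int),
                j * mi + u[ok].astype(int)] = pts[ok, 2]
    return out
```

Because each lenslet sees the cloud from a slightly shifted center, adjacent microimages encode parallax, which is exactly what the integral-imaging monitor replays as full-parallax 3D.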

  12. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area, Assateague Island National Seashore (AINS), is located along a 37-mile stretch of Assateague Island on the Eastern Shore, VA. DEM data sets from 1996 through 2000 were created for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and a four-year span (1996-2000). The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms has been developed using ArcView Avenue. The system comprises five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate that further study and comparison of the complex morphological changes that occur, naturally or human-induced, on barrier islands is required.

  13. 3D Whole Heart Imaging for Congenital Heart Disease

    PubMed Central

    Greil, Gerald; Tandon, Animesh (Aashoo); Silva Vieira, Miguel; Hussain, Tarique

    2017-01-01

    Three-dimensional (3D) whole heart techniques form a cornerstone of cardiovascular magnetic resonance imaging in congenital heart disease (CHD). They offer significant advantages over other CHD imaging modalities and techniques: no ionizing radiation; the ability to run free-breathing; ECG-gated dual-phase imaging for accurate measurements and tissue property estimation; and higher signal-to-noise ratio and isotropic voxel resolution for multiplanar reformatting assessment. However, there are limitations, such as potentially long acquisition times with image quality degradation. Recent advances in and current applications of 3D whole heart imaging in CHD are detailed, as well as future directions. PMID:28289674

  14. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively; each has different methods suitable for image-based 3D city modeling. The literature shows that, to date, no such complete comparative study is available on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what each software package can and cannot do. The study concludes that each package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  15. A colour image reproduction framework for 3D colour printing

    NASA Astrophysics Data System (ADS)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full-colour 3D printing are introduced, and a framework for the colour image reproduction process for 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to reproduce colours faithfully in 3D objects. Two key studies, colour reproduction for soft-tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that applying the proposed colour image reproduction framework significantly enhances colour reproduction performance, and that subsequent colour corrections yield a further improvement for 3D printed objects.

  16. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head-position tracking, stereoscopic depth perception, focal-point convergence, the use of a 3D cursor, and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  17. Synthesis and display of dynamic holographic 3D scenes with real-world objects.

    PubMed

    Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea; Näsänen, Risto; Naughton, Thomas J; Ferraro, Pietro

    2010-04-26

    A 3D scene is synthesized combining multiple optically recorded digital holograms of different objects. The novel idea consists of compositing moving 3D objects in a dynamic 3D scene using a process that is analogous to stop-motion video. However, in this case the movie has the exciting attribute that it can be displayed and observed in 3D. We show that 3D dynamic scenes can be projected as an alternative to the complicated and heavy computations needed to generate realistic-looking computer generated holograms. The key tool for creating the dynamic action is based on a new concept that consists of a spatial, adaptive transformation of digital holograms of real-world objects, allowing full control in the manipulation of the object's position and size in a 3D volume with very high depth-of-focus. A pilot experiment to evaluate how viewers perceive depth in a conventional single-view display of the dynamic 3D scene has been performed.

  18. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in structural analysis of deep water structure, salt tectonics and extensional rift basins come from the description of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes.

  19. Digital holography and 3D imaging: introduction to feature issue.

    PubMed

    Kim, Myung K; Hayasaki, Yoshio; Picart, Pascal; Rosen, Joseph

    2013-01-01

    This feature issue of Applied Optics on Digital Holography and 3D Imaging is the sixth of an approximately annual series. Forty-seven papers are presented, covering a wide range of topics in phase-shifting methods, low coherence methods, particle analysis, biomedical imaging, computer-generated holograms, integral imaging, and many others.

  20. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactical animations and movies have been realized as well.

  1. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information that must be stored without any distortion, so our algorithm applies the watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D EIA data are then embedded into the host image. The watermark extraction process is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  2. Assessment of eye fatigue caused by 3D displays based on multimodal measurements.

    PubMed

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-09-04

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. There have been previous studies on eye fatigue related to 3D display use; however, most have employed a limited number of measurement modalities, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to measure BR accurately and in a manner convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures blinks of both eyes; third, changes in FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations among the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are second, third, and fourth highest, respectively.

  3. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  4. DCT and DST Based Image Compression for 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts, where the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at up to a 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, with perceptual quality equivalent to JPEG2000.
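The two-transform pipeline can be sketched directly with SciPy. This is a minimal sketch of steps (1)-(2) plus a simple high-frequency quantization; the paper's arithmetic coding, minimization encoding, and binary-search recovery stages are omitted, and the function names are invented for the example. With orthonormal transforms the row-DCT/column-DST pair is exactly invertible, so quantization is the only lossy step.

```python
import numpy as np
from scipy.fft import dct, idct, dst, idst

def forward(img):
    """(1) 1D DCT along each row, then (2) 1D DST along each column."""
    rows = dct(img, type=2, norm='ortho', axis=1)
    return dst(rows, type=2, norm='ortho', axis=0)

def inverse(coeffs):
    """Undo the column DST, then the row DCT."""
    cols = idst(coeffs, type=2, norm='ortho', axis=0)
    return idct(cols, type=2, norm='ortho', axis=1)

def quantize_high(coeffs, keep=0.25, q=10.0):
    """Keep the top `keep` fraction of rows (low DST frequencies) intact
    and coarsely quantize the remaining high-frequency components."""
    c = coeffs.copy()
    cut = int(c.shape[0] * keep)
    c[cut:, :] = np.round(c[cut:, :] / q) * q
    return c
```

A lossless round trip `inverse(forward(img))` reproduces the input; inserting `quantize_high` between the two trades reconstruction error for the entropy reduction that the downstream encoders exploit.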

  5. 3D Subharmonic Ultrasound Imaging In Vitro and In Vivo

    PubMed Central

    Eisenbrey, John R.; Sridharan, Anush; Machado, Priscilla; Zhao, Hongjia; Halldorsdottir, Valgerdur G.; Dave, Jaydev K.; Liu, Ji-Bin; Park, Suhyun; Dianis, Scott; Wallace, Kirk; Thomenius, Kai E.; Forsberg, F.

    2012-01-01

    Rationale and Objectives While contrast-enhanced ultrasound imaging techniques such as harmonic imaging (HI) have evolved to reduce tissue signals using the nonlinear properties of the contrast agent, levels of background suppression have been mixed. Subharmonic imaging (SHI) offers near-complete tissue suppression by centering the receive bandwidth at half the transmitting frequency. In this work we demonstrate the feasibility of 3D SHI and compare it to 3D HI. Materials and Methods 3D HI and SHI were implemented on a Logiq 9 ultrasound scanner (GE Healthcare, Milwaukee, Wisconsin) with a 4D10L probe. Four-cycle SHI was implemented to transmit at 5.8 MHz and receive at 2.9 MHz, while 2-cycle HI was implemented to transmit at 5 MHz and receive at 10 MHz. The ultrasound contrast agent Definity (Lantheus Medical Imaging, North Billerica, MA) was imaged within a flow phantom and the lower pole of two canine kidneys in both HI and SHI modes. Contrast to tissue ratios (CTR) and rendered images were compared offline. Results SHI resulted in significant improvement in CTR levels relative to HI both in vitro (12.11±0.52 vs. 2.67±0.77, p<0.001) and in vivo (5.74±1.92 vs. 2.40±0.48, p=0.04). Rendered 3D SHI images provided better tissue suppression and a greater overall view of vessels in a flow phantom and canine renal vasculature. Conclusions The successful implementation of SHI in 3D allows imaging of vascular networks over a heterogeneous sample volume and should improve future diagnostic accuracy. Additionally, 3D SHI provides improved CTR values relative to 3D HI. PMID:22464198
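The contrast-to-tissue ratio used for the comparison can be computed from ROI statistics. A minimal sketch follows; the exact estimator used by the authors is not specified, so mean signal power over manually drawn contrast and tissue ROIs is assumed here.

```python
import numpy as np

def contrast_to_tissue_ratio(contrast_roi, tissue_roi):
    """CTR in dB: ratio of mean signal power inside a contrast-filled ROI
    to mean signal power in an adjacent tissue ROI."""
    p_contrast = np.mean(np.asarray(contrast_roi) ** 2)
    p_tissue = np.mean(np.asarray(tissue_roi) ** 2)
    return 10.0 * np.log10(p_contrast / p_tissue)


# A 10x amplitude advantage corresponds to 20 dB.
print(contrast_to_tissue_ratio(np.full(100, 10.0), np.full(100, 1.0)))  # → 20.0
```

Under this definition, SHI's near-complete tissue suppression drives `p_tissue` down and CTR up, which is the effect the reported in vitro and in vivo numbers quantify.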

  6. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  7. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is the long acquisition time associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable compared to 3D MRI, as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for the compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.
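
    The core geometric idea, that a 3D curve can be recovered from a small number of 2D projections, can be sketched with ideal parallel projections. This toy example is only an illustration; the actual method works on MR projection images with compressed sensing and constrained curve fitting.

```python
import numpy as np

def points_from_biplanes(front_xy, top_xz):
    # The frontal view projects each catheter sample to (x, y);
    # the top view projects it to (x, z). The shared x coordinate links
    # samples across the two views; averaging the two x estimates
    # reduces noise.
    pts = [((xf + xt) / 2.0, y, z)
           for (xf, y), (xt, z) in zip(front_xy, top_xz)]
    return np.array(pts)
```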

  8. A new way to characterize autostereoscopic 3D displays using Fourier optics instrument

    NASA Astrophysics Data System (ADS)

    Boher, P.; Leroux, T.; Bignon, T.; Collomb-Patton, V.

    2009-02-01

    Auto-stereoscopic 3D displays presently offer the most attractive solution for entertainment and media consumption. Despite many studies devoted to this type of technology, efficient characterization methods are still missing. We present here an innovative optical method based on high-angular-resolution viewing angle measurements with a Fourier optics instrument. This type of instrument measures the full viewing-angle aperture of the display very rapidly and accurately. The system used in this study has a very high angular resolution, below 0.04°, which is mandatory for this type of characterization. From the luminance or color viewing-angle measurements of the different views of the 3D display, we can predict what will be seen by an observer at any position in front of the display. Quality criteria are derived for both 3D and standard properties at any observer position, and the Qualified Stereo Viewing Space (QSVS) is determined. Using viewing-angle measurements at different locations on the display surface during the observer computation gives a more realistic estimate of QSVS and ensures its validity for the entire display surface. Optimum viewing position, viewing freedom, color shifts and standard parameters are also quantified. Simulation of moiré issues can also be performed, leading to a better understanding of their origin.
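
    Predicting what an observer sees from per-view luminance measurements reduces to an angular lookup: for each screen position, the angle toward the observer's eye selects the dominant view at that angle. The sketch below is a hypothetical, idealized version of that computation (single measurement point, nearest-angle lookup), not the instrument's full characterization pipeline.

```python
import numpy as np

def perceived_views(obs_x, obs_z, screen_xs, angles_deg, view_luminance):
    # view_luminance[v, i] is the measured luminance of view v at
    # angles_deg[i]. For each screen position xs, compute the viewing
    # angle toward the observer and report the brightest view there.
    seen = []
    for xs in screen_xs:
        a = np.degrees(np.arctan2(obs_x - xs, obs_z))
        i = int(np.argmin(np.abs(angles_deg - a)))   # nearest measured angle
        seen.append(int(np.argmax(view_luminance[:, i])))
    return seen
```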

  9. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  10. Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations

    PubMed Central

    Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

    2008-01-01

    We have developed a method and a device, entitled the prostate mechanical imager (PMI), for real-time imaging of the prostate using a transrectal probe equipped with a pressure-sensor array and a position-tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and the in vivo images from patients show that PMI has the potential to become a diagnostic tool that could largely supplant digital rectal examination (DRE) through its higher sensitivity, quantitative record storage, ease of use and inherent low cost. PMID:17024836

  11. Exposing digital image forgeries by 3D reconstruction technology

    NASA Astrophysics Data System (ADS)

    Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei

    2009-11-01

    Digital images are easy to tamper with and edit owing to the availability of powerful image processing and editing software. In particular, when a forgery is produced by photographing a staged scene, no manipulation is made after the picture is taken, so the usual methods, such as digital watermarks and statistical correlation techniques, can hardly detect any traces of tampering. According to the characteristics of such image forgeries, this paper presents a method, based on 3D reconstruction technology, which detects forgeries by examining the dimensional relationships of the objects that appear in the image. The detection method includes three steps. In the first step, the camera parameters of the images are calibrated and each crucial object in the image is chosen and matched. In the second step, the 3D coordinates of each object are calculated by bundle adjustment. In the final step, the dimensional relationships of the objects are analyzed. Experiments were designed to test this detection method; the 3D reconstruction of the genuine scene and the 3D reconstruction of the forged image were computed independently. Test results show that the fabricated content in digital forgeries can be identified intuitively by this method.
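
    A minimal numeric sketch of the consistency check behind this approach: once reconstruction yields each object's depth, a pinhole model converts pixel size to metric size, and objects whose recovered size contradicts their expected real-world size are flagged. The function names, the single-axis model, and the tolerance are all illustrative, not the paper's algorithm.

```python
def real_height(pixel_height, depth, focal_px):
    # Pinhole camera: metric height = pixel height * depth / focal length.
    return pixel_height * depth / focal_px

def flag_inconsistent(objects, expected_heights, focal_px, rel_tol=0.2):
    # objects: (pixel_height, reconstructed_depth) per object.
    # Flag any object whose recovered metric size deviates from its
    # expected real-world size by more than rel_tol -- a cue for splicing.
    return [abs(real_height(ph, d, focal_px) - e) / e > rel_tol
            for (ph, d), e in zip(objects, expected_heights)]
```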

  12. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.
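
    The adjustable baseline described above rests on ordinary stereo triangulation; the standard relation is sketched below (a textbook formula, not code from the paper).

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Triangulation: Z = f * B / d. For a single moving camera the
    # baseline B is the distance travelled between the two frames, so it
    # can be chosen to suit the distance of the object of main interest.
    return focal_px * baseline_m / disparity_px
```

    A longer baseline increases the disparity for a given depth, which is why adapting B to the object distance gives a controllable depth effect.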

  13. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisition. The portability of the system was achieved via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms to target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment were also developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  14. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  16. 3D Image Reconstruction: Determination of Pattern Orientation

    SciTech Connect

    Blankenbecler, Richard

    2003-03-13

    The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can then be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.

  17. Accuracy of 3D Imaging Software in Cephalometric Analysis

    DTIC Science & Technology

    2013-06-21

    … Imaging and Communication in Medicine (DICOM) files into personal computer-based software to enable 3D reconstruction of the craniofacial skeleton. These … tissue profile. CBCT data can be imported as DICOM files into personal computer-based software to provide 3D reconstruction of the craniofacial … been acquired for the three pig models. The CBCT data were exported into DICOM multi-file format. They will be imported into a proprietary …

  18. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning of the minimally invasive procedure, that is, for selection of an appropriate stent-graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all the measurements required for appropriate stent-graft selection. The method proposed in this paper uses the level-set algorithm for deformable models, instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA. This helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
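
    The level-set machinery referred to above can be illustrated with the simplest possible speed law. This sketch evolves the implicit surface under a constant speed only; the image-derived stopping term and the shape prior that the actual method uses are omitted, and the names are illustrative.

```python
import numpy as np

def evolve(phi, speed, dt, steps):
    # Level-set evolution phi_t = speed * |grad phi|.
    # With phi negative inside the contour (signed distance convention),
    # speed > 0 raises phi everywhere and therefore shrinks the region.
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi + dt * speed * np.hypot(gx, gy)
    return phi
```

    The zero level set is never tracked explicitly, which is why topology changes and complex shapes are handled naturally.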

  19. Gastric Contraction Imaging System Using a 3-D Endoscope.

    PubMed

    Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Takeda, Maki; Nishimura, Takahiro; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2014-01-01

    This paper presents a gastric contraction imaging system for assessment of gastric motility using a 3-D endoscope. Gastrointestinal diseases are mainly based on morphological abnormalities. However, gastrointestinal symptoms are sometimes apparent without visible abnormalities. One of the major factors for these diseases is abnormal gastrointestinal motility. For assessment of gastric motility, a gastric motility imaging system is needed. To assess the dynamic motility of the stomach, the proposed system measures 3-D gastric contractions derived from a 3-D profile of the stomach wall obtained with a developed 3-D endoscope. After obtaining contraction waves, their frequency, amplitude, and speed of propagation can be calculated using a Gaussian function. The proposed system was evaluated for 3-D measurements of several objects with known geometries. The results showed that the surface profiles could be obtained with an error of [Formula: see text] of the distance between two different points on images. Subsequently, we evaluated the validity of a prototype system using a wave simulated model. In the experiment, the amplitude and position of waves could be measured with 1-mm accuracy. The present results suggest that the proposed system can measure the speed and amplitude of contractions. This system has low invasiveness and can assess the motility of the stomach wall directly in a 3-D manner. Our method can be used for examination of gastric morphological and functional abnormalities.

  20. See-through multi-view 3D display with parallax barrier

    NASA Astrophysics Data System (ADS)

    Hong, Jong-Young; Lee, Chang-Kun; Park, Soon-gi; Kim, Jonghyun; Cha, Kyung-Hoon; Kang, Ki Hyung; Lee, Byoungho

    2016-03-01

    In this paper, we propose a see-through parallax-barrier-type multi-view display using a transparent liquid crystal display (LCD). The transparency of the LCD is realized by detaching the backlight unit. The number of views in the proposed system is minimized to enlarge the aperture size of the parallax barrier, which determines the transparency. To compensate for the reduced number of viewpoints, an eye-tracking method is applied to provide a large number of views and vertical parallax. Through experiments, a prototype of the see-through autostereoscopic 3D display with parallax barrier is implemented, and the system parameters of transmittance, crosstalk, and barrier structure perception are analyzed.
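
    The trade-off stated above (fewer views, wider slits, higher transparency) follows from barrier geometry. A first-order sketch, ignoring diffraction, electrode losses, and the exact slit fill factor:

```python
def barrier_transmittance(num_views):
    # A parallax barrier opens roughly one slit per group of num_views
    # sub-pixels, so the open fraction of the barrier, and hence its
    # transmittance, is roughly 1 / num_views.
    return 1.0 / num_views
```

    Halving the number of views therefore roughly doubles the see-through transmittance, which is the motivation for supplementing a low view count with eye tracking.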

  1. Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.

    PubMed

    Saveljev, Vladimir; Kim, Sung-Kyu

    2015-10-05

    The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. The two functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used in the minimization of the moiré effect in visual displays, especially in autostereoscopic 3D displays.
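
    Thomae's function, which the abstract links to moiré probability, assigns 1/q to every rational p/q in lowest terms and 0 elsewhere. A small numerical stand-in is sketched below; the tolerance and denominator cap are illustrative assumptions, and this is not the paper's actual probability function.

```python
from fractions import Fraction

def thomae(x, max_den=10**6):
    # Thomae's function: f(p/q) = 1/q for rational p/q in lowest terms,
    # else 0. Numerically: 0 means "not well approximated by a fraction
    # with denominator <= max_den" (an assumption for floats).
    f = Fraction(x).limit_denominator(max_den)
    return 1.0 / f.denominator if abs(float(f) - float(x)) < 1e-12 else 0.0
```

    The intuition for moiré is that grids inclined at angles whose tangent is a low-order rational produce the strongest beat patterns, so a Thomae-like spikiness over angle is plausible.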

  2. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources, as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the field of view (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, our results revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  3. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: shape-space and regression learning, followed by registration. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiographs (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues during registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck, and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278
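
    The "residue → parameter correction" idea can be demonstrated on a toy linear forward model: learn a least-squares operator from sampled parameter perturbations and the residues they produce, then apply it iteratively at registration time. This sketch reproduces only that core idea, not CLARET's shape-space model or multi-scale, coarse-to-fine operators; all names are illustrative.

```python
import numpy as np

def learn_operator(residues, dparams):
    # Least-squares linear map M with  dparams ~= residues @ M,
    # learned from sampled parameter perturbations (dparams) and the
    # projection intensity residues they produce.
    M, *_ = np.linalg.lstsq(residues, dparams, rcond=None)
    return M

def register(residue_fn, M, p0, iters=5):
    # Iteratively subtract the motion estimated from the current residue.
    p = np.array(p0, float)
    for _ in range(iters):
        p = p - residue_fn(p) @ M
    return p
```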

  4. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. These two steps differ because of the different image conditions at the two aortic borders. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta. Such a 3-D model is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm due to its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In segmentation of the outer aortic boundary we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

  5. 3D quantitative analysis of brain SPECT images

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Ceskovic, Ivan; Petrovic, Ratimir; Loncaric, Srecko

    2001-07-01

    The main purpose of this work is to develop a computer-based technique for quantitative analysis of 3-D brain images obtained by single photon emission computed tomography (SPECT). In particular, the volume and location of the ischemic lesion and penumbra are important for early diagnosis and treatment of infarcted regions of the brain. SPECT imaging is typically used as a diagnostic tool to assess the size and location of the ischemic lesion. The segmentation method presented in this paper utilizes a 3-D deformable model to determine the size and location of the regions of interest. The evolution of the model is computed using a level-set implementation of the algorithm. In addition to the 3-D deformable model, the method utilizes edge detection and region growing for pre-processing. Initial experimental results have shown that the method is useful for SPECT image analysis.
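
    The region-growing step used in pre-processing can be sketched as a tolerance-based flood fill. This is an illustrative 4-connected variant with an intensity threshold relative to the seed, not necessarily the paper's exact acceptance criterion.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    # Grow a region from the seed pixel, accepting 4-connected
    # neighbours whose intensity is within tol of the seed value.
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    ref = img[seed]
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```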

  6. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
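
    Once the anatomical references are located, PI reduces to a single angle between two 3D directions: the sacral endplate normal and the line from the endplate center to the midpoint of the femoral-head axis. A geometric sketch with illustrative names:

```python
import numpy as np

def pelvic_incidence_deg(hip_axis_center, endplate_center, endplate_normal):
    # PI: angle between the normal to the sacral endplate and the line
    # joining the endplate center to the midpoint of the femoral-head
    # (hip) axis, computed directly in 3D.
    v = np.asarray(hip_axis_center, float) - np.asarray(endplate_center, float)
    n = np.asarray(endplate_normal, float)
    c = abs(v @ n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```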

  7. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
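
    The texel-projection criterion described above amounts to choosing, per tile, the quad-tree tier whose texels project to roughly one screen pixel. A simplified sketch of that selection for a perspective camera (parameter names are illustrative; StackVis's actual per-tile criterion is more involved):

```python
import math

def choose_tier(distance, fov_y, viewport_h, finest_texel, num_tiers):
    # World-space extent covered by one screen pixel at this distance.
    pixel_world = 2.0 * distance * math.tan(fov_y / 2.0) / viewport_h
    # Pick the coarsest tier whose texels are still no larger than a
    # screen pixel; each tier doubles the texel size of the one below.
    tier = int(math.floor(math.log2(max(pixel_world / finest_texel, 1.0))))
    return max(0, min(tier, num_tiers - 1))
```

    Distant tiles thus load coarse tiers and nearby tiles load fine ones, so on-screen tiles all appear at roughly the same resolution.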

  8. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    A time difference between the left and right images of a time-division 3D display makes a person perceive an alternating vertical parallax when an object moves vertically in a fixed depth plane; the perceived left and right images then do not match, which makes people more prone to visual fatigue. This mismatch cannot be eliminated simply by precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion-compensation method to eliminate this vertical parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort, by comparing the comfort values of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
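
    The size of the motion-induced parallax and the corresponding compensation can be sketched under a simple model in which the left and right fields are displayed half a frame apart. This half-frame offset is an assumption for illustration; the paper derives the velocity model in detail.

```python
def vertical_parallax_px(v_pixels_per_s, frame_rate_hz):
    # In time-division 3D, the left and right fields are shown about half
    # a frame apart, so an object moving vertically at v pixels/s is
    # displaced between the two eyes' images by v * dt pixels.
    dt = 0.5 / frame_rate_hz
    return v_pixels_per_s * dt

def compensated_positions(y, v_pixels_per_s, frame_rate_hz):
    # Motion compensation: shift the two fields by half the induced
    # parallax in opposite directions so both eyes see the object at the
    # same height.
    p = vertical_parallax_px(v_pixels_per_s, frame_rate_hz)
    return y + p / 2.0, y - p / 2.0
```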

  9. Episcopic 3D Imaging Methods: Tools for Researching Gene Function

    PubMed Central

    Weninger, Wolfgang J; Geyer, Stefan H

    2008-01-01

    This work aims at describing episcopic 3D imaging methods and at discussing how these methods can contribute to researching the genetic mechanisms driving embryogenesis and tissue remodelling, and the genesis of pathologies. Several episcopic 3D imaging methods exist. The most advanced are capable of generating high-resolution volume data (voxel sizes from 0.5×0.5×1 µm upwards) of small to large embryos of model organisms and of tissue samples. Besides anatomy and tissue architecture, gene expression and gene-product patterns can be analyzed three-dimensionally in their precise anatomical and histological context with the aid of whole-mount in situ hybridization or whole-mount immunohistochemical staining techniques. Episcopic 3D imaging techniques were and are employed for analyzing the precise morphological phenotype of experimentally malformed, randomly produced, or genetically engineered embryos of biomedical model organisms. It has been shown that episcopic 3D imaging is also suitable for describing the spatial distribution of genes and gene products during embryogenesis, and that it can be used for analyzing tissue samples of adult model animals and humans. The latter offers the possibility of using episcopic 3D imaging techniques for researching the causality and treatment of pathologies or for staging cancer. Such applications, however, are not yet routine and currently only preliminary results are available. We conclude that, although episcopic 3D imaging is in its very beginnings, it represents an upcoming methodology which, in the short term, will become an indispensable tool for researching the genetic regulation of embryo development as well as the genesis of malformations and diseases. PMID:19452045

  10. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has been under way concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises if and how the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  11. Proposed traceable structural resolution protocols for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François

    2009-08-01

    A protocol for determining structural resolution using a potentially traceable reference material is proposed. Where possible, terminology was selected to conform to that published in the ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, which obtain spatial data from the total field of view at once, and 3D range scanners, which accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.

  12. Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements

    PubMed Central

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-01-01

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. There have been previous studies conducted on eye fatigue related to 3D display use; however, most of these have employed a limited number of modalities for measurements, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlation between the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively. PMID:25192315
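
    The before/after comparison described above can be sketched with the standard paired t-statistic and Cohen's d on the difference scores; the scores below are hypothetical, and the paper's full analysis (correlation matrix, per-modality tests) is not reproduced here.

```python
import math

def paired_t_and_cohens_d(before, after):
    """Paired t-statistic and Cohen's d for before/after measurements."""
    n = len(before)
    diffs = [a - b for a, b in zip(after, before)]
    mean_d = sum(diffs) / n
    # sample standard deviation of the paired differences
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t_stat = mean_d / (sd_d / math.sqrt(n))   # paired t-statistic
    cohens_d = mean_d / sd_d                  # effect size of the change
    return t_stat, cohens_d

# hypothetical subjective-evaluation scores before/after viewing a 3D display
before = [2.0, 3.0, 2.5, 3.5, 2.0]
after = [4.0, 4.5, 3.5, 5.0, 3.0]
t_stat, d = paired_t_and_cohens_d(before, after)
```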

  13. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For the past several years, a renaissance of three-dimensional cinema has been underway. Even though stereoscopy has been quite popular over the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success; the same happened again in the 1950s and 1980s. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it became obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of a declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "Hype Cycle". The everyday experience of evolving technology has conditioned consumers. The expectation that any technical improvement will preserve all previous properties cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: the presentation of an additional dimension causes concessions in relevant characteristics (i.e. resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (i.e. subjective discomfort, eye strain, spatial disorientation, feeling of nausea). It will be verified that the 3D apparatus (3D glasses or 3D display) is also the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  14. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems mostly acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown itself to have higher efficiency if there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origin. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of the component images united into the 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
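
    As a minimal sketch of the 3D DCT filtering idea, the following hard-thresholds the 3D DCT coefficients of a small multichannel block; the 2.7-sigma threshold is a common choice in the DCT-denoising literature, and the exact filter parameters of the paper may differ.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (inverse is the transpose)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def denoise_block_3d(block, sigma, beta=2.7):
    """Hard-threshold the 3D DCT coefficients of a (channels, h, w) block."""
    c, h, w = block.shape
    Dc, Dh, Dw = dct_matrix(c), dct_matrix(h), dct_matrix(w)
    # separable forward 3D DCT
    coeff = np.einsum('ai,bj,ck,ijk->abc', Dc, Dh, Dw, block)
    # suppress noise-dominated coefficients
    coeff[np.abs(coeff) < beta * sigma] = 0.0
    # inverse 3D DCT (transpose of each orthonormal factor)
    return np.einsum('ai,bj,ck,abc->ijk', Dc, Dh, Dw, coeff)
```

The joint 3D transform compacts correlated inter-channel structure into few large coefficients, which is why vectorial filtering can outperform component-wise processing when channels are correlated.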

  15. 3D EFT imaging with planar electrode array: Numerical simulation

    NASA Astrophysics Data System (ADS)

    Tuykin, T.; Korjenevsky, A.

    2010-04-01

    Electric field tomography (EFT) is a new modality of quasistatic electromagnetic sounding of conductive media, recently investigated theoretically and realized experimentally. The demonstrated results pertain to 2D imaging with circular or linear arrays of electrodes (and the linear array provides quite poor imaging quality). In many applications 3D imaging is essential or can increase the value of the investigation significantly. In this report we present the first results of numerical simulation of an EFT imaging system with a planar array of electrodes, which allows 3D visualization of the subsurface conductivity distribution. The geometry of the system is similar to the geometry of our EIT breast imaging system, which provides 3D conductivity imaging in the form of a set of cross-sections at different depths from the surface. The EFT principle of operation and reconstruction approach differ from the EIT system significantly, so the results of numerical simulation are important to estimate whether comparable imaging quality is possible with the new contactless method. The EFT forward problem is solved using the finite difference time domain (FDTD) method for an 8×8 square electrode array. The calculated measurement results are then used to reconstruct conductivity distributions by filtered backprojection along electric field lines. The reconstructed images of simple test objects are presented.

  16. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information of adequate quality, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined from the curve that represents the vertebral column and from the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
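
    The spine-based coordinate system rests on a polynomial model of the vertebral column curve; a minimal sketch of that ingredient might look like the following, with synthetic centre points standing in for detected vertebral centroids and the tangent supplying the local orientation of each reformatted cross-section.

```python
import numpy as np

# hypothetical vertebral-body centre coordinates sampled along the axial (z) axis
z = np.linspace(0.0, 100.0, 11)
x = 5.0 + 0.002 * (z - 50.0) ** 2   # a gently curved "spine" in x
y = 10.0 + 0.05 * z                 # slight tilt in y

# model the spine curve with low-order polynomials, as the paper does
px = np.polyfit(z, x, 3)
py = np.polyfit(z, y, 3)

def curve_point(zq):
    """Point on the fitted spine curve at axial position zq."""
    return np.array([np.polyval(px, zq), np.polyval(py, zq), zq])

def curve_tangent(zq):
    """Unit tangent to the curve, used to orient reformatted cross-sections."""
    t = np.array([np.polyval(np.polyder(px), zq),
                  np.polyval(np.polyder(py), zq), 1.0])
    return t / np.linalg.norm(t)
```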

  17. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving potential EVA site survey, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained and discusses its application in lunar surface robotic surveying and scouting.
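
    Each lidar sample is a pulse time-of-flight plus a bearing; turning the accumulated samples into a 3D point cloud is a direct spherical-to-Cartesian conversion. The axis convention below (x forward, y left, z up) is an assumption for illustration, not the ILRIS-3D's native frame.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(t_seconds):
    """Pulse time-of-flight to range: the pulse travels out and back, so r = c*t/2."""
    return C * t_seconds / 2.0

def lidar_to_xyz(rng, azimuth, elevation):
    """Convert one range/bearing sample (angles in radians) to Cartesian coordinates."""
    x = rng * math.cos(elevation) * math.cos(azimuth)
    y = rng * math.cos(elevation) * math.sin(azimuth)
    z = rng * math.sin(elevation)
    return x, y, z
```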

  18. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we varied the lighting conditions and the angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  19. Full-color 3D display using binary phase modulation and speckle reduction

    NASA Astrophysics Data System (ADS)

    Matoba, Osamu; Masuda, Kazunobu; Harada, Syo; Nitta, Kouichi

    2016-06-01

    A 3D display system for full-color reconstruction using binary phase modulation is presented. The quality of the reconstructed objects is improved by optimizing the binary phase modulation and accumulating speckle patterns obtained by changing the random phase distributions. The binary phase pattern is optimized by the modified Fresnel ping-pong algorithm. Numerical and experimental demonstrations of full-color reconstruction are presented.

  20. 3D display for enhanced tele-operation and other applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-04-01

    In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  1. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computational complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computational complexity significantly. For a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with the sum of squared differences (SSD) as the similarity measure. The search engine is Powell's conjugate direction method. In this paper, only rigid transforms are used. However, the method can be extended to affine transforms by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registrations if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression.
Evaluation on registration accuracy between pseudo-3D method and true 3D method has
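
    The decomposition into sequential 2D registrations can be illustrated with a toy rigid, shift-only version. SSD is the similarity measure as in the paper, but for brevity an exhaustive integer-shift search stands in for Powell's conjugate direction method, and only two of the three orthogonal views are used.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences similarity measure."""
    return float(np.sum((a - b) ** 2))

def register_2d_shift(fixed, moving, max_shift=5):
    """Best integer (d0, d1) shift of `moving` minimising SSD against `fixed`."""
    best, best_cost = (0, 0), ssd(fixed, moving)
    for d0 in range(-max_shift, max_shift + 1):
        for d1 in range(-max_shift, max_shift + 1):
            cost = ssd(fixed, np.roll(moving, (d0, d1), axis=(0, 1)))
            if cost < best_cost:
                best_cost, best = cost, (d0, d1)
    return best

def pseudo_3d_register(fixed_vol, moving_vol, n_iter=2):
    """Estimate a 3D shift by alternating 2D registrations of orthogonal mid-slices."""
    shift = [0, 0, 0]
    zc, _, xc = (s // 2 for s in fixed_vol.shape)
    for _ in range(n_iter):
        mov = np.roll(moving_vol, tuple(shift), axis=(0, 1, 2))
        # sagittal-like view constrains the z and y components
        dz, dy = register_2d_shift(fixed_vol[:, :, xc], mov[:, :, xc])
        shift = [shift[0] + dz, shift[1] + dy, shift[2]]
        mov = np.roll(moving_vol, tuple(shift), axis=(0, 1, 2))
        # transaxial view constrains the y and x components
        dy2, dx = register_2d_shift(fixed_vol[zc, :, :], mov[zc, :, :])
        shift = [shift[0], shift[1] + dy2, shift[2] + dx]
    return tuple(shift)
```

Each 2D search covers only two parameters and one slice, which is the source of the computational savings the abstract describes.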

  2. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth's surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models in terms of structural objects related to physical processes requires a priori knowledge and expert analysis, which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique, using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge of objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions, in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions.
As expected, the 3-D histogram of the real data was

  3. Noninvasive computational imaging of cardiac electrophysiology for 3-D infarct.

    PubMed

    Wang, Linwei; Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng

    2011-04-01

    Myocardial infarction (MI) creates electrophysiologically altered substrates that are responsible for ventricular arrhythmias, such as tachycardia and fibrillation. The presence, size, location, and composition of infarct scar bear significant prognostic and therapeutic implications for individual subjects. We have developed a statistical physiological model-constrained framework that uses noninvasive body-surface-potential (BSP) data and tomographic images to estimate subject-specific transmembrane-potential (TMP) dynamics inside the 3-D myocardium. In this paper, we adapt this framework for the purpose of noninvasive imaging, detection, and quantification of 3-D scar mass for post-MI patients: the framework requires no prior knowledge of MI and converges to final subject-specific TMP estimates after several passes of estimation with intermediate feedback; based on the primary features of the estimated spatiotemporal TMP dynamics, we provide 3-D imaging of scar tissue and quantitative evaluation of scar location and extent. Phantom experiments were performed on a computational model of realistic heart-torso geometry, considering 87 transmural infarct scars of different sizes and locations inside the myocardium, and 12 compact infarct scars (extent between 10% and 30%) at different transmural depths. Real-data experiments were carried out on BSP and magnetic resonance imaging (MRI) data from four post-MI patients, validated by gold standards and existing results. This framework shows the unique advantage of noninvasive, quantitative, computational imaging of subject-specific TMP dynamics and infarct mass of the 3-D myocardium, with the potential to reflect details in the spatial structure and tissue composition/heterogeneity of 3-D infarct scar.

  4. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
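
    The core geometric step, applying Snell's law in 3D along a traced propagation path, can be sketched with the standard vector form of refraction; this is a generic formulation, not the authors' exact implementation.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (Snell's law in 3D).

    n points against the incident ray (d . n < 0); n1/n2 are the refractive
    indices of the incident/transmitting media. Returns None on total
    internal reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    r = n1 / n2
    cos_i = -float(np.dot(n, d))
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    t = r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n
    return t / np.linalg.norm(t)
```

Iterating this at the two interfaces of the planar skull model yields the corrected propagation path from which beamforming delays are recomputed.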

  5. 3D Imaging of Density Gradients Using Plenoptic BOS

    NASA Astrophysics Data System (ADS)

    Klemkowsky, Jenna; Clifford, Chris; Fahringer, Timothy; Thurow, Brian

    2016-11-01

    The combination of background oriented schlieren (BOS) and a plenoptic camera, termed Plenoptic BOS, is explored through two proof-of-concept experiments. The motivation of this work is to provide a 3D technique capable of observing density disturbances. BOS uses the relationship between density and refractive-index gradients to observe an apparent shift in a patterned background through image comparison. Conventional BOS systems acquire a single line-of-sight measurement and require complex configurations to obtain 3D measurements, which are not always compatible with experimental facilities. Plenoptic BOS exploits the plenoptic camera's ability to generate multiple perspective views and refocused images from a single raw plenoptic image during post-processing. Using such capabilities with regard to BOS provides multiple line-of-sight measurements of density disturbances, which can be collectively used to generate refocused BOS images. Such refocused images allow the position of density disturbances to be qualitatively and quantitatively determined: the image that provides the sharpest density gradient signature corresponds to a specific depth. These results offer motivation to advance Plenoptic BOS with the ultimate goal of reconstructing a 3D density field.

  6. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  7. 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation.

    PubMed

    Yeom, Han-Ju; Kim, Hee-Jae; Kim, Seong-Bok; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Park, Jae-Hyeung

    2015-12-14

    We propose a bar-type three-dimensional holographic head mounted display using two holographic optical elements. Conventional stereoscopic head mounted displays may suffer from eye fatigue because the images presented to each eye are two-dimensional ones, which causes a mismatch between the accommodation and vergence responses of the eye. The proposed holographic head mounted display delivers three-dimensional holographic images to each eye, removing the eye fatigue problem. In this paper, we discuss the configuration of the bar-type waveguide head mounted display and analyze the aberration caused by the non-symmetric diffraction angles of the holographic optical elements, which are used as input and output couplers. Pre-distortion of the hologram is also proposed to compensate for the aberration. The experimental results show that the proposed head mounted display can present three-dimensional see-through holographic images to each eye with correct focus cues.

  8. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or the automotive sector (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies at a frame rate of up to 50 frames/s over distances ranging from 10 cm up to 7.5 m.
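
    In iTOF, the measured phase delay of the modulated light maps linearly to distance, d = c·φ/(4πf). The 20 MHz modulation frequency below is an assumption chosen for illustration; it yields an unambiguous range of about the 7.5 m quoted above.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad, mod_freq_hz):
    """Indirect time-of-flight: distance from the phase delay of modulated light."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum distance before the phase wraps: c / (2 * f)."""
    return C / (2.0 * mod_freq_hz)
```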

  9. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization-based stereoscopic 3D displays using multispectral Fourier optics viewing angle measurements. Full polarization analysis of the light emitted by the display over the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase-shift filter within the display structure. In addition, a strong spectral dependence of the ellipticity and degree of polarization is observed. These features come from the strong spectral dependence of the phase-shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two filters of the glasses, the resulting luminance across each filter is computed for the left and right eye views. Monocular contrasts for each eye and binocular contrasts are computed in observer space, and Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as for auto-stereoscopic 3D displays, allowing direct comparison of performance.
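
    The reported quantities, ellipticity and degree of polarization at each wavelength and viewing direction, derive from the measured Stokes vector; a minimal sketch of those two standard derivations (not the instrument's actual processing chain) is:

```python
import math

def polarization_metrics(stokes):
    """Degree of polarization and ellipticity angle from a Stokes vector (S0, S1, S2, S3)."""
    s0, s1, s2, s3 = stokes
    polarized = math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2)   # fully polarized part
    dop = polarized / s0                                 # degree of polarization
    # ellipticity angle chi: sin(2*chi) = S3 / |polarized part|
    chi = 0.5 * math.asin(s3 / polarized) if polarized > 0 else 0.0
    return dop, chi
```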

  10. Characterizing the effects of droplines on target acquisition performance on a 3-D perspective display

    NASA Technical Reports Server (NTRS)

    Liao, Min-Ju; Johnson, Walter W.

    2004-01-01

    The present study investigated the effects of droplines on target acquisition performance on a 3-D perspective display in which participants were required to move a cursor into a target cube as quickly as possible. Participants' performance and coordination strategies were characterized using both Fitts' law and acquisition patterns of the 3 viewer-centered target display dimensions (azimuth, elevation, and range). Participants' movement trajectories were recorded and used to determine movement times for acquisitions of the entire target and of each of its display dimensions. The goodness of fit of the data to a modified Fitts function varied widely among participants, and the presence of droplines did not have observable impacts on the goodness of fit. However, droplines helped participants navigate via straighter paths and particularly benefited range dimension acquisition. A general preference for visually overlapping the target with the cursor prior to capturing the target was found. Potential applications of this research include the design of interactive 3-D perspective displays in which fast and accurate selection and manipulation of content residing at multiple ranges may be a challenge.
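
    The Fitts' law fit referred to above follows the standard form MT = a + b·ID. The Shannon formulation of the index of difficulty is shown below with illustrative coefficients; the paper's modified function and its fitted per-participant values are not reproduced here.

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.15):
    """Predicted movement time (s) under the Shannon form of Fitts' law.

    MT = a + b * log2(D / W + 1); a and b are illustrative, normally
    fitted per participant from observed movement times.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)
    return a + b * index_of_difficulty

mt_near = fitts_mt(10.0, 2.0)   # small index of difficulty
mt_far = fitts_mt(40.0, 2.0)    # larger distance, same target width
```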

  11. Digital holography particle image velocimetry for the measurement of 3D t-3c flows

    NASA Astrophysics Data System (ADS)

    Shen, Gongxin; Wei, Runjie

    2005-10-01

    In this paper a digital in-line holographic recording and reconstruction system was set up and used for particle image velocimetry of 3D t-3c flows (three-component (3c) velocity vector field measurements in a three-dimensional (3D) space with time history (t)), making up a new full-flow-field experimental technique: digital holographic particle image velocimetry (DHPIV). The traditional holographic film was replaced by a CCD chip that instantaneously records the interference fringes directly, without darkroom processing, and virtual image slices at different positions were reconstructed by computation from the digital holographic image using the Fresnel-Kirchhoff integral method. A complex-field signal filter (an analyzing image calculated from its intensity and phase, i.e. the real and imaginary parts in a fast Fourier transform (FFT)) was also applied in image reconstruction to achieve a thin focus depth of the image field, which strongly affects the vertical velocity component resolution. Using frame-straddling CCD techniques, the 3c velocity vector was computed by 3D cross-correlation through space interrogation block matching across the reconstructed image slices with the digital complex-field signal filter. The 3D-3c velocity field (about 20 000 vectors), 3D streamline and 3D vorticity fields, and time evolution movies (30 fields/s) for the 3D t-3c flows were then displayed from the experimental measurements using this DHPIV method.
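
    Reconstructing image slices at chosen depths from the recorded hologram amounts to numerically propagating the optical field. A compact stand-in for the Fresnel-Kirchhoff computation is the angular-spectrum method sketched below; the wavelength, pixel pitch, and depth are illustrative, not the paper's parameters.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the angular-spectrum method.

    Used here as a sketch of refocusing a digital in-line hologram to a
    chosen depth slice; field is a square complex array with pixel pitch dx.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Scanning z and measuring local sharpness in each reconstructed slice is what makes a thin focal depth (and hence depth discrimination) possible.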

  12. The azimuth projection for the display of 3-D EEG data.

    PubMed

    Wu, Dan; Yao, Dezhong

    2007-12-01

    The electroencephalogram (EEG) is a scalp record of the neural electric activity of the brain. There are many ways to display EEG data, such as on a projective plane or on a realistic head surface. In this work, one of the atlas projection methods, the azimuth conformal projection, was tested and recommended as a new way of producing a planar EEG display. The method's details are given and numerically compared with the usual projective-plane display. The results indicate that the azimuth projection has many advantages: the transform is simple and convenient, and it preserves all the information, showing everything in the 3-D data within a projective plane without distinct shape change. It can therefore help to analyze the data effectively.
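
    The abstract does not spell out the conformal projection itself; the simpler azimuthal equidistant variant below illustrates the general idea of flattening 3-D scalp positions onto a plane while keeping the radial information:

```python
import math

def azimuth_project(x, y, z):
    """Azimuthal equidistant projection of a point on a unit sphere onto the
    plane tangent at the vertex (0, 0, 1): the planar radius equals the arc
    length from the vertex, so no electrode is dropped or folded over."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)   # polar angle measured from the vertex
    phi = math.atan2(y, x)     # azimuth around the vertical axis
    return theta * math.cos(phi), theta * math.sin(phi)
```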

  13. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves (MMW) are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here instead employs the chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel of the GDD FPA yields the usual 2D image, while the value of the intermediate frequency (IF) yields the range information at each pixel, enabling 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
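
    In a chirp (FMCW) radar, the range at each pixel follows from the beat frequency between the transmitted and received chirps; a one-line sketch of that textbook relation (the parameter values in the test are illustrative, not the system's actual specifications):

```python
def range_from_if(f_if, chirp_bandwidth, chirp_duration, c=3e8):
    """Range from the intermediate (beat) frequency of an FMCW chirp radar:
    R = c * f_if / (2 * S), where S = bandwidth / duration is the chirp
    slope in Hz/s."""
    slope = chirp_bandwidth / chirp_duration
    return c * f_if / (2 * slope)
```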

  14. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms', or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment, is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing, would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  15. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  16. 3D imaging of the mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Faivre, Michael; Moreels, Guy; Clairemidi, Jacques; Mougin-Sisini, Davy; Meriwether, John W.; Lehmacher, Gerald A.; Vidal, Erick; Veliz, Oskar

    A new and original stereo-imaging method is introduced to measure the altitude of the OH airglow layer and provide a 3D map of the altitude of the layer centroid. Near-IR photographs of the layer are taken at two sites 645 km apart. Each photograph is processed to invert the perspective effect and provide a satellite-type view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient. This method is suitable for obtaining 3D representations of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12° 09' 08.2" S, 75° 33' 49.3" W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16° 33' 17.6" S, 71° 39' 59.4" W, altitude 2330 m) close to Arequipa. 3D maps of the layer surface are retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 87.1 km on July 26 and 89.5 km on July 28. Comparable wavy relief features appear in both the 3D and the intensity maps.
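
    The point-matching step rests on the normalized cross-correlation coefficient; a minimal sketch of that standard measure:

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    """Normalized cross-correlation coefficient of two equally sized image
    patches: 1.0 for patches identical up to brightness and contrast,
    near 0 for unrelated patches, -1.0 for inverted ones."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)
```

Because the coefficient is invariant to brightness and contrast changes, it remains usable on the low-contrast airglow structures the abstract mentions.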

  17. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three-dimensional (3D), and therefore better understood if viewed in 3D. While 3D models can easily be created, manipulated, and viewed from all sides on a computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging, as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience, but this feeling is desirable for better examination and understanding of the structure under consideration in 3D. One of the very few possibilities to generate real 3D images that work on a 2D display is to use so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to ensure that each image is seen by only one eye, which together leads to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope, and nowadays petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image that does not require a high-tech viewing device is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively, which are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
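
    The red-cyan overlay described above reduces to a channel swap between the two views; a minimal sketch (plain channel selection, ignoring the color-preservation refinements the text mentions):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Build a red-cyan anaglyph from two H x W x 3 views of the same scene:
    the red channel comes from the left image, the green and blue (cyan)
    channels from the right image, for viewing with red-cyan glasses."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]   # red   <- left eye
    anaglyph[..., 1] = right_rgb[..., 1]  # green <- right eye
    anaglyph[..., 2] = right_rgb[..., 2]  # blue  <- right eye
    return anaglyph
```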

  18. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea of the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial preconfiguration by the user is necessary: the user performs a rough preconfiguration of both prosthesis models so that the subsequent fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
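
    The iterative fine matching described above (adjusting each of the 6 pose parameters by a minimal amount until the matching function peaks) can be sketched as a greedy hill climb; the paper's actual optimizer is gradient-based, so treat this only as an illustration of the loop and stopping rule:

```python
def hill_climb(score, params, step=0.01, max_iter=1000):
    """Greedy fine matching: nudge each parameter (e.g. 3 rotations and
    3 translations) by +/- step, keep any change that raises the matching
    score, and stop when no single nudge helps."""
    params = list(params)
    best = score(params)
    for _ in range(max_iter):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                s = score(trial)
                if s > best:
                    best, params, improved = s, trial, True
        if not improved:
            break
    return params, best
```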

  19. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As the clinical application grows, there is a rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we proposed a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degree of freedom, and cost. We designed a sliding track with a linear position sensor attached, and it transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.

  20. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as widely known from photogrammetry and computer vision. The objectives of these methods differ, more or less, from high-precision, well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses the above parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  1. Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

    This paper develops dynamic visual image modeling for 3D synthetic scenes using dynamic multichannel binocular visual images carried over a mobile self-organizing network. Technologies for 3D modeling of synthetic scenes have been widely used in many industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, and to use the advantages of mobile networks in rural areas to further improve existing mobile network information services and provide personalized information services. The goal of the display is a faithful representation of the synthetic scene. Using low-power dynamic visual monitors and temperature/humidity or GPS sensors installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, a 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D imaging based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can gather large amounts of video from places that are unsuitable for human observation or impossible to reach, and accurately synthesize a 3D scene. This will play a great role in promoting the application of the technique in agriculture.

  2. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
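
    For a damped linearized inversion, the model resolution matrix discussed above can be formed directly from the Jacobian; a small sketch under the usual Tikhonov-regularized least-squares assumption (the paper's conjugate-gradient estimation of individual columns is a separate matter):

```python
import numpy as np

def model_resolution(J, damping=1e-2):
    """Model resolution matrix R = (J^T J + eps I)^(-1) J^T J for a damped
    least-squares inversion with Jacobian J. Each column of R shows how a
    unit perturbation of one model parameter is smeared across the image;
    R = I would mean perfect resolution."""
    JtJ = J.T @ J
    n = JtJ.shape[0]
    return np.linalg.solve(JtJ + damping * np.eye(n), JtJ)
```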

  3. Validation of 3D ultrasound: CT registration of prostate images

    NASA Astrophysics Data System (ADS)

    Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

    2003-05-01

    Around the world, 20% of men are expected to develop prostate cancer at some time in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure, and for the safe delivery of that therapy imaging is critically important. In cases where a CT device is available, combining the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans have to be registered and fused into a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes and 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging, and offers a multi-modality imaging environment for further target and anatomy delineation. The tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.

  4. High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display.

    PubMed

    Chang, Yu-Cheng; Jen, Tai-Hsiang; Ting, Chih-Hung; Huang, Yi-Pai

    2014-02-10

    A 2D/3D switchable and rotatable autostereoscopic display using a high-resistance liquid-crystal (Hi-R LC) lens array is investigated in this paper. Using high-resistance layers in an LC cell, a gradient electric-field distribution can be formed, which can provide a better lens-like shape of the refractive-index distribution. The advantages of the Hi-R LC lens array are its 2D/3D switchability, rotatability (in the horizontal and vertical directions), low driving voltage (~2 volts) and fast response (~0.6 second). In addition, the Hi-R LC lens array requires only a very simple fabrication process.

  5. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  6. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and uses 3DsMAX software as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene is faithful to reality and that its accuracy meets the needs of 3D scene construction.

  7. Sound localization with head movement: implications for 3-d audio displays

    PubMed Central

    McAnally, Ken I.; Martin, Russell L.

    2014-01-01

    Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increasing azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. The implications for 3-d audio displays are that the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy. PMID:25161605

  8. Joint calibration of 3D resist image and CDSEM

    NASA Astrophysics Data System (ADS)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction (OPC) model evaluates the resist image at a specific depth within the photoresist and then extracts the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal at which depth the CD is obtained. This depth inconsistency between modeling and SEM makes model calibration difficult for low-k1 images. In this paper, the vertical resist profile is obtained by modifying the model from a planar (2D) to a quasi-3D approach and comparing the CD from this new model with the SEM CD. In this quasi-3D model, photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of the new model is studied and found to be better than that of the 2D model.

  9. Digital acquisition system for high-speed 3-D imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji

    1997-11-01

    High-speed digital three-dimensional (3-D) imagery is possible using multiple independent charge-coupled device (CCD) cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras, providing versatility in configuration and image acquisition. By aligning the cameras along nearly coincident lines of sight, a sixteen-frame two-dimensional (2-D) sequence can be captured. The delays can be individually adjusted to yield a greater number of acquired frames during the more rapid segments of the event. Additionally, individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. An alternative alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images may be captured. In the first alignment scheme the camera lines of sight cannot be made precisely coincident, so representation of the data as a monocular sequence raises the issue of registering the independent camera coordinates with the real scene. This issue arises more significantly when using the stereo-pair method to reconstruct quantitative 3-D spatial information about the event as a function of time. The principal development here is the derivation and evaluation of a solution transform, and its inverse, for the digital data, which yields a 3-D spatial mapping as a function of time.
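
    The quantitative 3-D reconstruction from triggered stereo pairs ultimately rests on triangulation; for a rectified pair, depth follows from disparity by the textbook relation below (a stand-in for, not a reproduction of, the paper's full solution transform):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a matched stereo point: Z = f * B / d, with focal length f
    in pixels, camera baseline B in metres, and disparity d in pixels."""
    return focal_length_px * baseline_m / disparity_px
```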

  10. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and for speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from the tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information into an estimate of the 3D whole-tongue motion. Experimental results show that the combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  11. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  12. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

    3-D optical fluorescence microscopy has become an efficient tool for volumetric investigation of living biological samples. Using the optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system's optical transfer function and non-optimal experimental conditions, acquired raw data usually suffer from distortions. In order to carry out biological analysis, raw data have to be restored by deconvolution. System identification via the point-spread function is useful for obtaining knowledge of the actual system and experimental parameters, which is necessary to restore the raw data; it is furthermore helpful in refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method applied to cytogenetics is presented.
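
    The abstract does not say which deconvolution VIEW3D applies; Richardson-Lucy is one common PSF-based choice, sketched here with FFT convolution purely as an illustration:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=20, eps=1e-12):
    """Richardson-Lucy deconvolution of a 2-D or 3-D image given the
    point-spread function (psf, centred in its array). Convolutions are
    done in the Fourier domain; eps guards against division by zero."""
    psf = psf / psf.sum()
    otf = np.fft.rfftn(np.fft.ifftshift(psf), s=image.shape)
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.fft.irfftn(np.fft.rfftn(estimate) * otf, s=image.shape)
        ratio = image / (blurred + eps)
        correction = np.fft.irfftn(np.fft.rfftn(ratio) * np.conj(otf),
                                   s=image.shape)
        estimate = estimate * correction
    return estimate
```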

  13. Automated spatial alignment of 3D torso images.

    PubMed

    Bose, Arijit; Shah, Shishir K; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2011-01-01

    This paper describes an algorithm for automated spatial alignment of three-dimensional (3D) surface images in order to achieve a pre-defined orientation. Surface images of the torso are acquired from breast cancer patients undergoing reconstructive surgery to facilitate objective evaluation of breast morphology pre-operatively (for treatment planning) and/or post-operatively (for outcome assessment). Based on the viewing angle of the multiple cameras used for stereophotography, the orientation of the acquired torso in the images may vary from the normal upright position. Consequently, when translating this data into a standard 3D framework for visualization and analysis, the co-ordinate geometry differs from the upright position making robust and standardized comparison of images impractical. Moreover, manual manipulation and navigation of images to the desired upright position is subject to user bias. Automating the process of alignment and orientation removes operator bias and permits robust and repeatable adjustment of surface images to a pre-defined or desired spatial geometry.
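
As one hypothetical illustration of automated orientation normalization, a point cloud can be rotated so that its principal axes coincide with the coordinate axes; the published algorithm for torso images may use different conventions and anatomical constraints, so this is a generic sketch rather than the paper's method.

```python
import numpy as np

def align_to_principal_axes(points):
    """Rotate a 3D point cloud so its principal axes coincide with x, y, z."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Right singular vectors of the centered cloud are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    rotation = vt  # rows are the principal axes
    if np.linalg.det(rotation) < 0:  # keep a proper rotation (no reflection)
        rotation[-1] *= -1
    return centered @ rotation.T

rng = np.random.default_rng(0)
# Elongated synthetic cloud with an arbitrary aspect ratio.
cloud = rng.normal(size=(200, 3)) * np.array([5.0, 2.0, 0.5])
aligned = align_to_principal_axes(cloud)
# After alignment the covariance matrix is (approximately) diagonal.
```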

  14. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  15. On the Uncertain Future of the Volumetric 3D Display Paradigm

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2017-06-01

    Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.

  16. Adipose tissue-derived stem cells display a proangiogenic phenotype on 3D scaffolds.

    PubMed

    Neofytou, Evgenios A; Chang, Edwin; Patlola, Bhagat; Joubert, Lydia-Marie; Rajadas, Jayakumar; Gambhir, Sanjiv S; Cheng, Zhen; Robbins, Robert C; Beygui, Ramin E

    2011-09-01

    Ischemic heart disease is the leading cause of death worldwide. Recent studies suggest that adipose tissue-derived stem cells (ASCs) can be used as a potential source for cardiovascular tissue engineering due to their ability to differentiate along the cardiovascular lineage and to adopt a proangiogenic phenotype. To better understand ASC biology, we used a novel 3D culture device. Proliferation, migration, and vessel morphogenesis of ASCs and b.END-3 endothelial cells were significantly enhanced compared to 2D culturing techniques. ASCs were isolated from inguinal fat pads of 6-week-old GFP+/BLI+ mice. Early-passage ASCs (P3-P4), PKH26-labeled murine b.END-3 cells, or a co-culture of ASCs and b.END-3 cells were seeded at a density of 1 × 10⁵ on three different surface configurations: (a) a 2D surface of tissue culture plastic, (b) Matrigel, and (c) a highly porous 3D scaffold fabricated from inert polystyrene. VEGF expression, cell proliferation, and tubulization were assessed using optical microscopy, fluorescence microscopy, 3D confocal microscopy, and SEM imaging (n = 6). Increased VEGF levels were seen in conditioned media harvested from co-cultures of ASCs and b.END-3 on either Matrigel or the 3D matrix. Fluorescence, confocal, SEM, and bioluminescence imaging revealed improved cell proliferation and tubule formation for cells seeded on the 3D polystyrene matrix. Collectively, these data demonstrate that co-culturing ASCs with endothelial cells in a 3D matrix environment enables the generation of prevascularized tissue-engineered constructs. This can potentially help surpass the tissue thickness limitations faced by the tissue engineering community today.

  17. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  18. A hybrid framework for 3D medical image segmentation.

    PubMed

    Chen, Ting; Metaxas, Dimitris

    2005-12-01

    In this paper we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, first we construct a new Gibbs model whose energy function is defined on a high order clique system. The new model includes both region and boundary information during segmentation. Next we improve the original marching cubes method to construct 3D meshes from Gibbs models' output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.
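
The interplay of a region (intensity) term and a boundary-smoothness term in a Gibbs/MRF energy can be illustrated with a toy iterated-conditional-modes update. This sketch is far simpler than the paper's high-order clique model and its coupling with marching cubes and deformable models; the class means and the weight `beta` are illustrative assumptions.

```python
import numpy as np

def icm_segment(volume, mu_bg, mu_fg, beta=0.2, iters=5):
    """Toy region+boundary segmentation by iterated conditional modes (ICM)."""
    # Initial labels from the region (intensity) term alone.
    labels = (np.abs(volume - mu_fg) < np.abs(volume - mu_bg)).astype(int)
    for _ in range(iters):
        # Count foreground neighbours along each axis (6-connectivity, wraps at edges).
        nb = np.zeros_like(labels, dtype=float)
        for ax in range(volume.ndim):
            nb += np.roll(labels, 1, axis=ax) + np.roll(labels, -1, axis=ax)
        # Energy of each label = region term + beta * (disagreeing neighbours).
        e_bg = (volume - mu_bg) ** 2 + beta * nb
        e_fg = (volume - mu_fg) ** 2 + beta * (2 * volume.ndim - nb)
        labels = (e_fg < e_bg).astype(int)
    return labels

rng = np.random.default_rng(2)
vol = 0.1 * rng.normal(size=(8, 8, 8))   # noisy background around 0
vol[2:6, 2:6, 2:6] += 1.0                # bright cubic "object" around 1
seg = icm_segment(vol, mu_bg=0.0, mu_fg=1.0)
```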

  19. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface when it carries a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel⁻¹ at a 1.4 m camera height from the ground. The scanning rate of the camera can be set to a maximum of 5000 lines s⁻¹, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
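
The basic idea of flagging crack pixels as points lying below the fitted lane surface in one transverse depth profile can be sketched as follows. This is a simplified illustration; the threshold value and the baseline-fitting choice are assumptions, and the deployed system's distress algorithms are considerably more elaborate.

```python
import numpy as np

def detect_crack_points(profile, depth_threshold=2.0):
    """Flag candidate crack pixels in one transverse height profile (units e.g. mm)."""
    x = np.arange(len(profile))
    # Rough baseline: fit a line to the upper half of the height values,
    # so deep crack points do not bias the surface estimate.
    keep = profile >= np.median(profile)
    coeffs = np.polyfit(x[keep], profile[keep], 1)
    baseline = np.polyval(coeffs, x)
    # Crack candidates sit more than depth_threshold below the fitted surface.
    return (baseline - profile) > depth_threshold

profile = np.full(100, 50.0)   # flat surface at 50 mm
profile[40:43] -= 6.0          # a 6 mm deep, 3-pixel wide crack
mask = detect_crack_points(profile)
```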

  20. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910® scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the two breast surfaces. Three observers analyzed the precision of the evaluation protocol using 2 dummy models (n = 60) and 10 test subjects (n = 300), tested it clinically on 30 patients (n = 900), and compared it to established 2-D measurements on 23 breast reconstruction patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects, without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, is significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88), and may play a part in objective surgical outcome analysis after incorporation into clinical practice.
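
The stated principle, mirroring the left breast surface onto the right and averaging the surface-to-surface distance, can be sketched with a nearest-neighbour query over point clouds. The clinical protocol includes registration and landmarking steps not reproduced here, and mirroring across the x = 0 plane is an assumption of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_contour_difference(left_pts, right_pts):
    """Mean distance of the mirrored left surface to the right surface."""
    mirrored = np.asarray(left_pts, dtype=float).copy()
    mirrored[:, 0] *= -1.0  # mirror across the sagittal (x = 0) plane
    tree = cKDTree(np.asarray(right_pts, dtype=float))
    dists, _ = tree.query(mirrored)  # nearest right-surface point per left point
    return dists.mean()

rng = np.random.default_rng(3)
right = rng.random((500, 3))
left = right.copy()
left[:, 0] *= -1.0            # a perfectly symmetric "left" surface
d = mean_contour_difference(left, right)   # ~0 for perfect symmetry
```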

  1. Design of user interface in medical imaging: lessons of 3-D application definition

    NASA Astrophysics Data System (ADS)

    Jannin, Pierre; Mevel, G.; Gandon, Yves; Cordonnier, Emmanuel

    1992-05-01

    Modern dedicated image processing workstations and even general-purpose computers offer enhanced user interface capabilities. Hardware management of the user interface allows a fast, easy, and powerful dialogue between man and machine. The application design must take these new possibilities into account in order to make optimal use of the hardware. Physicians are special users in that they need to customize their working environment to carry out specific tasks. Specific medical applications in the area of 3-D display and multimodality imaging need to accommodate the sequential organization of the physician's tasks, access to the various tools (image processing features, 3-D display, environment configuration, etc.), and the powerful dedicated workstations the physician may require. This paper sets out a number of general rules applicable to user interface design and defines the specific features of medical imaging brought into play in the environment we have developed for medical imaging user interface design. Examples in 2-D and 3-D display modes are presented.

  2. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable to successfully match pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the reported version, the algorithm is enhanced by additionally using the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in tasks of automatic navigation of extremely low-flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  3. Underwater 3D Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effect of manually editing the radiometry of images captured at shallow depths and of selecting the parameters for point cloud definition and mesh building in 3D modeling software. Such datasets are usually collected by divers, handled by scientists, and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (the seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (the seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on the two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  4. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
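
The multi-scale minima-tracking idea behind the described two-stage procedure can be loosely sketched as follows; the authors' actual algorithm differs in detail, and the scale values used here are assumptions chosen for illustration.

```python
import numpy as np
from scipy import ndimage

def persistent_minima(surface, scales=(1, 2, 4, 8)):
    """Keep local minima of a height map that persist across several smoothing scales."""
    maps = []
    for s in scales:
        smooth = ndimage.gaussian_filter(surface.astype(float), sigma=s)
        # A pixel is a local minimum if it equals the minimum of its 3x3 neighbourhood.
        is_min = smooth == ndimage.minimum_filter(smooth, size=3)
        maps.append(is_min)
    # Retain only minima present at every scale (the final "map of minima").
    return np.logical_and.reduce(maps)

# Synthetic imprint surface with a single pit at the centre.
y, x = np.mgrid[0:31, 0:31]
surface = np.hypot(x - 15.0, y - 15.0)
mins = persistent_minima(surface)
```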

  5. Quantification of thyroid volume using 3-D ultrasound imaging.

    PubMed

    Kollorz, E K; Hahn, D A; Linke, R; Goecke, T W; Hornegger, J; Kuwert, T

    2008-04-01

    Ultrasound (US) is among the most popular diagnostic techniques today. It is non-invasive, fast, comparably cheap, and does not require ionizing radiation. US is commonly used to examine the size and structure of the thyroid gland. In clinical routine, thyroid imaging is usually performed by means of 2-D US. Conventional approaches for measuring the volume of the thyroid gland or its nodules may therefore be inaccurate due to the lack of 3-D information. This work reports a semi-automatic segmentation approach for the classification and analysis of the thyroid gland based on 3-D US data. The images are scanned in 3-D, pre-processed, and segmented. Several pre-processing methods and an extension of a commonly used geodesic active contour level set formulation are discussed in detail. The results obtained by this approach are compared to manual interactive segmentations by a medical expert in five representative patients. Our work proposes a novel framework for the volumetric quantification of thyroid gland lobes, which may also be extended to other parenchymatous organs.

  6. Spatial orientation in 3-D desktop displays: using rooms for organizing information.

    PubMed

    Colle, Herbert A; Reid, Gary B

    2003-01-01

    Understanding how spatial knowledge is acquired is important for spatial navigation and for improving the design of 3-D perspective interfaces. Configural spatial knowledge of object locations inside rooms is learned rapidly and easily (Colle & Reid, 1998), possibly because rooms afford local viewing in which objects are directly viewed or, alternatively, because of their structural features. The local viewing hypothesis predicts that the layout of objects outside of rooms also should be rapidly acquired when walls are removed and rooms are sufficiently close that participants can directly view and identify objects. It was evaluated using pointing and sketch map measures of configural knowledge with and without walls by varying distance, lighting levels, and observation instructions. Although within-room spatial knowledge was uniformly good, local viewing was not sufficient for improving spatial knowledge of objects in different rooms. Implications for navigation and 3-D interface design are discussed. Actual or potential applications of this research include the design of user interfaces, especially interfaces with 3-D displays.

  7. 3D imaging of biological specimen using MS.

    PubMed

    Fletcher, John S

    2015-01-01

    Imaging MS can provide unique information about the distribution of native and non-native compounds in biological specimens. MALDI MS and secondary ion MS are the two most commonly applied imaging MS techniques and can provide complementary information about a sample. MALDI offers access to high-mass species such as proteins, while secondary ion MS can operate at higher spatial resolution and provide information about lower-mass species, including elemental signals. Imaging MS is not limited to two dimensions, and different approaches have been developed that allow 3D molecular images to be generated of chemicals in whole organs down to single cells. Resolution in the z-dimension is often higher than in x and y, so such analysis offers the potential for probing the distribution of drug molecules and studying drug action by MS with much higher precision, possibly even at the organelle level.

  8. 3D Gabor wavelet based vessel filtering of photoacoustic images.

    PubMed

    Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi

    2016-08-01

    Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of the Gabor wavelet to enhance vasculature while eliminating the noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is carried out for extracting vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image; tubular structures are then classified by eigenvalue decomposition of the local Hessian matrix at each voxel. The algorithm is tested on non-invasive experiments and shows appreciable results in enhancing vasculature in photoacoustic images.

  9. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    Performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the satellite Earth Observing-1 (EO-1) mission using the hyperspectral imager instrument (Hyperion), which have high input SNR, are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities of coefficients being less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly, and the universality of the prediction across channel counts is demonstrated.
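
Hard-thresholding DCT-based denoising of a single block can be sketched as below. The full method analysed in the paper processes overlapping 3D blocks across spatial and spectral dimensions and aggregates the results; the threshold value here is an illustrative assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_hard_threshold(block, threshold):
    """Denoise one (2D or 3D) image block by hard thresholding its DCT coefficients."""
    coeffs = dctn(block, norm='ortho')
    dc = coeffs.flat[0]
    coeffs[np.abs(coeffs) < threshold] = 0.0  # kill small (noise-dominated) coefficients
    coeffs.flat[0] = dc                       # always keep the DC term
    return idctn(coeffs, norm='ortho')

rng = np.random.default_rng(4)
clean = np.full((8, 8, 8), 10.0)
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = dct_hard_threshold(noisy, threshold=1.0)
# The denoised block should be closer to the clean one than the noisy input.
```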

  10. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  11. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
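
The efficiency trick referred to, evaluating a match score at every model position with a single frequency-domain product, can be illustrated with plain intensity correlation. The paper's measure actually compares gradient phase angles rather than raw intensities, which this sketch does not reproduce.

```python
import numpy as np

def fft_match_scores(image, template):
    """Correlation score of a template at every placement, computed via the FFT."""
    padded = np.zeros_like(image, dtype=float)
    th, tw = template.shape
    padded[:th, :tw] = template  # template embedded at the origin
    # Circular cross-correlation: one product in the frequency domain
    # evaluates the score over all shifts simultaneously.
    scores = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real
    return scores  # scores[y, x] ~ match with the template placed at (y, x)

rng = np.random.default_rng(5)
template = rng.random((5, 5)) + 1.0
image = np.zeros((32, 32))
image[10:15, 7:12] = template       # plant the template at (10, 7)
scores = fft_match_scores(image, template)
peak = np.unravel_index(scores.argmax(), scores.shape)  # recovered position
```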

  12. Real-time, high-accuracy 3D imaging and shape measurement.

    PubMed

    Nguyen, Hieu; Nguyen, Dung; Wang, Zhaoyang; Kieu, Hien; Le, Minh

    2015-01-01

    In spite of the recent advances in 3D shape measurement and geometry reconstruction, simultaneously achieving fast-speed and high-accuracy performance remains a big challenge in practice. In this paper, a 3D imaging and shape measurement system is presented to tackle such a challenge. The fringe-projection-profilometry-based system employs a number of advanced approaches, such as: composition of phase-shifted fringe patterns, externally triggered synchronization of system components, generalized system setup, ultrafast phase-unwrapping algorithm, flexible system calibration method, robust gamma correction scheme, multithread computation and processing, and graphics-processing-unit-based image display. Experiments have shown that the proposed system can acquire and display high-quality 3D reconstructed images and/or video stream at a speed of 45 frames per second with relative accuracy of 0.04% or at a reduced speed of 22.5 frames per second with enhanced accuracy of 0.01%. The 3D imaging and shape measurement system shows great promise of satisfying the ever-increasing demands of scientific and engineering applications.
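
A core step of fringe projection profilometry, recovering the wrapped phase from phase-shifted fringe patterns, is the standard four-step formula; the system described adds unwrapping, calibration, gamma correction and GPU-based display on top of this.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe patterns shifted by 90 degrees each.

    For I_n = A + B*cos(phi + (n-1)*pi/2):
      I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi),
    so phi = atan2(I4 - I2, I1 - I3), independent of background A and modulation B.
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check with known phase phi = 0.7 rad.
phi, a, b = 0.7, 5.0, 2.0
i1 = a + b * np.cos(phi)
i2 = a - b * np.sin(phi)
i3 = a - b * np.cos(phi)
i4 = a + b * np.sin(phi)
wrapped = four_step_phase(i1, i2, i3, i4)  # recovers 0.7
```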

  13. Volumetric display system based on three-dimensional scanning of inclined optical image.

    PubMed

    Miyazaki, Daisuke; Shiba, Kensuke; Sotsuka, Koji; Matsushita, Kenji

    2006-12-25

    A volumetric display system based on three-dimensional (3D) scanning of an inclined image is reported. An optical image of a two-dimensional (2D) display, which is a vector-scan display monitor placed obliquely in an optical imaging system, is moved laterally by a galvanometric mirror scanner. Inclined cross-sectional images of a 3D object are displayed on the 2D display in accordance with the position of the image plane to form a 3D image. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision because they are real images formed in a 3D space. Experimental results of volumetric imaging from computed-tomography images and 3D animated images are presented.

  14. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements <2 mm) and precision (mean point-to-plane error <2 mm) at an average resolution of at least 390 points per cm². Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p < 0.001). The choice of object color can influence measurement range and precision. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.
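
The mean point-to-plane error used to characterise precision against flat cardboard targets can be computed by least-squares plane fitting, e.g. via SVD. This is a generic sketch of the metric, not necessarily the authors' exact evaluation procedure.

```python
import numpy as np

def mean_point_to_plane_error(points):
    """Mean absolute distance of measured 3D points from their best-fit plane."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The direction of least variance (last right singular vector) is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return np.mean(np.abs(centered @ normal))

# A perfectly flat target: the error should be ~0.
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
zs = 0.1 * xs + 0.2 * ys + 3.0
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
err = mean_point_to_plane_error(pts)
```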

  15. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well-understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  16. Mitral valve analysis using a novel 3D holographic display: a feasibility study of 3D ultrasound data converted to a holographic screen.

    PubMed

    Beitnes, Jan Otto; Klæboe, Lars Gunnar; Karlsen, Jørn Skaarud; Urheim, Stig

    2015-02-01

    The aim of the present study was to test the feasibility of analyzing 3D ultrasound data on a novel holographic display. An increasing number of minimally invasive procedures for mitral valve repair require more effective visualization to improve patient safety and the speed of procedures. A novel 3D holographic display has been developed and may have the potential to guide interventional cardiac procedures in the near future. Forty patients with degenerative mitral valve disease were analyzed. All had complete 2D transthoracic (TTE) and transoesophageal (TEE) echocardiographic examinations. In addition, 3D TTE of the mitral valve was obtained and the recordings were converted from the echo machine to the holographic screen. Visual inspection of the mitral valve during surgery or TEE served as the gold standard. 240 segments were analyzed by 2 independent observers. A total of 53 segments were prolapsing. The majority involved P2 (31), with the remainder located at A2 (8), A3 (6), P3 (5), P1 (2) and A1 (1). The sensitivity and specificity of the 3D display were 87 and 99 %, respectively, for observer I, and 85 and 97 %, respectively, for observer II. The accuracy and precision were 96.7 and 97.9 %, respectively, for observer I and 94.3 and 88.2 % for observer II, and inter-observer agreement was 0.954 with Cohen's kappa 0.86. We were able to convert 3D ultrasound data to the holographic display. A very high accuracy and precision was shown, demonstrating the feasibility of analyzing 3D echo of the mitral valve on the holographic screen.

  17. Image Descriptors for Displays

    DTIC Science & Technology

    1977-02-01

    information. In Section V of the report, however, we have extended our descriptor for the total channel capacity of a display to include both chrominance and...frequency and for constant chrominance. The quantities nl(w) represent the number of perceivable colors for a given spatial frequency and luminance value...the chrominance contribution to the total channel capacity, we shall utilize a linear model for the distribution of perceived chrominance levels. We

  18. Image Descriptors for Displays

    DTIC Science & Technology

    1975-03-01

    gain an insight into the detailed mechanisms of aliasing, but it does not predict how important aliasing is. Our statistical approach predicts the...undersampled limit has a maximum edge discrimination ability equivalent to an analog display with a flat passband and limiting resolution given by...discrimination ability of the observer is proportional to the statistical average of a quantity that is representative of the perceived information content

  19. Residual lens effects in 2D mode of auto-stereoscopic lenticular-based switchable 2D/3D displays

    NASA Astrophysics Data System (ADS)

    Sluijter, M.; IJzerman, W. L.; de Boer, D. K. G.; de Zwart, S. T.

    2006-04-01

    We discuss residual lens effects in multi-view switchable auto-stereoscopic lenticular-based 2D/3D displays. With the introduction of a switchable lenticular, it is possible to switch between a 2D mode and a 3D mode. The 2D mode displays conventional content, whereas the 3D mode provides the sensation of depth to the viewer. The uniformity of a display in the 2D mode is quantified by the quality parameter modulation depth. In order to reduce the modulation depth in the 2D mode, birefringent lens plates are investigated analytically and numerically, by ray tracing. We can conclude that the modulation depth in the 2D mode can be substantially decreased by using birefringent lens plates with a perfect index match between lens material and lens plate. Birefringent lens plates do not disturb the 3D performance of a switchable 2D/3D display.

  20. 3D Imaging of the OH mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Kouahla, M. N.; Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Vidal, E.; Veliz, O.

    2010-01-01

    A new and original stereo imaging method is introduced to measure the altitude of the OH nightglow layer and provide a 3D perspective map of the altitude of the layer centroid. Near-IR photographs of the OH layer are taken at two sites separated by a 645 km distance. Each photograph is processed in order to provide a satellite view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient (NCC). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12°09′08.2″ S, 75°33′49.3″ W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16°33′17.6″ S, 71°39′59.4″ W, altitude 2272 m) close to Arequipa. 3D maps of the layer surface were retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 86.3 km on July 26. Comparable relief wavy features appear in the 3D and intensity maps. It is shown that the vertical amplitude of the wave system varies as exp(Δz/2H) within the altitude range Δz = 83.5-88.0 km, H being the scale height. The oscillatory kinetic energy at the altitude of the OH layer lies between 3 × 10⁻⁴ and 5.4 × 10⁻⁴ J/m³, which is 2-3 times smaller than the values derived from partial radio wave measurements at 52°N latitude.
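    The pairing step described above hinges on maximizing a normalized cross-correlation coefficient between patches of the two geo-projected views. A minimal sketch of NCC patch matching (the function names and the exhaustive search window are illustrative assumptions, not the authors' code):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(patch, search_img, top_left, search_radius, size):
    """Exhaustively search a window around top_left in search_img for the
    offset that maximizes NCC with the given patch."""
    best, best_off = -1.0, (0, 0)
    r0, c0 = top_left
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            cand = search_img[r:r + size, c:c + size]
            if cand.shape != patch.shape:
                continue
            s = ncc(patch, cand)
            if s > best:
                best, best_off = s, (dr, dc)
    return best_off, best
```

    In practice, a pair is accepted only when the peak NCC is high and well isolated, which is what makes the method robust for low-contrast airglow structures.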

  1. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable seismic imaging code.

  2. Holographic display system for dynamic synthesis of 3D light fields with increased space bandwidth product.

    PubMed

    Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B

    2016-06-27

    We present a new method for the generation of a dynamic wave field with high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs) each having a comparably low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration which is composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLM and the gap between the active areas of multiple SLMs. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated.

  3. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. 
To enable more or all of the original image to be displayed on the

  4. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
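    The mean-subtraction idea can be stated concretely: after the 3D wavelet decomposition, each spatial plane of a spatially-low-pass subband has its mean removed (and retained for exact reconstruction) so the encoder sees zero-mean data. A minimal numpy sketch under that reading (function names are illustrative, not from the original software):

```python
import numpy as np

def subtract_plane_means(subband):
    """Remove the mean of each spatial plane (one plane per spectral
    index, axis 0) of a spatially-low-pass subband. Returns the
    zero-mean data plus the means needed for exact reconstruction."""
    means = subband.mean(axis=(1, 2), keepdims=True)
    return subband - means, means

def restore_plane_means(zero_mean, means):
    """Invert subtract_plane_means during decompression."""
    return zero_mean + means
```

    The stored means are a negligible overhead (one value per spatial plane) compared with the coding gain from feeding zero-mean planes to a 2D-style subband coder.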

  5. 3D geometry-based quantification of colocalizations in multichannel 3D microscopy images of human soft tissue tumors.

    PubMed

    Wörz, Stefan; Sander, Petra; Pfannmöller, Martin; Rieker, Ralf J; Joos, Stefan; Mechtersheimer, Gunhild; Boukamp, Petra; Lichter, Peter; Rohr, Karl

    2010-08-01

    We introduce a new model-based approach for automatic quantification of colocalizations in multichannel 3D microscopy images. The approach uses different 3D parametric intensity models in conjunction with a model fitting scheme to localize and quantify subcellular structures with high accuracy. The central idea is to determine colocalizations between different channels based on the estimated geometry of the subcellular structures as well as to differentiate between different types of colocalizations. A statistical analysis was performed to assess the significance of the determined colocalizations. This approach was used to successfully analyze about 500 three-channel 3D microscopy images of human soft tissue tumors and controls.

  6. Three-dimensional display modes for CT colonography: conventional 3D virtual colonoscopy versus unfolded cube projection.

    PubMed

    Vos, Frans M; van Gelder, Rogier E; Serlie, Iwo W O; Florie, Jasper; Nio, C Yung; Glas, Afina S; Post, Frits H; Truyen, Roel; Gerritsen, Frans A; Stoker, Jaap

    2003-09-01

    The authors compared a conventional two-directional three-dimensional (3D) display for computed tomography (CT) colonography with an alternative method they developed on the basis of time efficiency and surface visibility. With the conventional technique, 3D ante- and retrograde cine loops were obtained (hereafter, conventional 3D). With the alternative method, six projections were obtained at 90 degrees viewing angles (unfolded cube display). Mean evaluation time per patient with the conventional 3D display was significantly longer than that with the unfolded cube display. With the conventional 3D method, 93.8% of the colon surface came into view; with the unfolded cube method, 99.5% of the colon surface came into view. Sensitivity and specificity were not significantly different between the two methods. Agreements between observers were kappa = 0.605 for conventional 3D display and kappa = 0.692 for unfolded cube display. Consequently, the latter method enhances the 3D endoluminal display with improved time efficiency and higher surface visibility.

  7. Real Image Visual Display System

    DTIC Science & Technology

    1992-12-01

    DTI-100M autostereoscopic display...Lenticular screen...the left eye receives the other. The brain then combines the two images into a three-dimensional volume. Autostereoscopic imaging provides separate...a computer screen. Next, several techniques for creating three-dimensional images are presented. These methods focus primarily on autostereoscopic

  8. Three-dimensional integral imaging display system via off-axially distributed image sensing

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Qu, Hongjia; Zhang, Miao; Cho, Myungjin

    2016-10-01

    In this paper, we propose a three-dimensional integral imaging display system based on multiple images recorded by off-axially distributed image sensing. First, the depth map of the 3D objects is extracted from the off-axially recorded multi-perspective 2D images by using a profilometry technique. Then, the elemental image array is computationally synthesized using the extracted depth map based on a ray mapping model. Finally, the 3D images are optically displayed in an integral imaging system. To show the feasibility of the proposed method, optical experiments on 3D objects are carried out and presented in this paper.

  9. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task with various applications in computer vision, for example the automatic control of stone-breaking machines, which perform better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection of almost round stones with high or low texture. Although our experiments focus on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, switched on one at a time, to take four images. We then compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.
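    The core of the segmentation step - differencing the differently lit images and binarizing the result - can be sketched as follows. This is a simplified reading of the method; the threshold and the all-pairs differencing scheme are illustrative assumptions:

```python
import numpy as np

def edge_mask(images, thresh):
    """Binarize pairwise absolute differences of images of the same
    scene taken under different light positions. Edges and strongly
    curved surface points appear where shading changes between lights."""
    diffs = [np.abs(images[i].astype(int) - images[j].astype(int))
             for i in range(len(images)) for j in range(i + 1, len(images))]
    return np.maximum.reduce(diffs) > thresh
```

    With four light positions there are six image pairs; taking the pixelwise maximum difference before thresholding makes the mask insensitive to which particular light pair reveals a given edge.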

  10. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions, and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and a Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is better than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains within a 5 dB range for a wide selection of steering angles. The simulation results may represent a reference guide to the design of spiral sparse array probes for different application fields.
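    The layout principle - elements on a Fermat spiral with a radial density taper - can be sketched as follows. This is an illustrative reconstruction, not the authors' design code: the uniform-density spiral places element k at radius proportional to √k with the golden-angle azimuth increment, and the tapered variant draws radii from a Blackman-weighted radial density via inverse-CDF sampling:

```python
import numpy as np

GOLDEN_ANGLE = np.pi * (3.0 - np.sqrt(5.0))  # ~137.5 degrees

def fermat_spiral(n_elements, aperture_radius):
    """Element (x, y) positions on a Fermat spiral: uniform area density."""
    k = np.arange(n_elements)
    r = aperture_radius * np.sqrt((k + 0.5) / n_elements)
    theta = k * GOLDEN_ANGLE
    return r * np.cos(theta), r * np.sin(theta)

def tapered_spiral(n_elements, aperture_radius, n_grid=4096):
    """Density-tapered Fermat spiral: the radial element density follows
    a Blackman window (denser at the centre), mimicking amplitude
    apodization with uniformly driven elements."""
    rho = np.linspace(0.0, 1.0, n_grid)
    # Decreasing half of a Blackman window, clipped to avoid tiny
    # negative floating-point endpoints.
    w = np.clip(np.blackman(2 * n_grid)[n_grid:], 0.0, None)
    pdf = w * rho                      # annulus area weighting
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    k = np.arange(n_elements)
    u = (k + 0.5) / n_elements
    r = aperture_radius * np.interp(u, cdf, rho)  # inverse-CDF sampling
    theta = k * GOLDEN_ANGLE
    return r * np.cos(theta), r * np.sin(theta)
```

    Because the positioning is deterministic and aperiodic, no optimization run is needed and the side-lobe behaviour is nearly independent of steering angle, which is the property the abstract emphasizes.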

  11. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and building of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact and contactless scanners, the latter being the most used although they are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed according to the 3-dimensional surface of the solid. Using digital image processing, the deformation detected by the camera is analyzed, allowing the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.
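    The laser-line triangulation such a scanner relies on reduces to two steps: locate the stripe in each camera row, then convert its displacement from the flat-background reference into a height. The orthographic model below is a simplified illustration (function names and geometry are assumptions, not the authors' Matlab script):

```python
import numpy as np

def laser_line(img):
    """Column index of the brightest pixel in each row (the laser stripe)."""
    return np.argmax(img, axis=1)

def line_heights(offsets_px, pixel_size_mm, laser_angle_deg):
    """Surface height from the stripe displacement relative to its
    position on a flat background. Simple orthographic sheet-of-light
    model: h = d * p / tan(theta), with theta the angle between the
    laser sheet and the camera's optical axis."""
    d_mm = np.asarray(offsets_px, dtype=float) * pixel_size_mm
    return d_mm / np.tan(np.radians(laser_angle_deg))
```

    Repeating this for each vertical position of the laser, and stacking the resulting height profiles, yields the per-scan point cloud the abstract describes.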

  12. 3-D imaging and illustration of mouse intestinal neurovascular complex.

    PubMed

    Fu, Ya-Yuan; Peng, Shih-Jung; Lin, Hsin-Yao; Pasricha, Pankaj J; Tang, Shiue-Cheng

    2013-01-01

    Because of the dispersed nature of nerves and blood vessels, standard histology cannot provide a global and associated observation of the enteric nervous system (ENS) and vascular network. We prepared transparent mouse intestine and combined vessel painting and three-dimensional (3-D) neurohistology for joint visualization of the ENS and vasculature. Cardiac perfusion of the fluorescent wheat germ agglutinin (vessel painting) was used to label the ileal blood vessels. The pan-neuronal marker PGP9.5, sympathetic neuronal marker tyrosine hydroxylase (TH), serotonin, and glial markers S100B and GFAP were used as the immunostaining targets of neural tissues. The fluorescently labeled specimens were immersed in the optical clearing solution to improve photon penetration for 3-D confocal microscopy. Notably, we simultaneously revealed the ileal microstructure, vasculature, and innervation with micrometer-level resolution. Four examples are given: 1) the morphology of the TH-labeled sympathetic nerves: sparse in epithelium, perivascular at the submucosa, and intraganglionic at myenteric plexus; 2) distinct patterns of the extrinsic perivascular and intrinsic pericryptic innervation at the submucosal-mucosal interface; 3) different associations of serotonin cells with the mucosal neurovascular elements in the villi and crypts; and 4) the periganglionic capillary network at the myenteric plexus and its contact with glial fibers. Our 3-D imaging approach provides a useful tool to simultaneously reveal the nerves and blood vessels in a space continuum for panoramic illustration and analysis of the neurovascular complex to better understand the intestinal physiology and diseases.

  13. Effective classification of 3D image data using partitioning methods

    NASA Astrophysics Data System (ADS)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
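    The first method's recursive dynamic partitioning can be sketched as follows: each 3-D box records its ROI voxel count, and is split into octants unless that count is already discriminative or the box is too small to split further. The `is_discriminative` callback below stands in for the statistical test, which the abstract specifies only abstractly; the octant split and size rule are illustrative assumptions:

```python
import numpy as np

def split_counts(roi, box, min_side, is_discriminative, out):
    """Recursively split a 3-D box into octants, collecting
    (box, ROI voxel count) pairs that become classification attributes.
    box is ((z0, z1), (y0, y1), (x0, x1)) in half-open coordinates."""
    (z0, z1), (y0, y1), (x0, x1) = box
    count = int(roi[z0:z1, y0:y1, x0:x1].sum())
    too_small = min(z1 - z0, y1 - y0, x1 - x0) < 2 * min_side
    if too_small or is_discriminative(count):
        out.append((box, count))
        return
    zm, ym, xm = (z0 + z1) // 2, (y0 + y1) // 2, (x0 + x1) // 2
    for zr in ((z0, zm), (zm, z1)):
        for yr in ((y0, ym), (ym, y1)):
            for xr in ((x0, xm), (xm, x1)):
                split_counts(roi, (zr, yr, xr), min_side, is_discriminative, out)
```

    The counts gathered in `out` are exactly the per-hyper-rectangle attributes that the paper then feeds to a neural network classifier.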

  14. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  15. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in the surgery of the third molars, rendering necessary a careful pre-operative evaluation of their anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between third molars and the mandibular canal using Dental CT Scan, DICOM image acquisition and 3D reconstruction with a dedicated software. From our study we deduce that 3D images are not indispensable, but they can provide very agreeable assistance in the most complicated cases. PMID:23386934

  16. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (micro-) axial tomography is a challenging technique in microscopy which improves quantitative imaging especially in cytogenetic applications by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement of the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition depending on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. 
Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
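    Once corresponding objects have been matched across tilted views, the optimum transformation "computed in a least squares manner based on the coordinates of the centres of gravity" is the classical rigid-registration problem. A standard solution is the Kabsch (SVD) algorithm, sketched below as an illustration; the paper does not name its exact solver:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    matched 3-D centroids src onto dst, via the Kabsch/SVD algorithm.
    src and dst are (N, 3) arrays of corresponding points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

    Because the fit is over many matched centroids, the residuals after applying (R, t) directly expose the alignment precision, mirroring the paper's point that precision can be assessed from the algorithm's output.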

  17. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.

  18. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  19. 3-D Imaging of Partly Concealed Targets by Laser Radar

    DTIC Science & Technology

    2005-10-01

    laser in the green wavelength region was used for illumination...acknowledge Marie Carlsson and Ann Charlotte Gustavsson for their assistance in some of the experiments. 7.0 REFERENCES [1] U. Söderman, S. Ahlberg...SPIE Vol. 3707, pp. 432-448, USA, 1999. [14] D. Letalick, H. Larsson, M. Carlsson, and A.-C. Gustavsson, "Laser sensors for urban warfare," FOI

  20. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission have made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount & fluid filtration system the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS in combination with the 3D dosimeter it was designed for, PREAGETM, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5lp/mm) and a dynamic range of ˜60dB. Flood field uniformity was 10% and stable after 45minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. 
Benchmarking tests showed that the mean 3D gamma passing rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of
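The 3%/3 mm gamma criterion quoted above can be made concrete with a brute-force sketch (illustrative grid spacing and array names; this is not the DLOS analysis code):

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_pct=3.0, dta_mm=3.0, thresh_pct=5.0):
    """Global 3D gamma passing rate, brute-force over a small search
    neighborhood.  ref/evl are dose arrays on the same voxel grid."""
    dmax = ref.max()
    dose_tol = dose_pct / 100.0 * dmax      # global dose-difference criterion
    cutoff = thresh_pct / 100.0 * dmax      # low-dose voxels are ignored
    r = int(np.ceil(dta_mm / min(spacing_mm)))
    offsets = [(i, j, k) for i in range(-r, r + 1)
               for j in range(-r, r + 1) for k in range(-r, r + 1)]
    passed = total = 0
    for p in np.argwhere(ref > cutoff):
        total += 1
        best = np.inf
        for o in offsets:
            q = p + o
            if np.any(q < 0) or np.any(q >= ref.shape):
                continue
            dta2 = sum((o[a] * spacing_mm[a]) ** 2 for a in range(3)) / dta_mm ** 2
            dd2 = ((evl[tuple(q)] - ref[tuple(p)]) / dose_tol) ** 2
            best = min(best, dta2 + dd2)
        if best <= 1.0:
            passed += 1
    return passed / max(total, 1)
```

Production dosimetry tools interpolate the evaluated dose between voxel centers rather than visiting them only, so this sketch slightly underestimates agreement; it is meant only to show the combined dose-difference/distance-to-agreement test behind the quoted passing rates.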

  1. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    NASA Astrophysics Data System (ADS)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden support, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - at the moment in a poor state of conservation - and the provision of metrics to quantify the deformations and damage.
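As an illustration of the kind of metric such multi-temporal monitoring can use, here is a minimal cloud-to-cloud deviation sketch between two scan epochs (hypothetical point data; not the project's actual processing chain):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(ref_pts, new_pts):
    """Distance from each point of the new scan to its nearest neighbor in
    the reference scan: a simple per-point deformation measure."""
    dist, _ = cKDTree(ref_pts).query(new_pts)
    return dist

# Toy example: a flat 5x5 patch that has shifted 0.5 units out of plane.
xx, yy = np.meshgrid(np.arange(5.0), np.arange(5.0))
ref = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(25)])
new = ref + np.array([0.0, 0.0, 0.5])
print(cloud_deviation(ref, new).max())   # 0.5
```

Mapping these per-point distances back onto the painting's surface, epoch by epoch, gives exactly the sort of quantitative deformation map the abstract calls for.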

  2. Automated simulation and evaluation of autostereoscopic multiview 3D display designs by time-sequential and wavelength-selective filter barrier

    NASA Astrophysics Data System (ADS)

    Kuhlmey, Mathias; Jurk, Silvio; Duckstein, Bernd; de la Barré, René

    2015-09-01

    A novel simulation tool has been developed for spatially multiplexed 3D displays. The main purpose of our software is the design of 3D displays with optical image splitters, in particular lenticular grids or wavelength-selective barriers. The interaction of the image splitter with a ray-emitting display was modeled as a spatial light modulator generating the autostereoscopic image representation. Based on this simulation model, the interaction of the optoelectronic devices with the defined spatial planes is described. Time-sequential multiplexing makes it possible to increase the resolution of such 3D displays; for that reason the program was extended with an intermediate data-cumulating component. The simulation program represents a stepwise quasi-static functionality and control of the arrangement. It calculates and renders the entire display ray emission and the luminance distribution at the viewing distance. The complexity of the results increases when wavelength-selective barriers are used. The images visible at the viewer's eye position were determined by simulation after every switching operation of the optical image splitter, and the summation and evaluation of the resulting data is processed in correspondence with the equivalent time sequence. The simulation was further expanded with a complex algorithm for automated search and validation of possible solutions in the multi-dimensional parameter space. For the multiview 3D display design, a combination of ray tracing and 3D rendering was used: the emitted light intensity distribution of each subpixel is evaluated in terms of color, luminance and visible area using different content distributions on the subpixel plane. Analysis of the accumulated data delivers different solutions, distinguished by the evaluation criteria.

  3. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: A limitation of the existing 2D pre-implantation dose planning is that the dose cannot be observed intuitively. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D image-guided brachytherapy planning system conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction was developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with a Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion of MRI and ultrasound images. Applying the least-squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system was validated on eight patients with prostate cancer. The navigation passed the precision measurements in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results. Compared to MC, the presented multi-organ reconstruction method is superior in preserving the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissue. During navigation, surgeons can observe the coordinates of the instruments in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and

  4. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  5. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald Schön, Tobias Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption, and risks to security personnel of a manual inspection. Recently, distinct progress has been made in the reconstruction of projections from a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power becomes steadily cheaper, practical applications of these complex algorithms are foreseeable. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
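The iterative methods the abstract refers to are built on update rules such as Kaczmarz's method, the core of algebraic reconstruction (ART); a toy sketch on a small dense system (illustrative, not the authors' code):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, relax=1.0):
    """Kaczmarz / ART iteration: project the current estimate onto one
    ray equation a_i . x = b_i at a time, sweeping repeatedly over rows."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

Because repeated sweeps converge to a consistent solution even when the system is built from far fewer projection rows than analytic reconstruction would require, updates of this form are what make few-view container CT feasible at the cost of extra computation.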

  6. Imaging Shallow Salt With 3D Refraction Migration

    NASA Astrophysics Data System (ADS)

    Vanschuyver, C. J.; Hilterman, F. J.

    2005-05-01

    In offshore West Africa, numerous salt walls are within 200 m of sea level. Because of the shallowness of these salt walls, reflections from the salt top can be difficult to map, making it impossible to build an accurate velocity model for subsequent pre-stack depth migration. An accurate definition of salt boundaries is critical to any depth model where salt is present. Unfortunately, when a salt body is very shallow, the reflection from the upper interface can be obscured due to large offsets between the source and near receivers, and also due to interference from multiples and other near-surface noise events. A new method is described using 3D migration of the refraction waveforms, which is simplified because of several constraints in the model definition. The azimuth and dip of the refractor are found by imaging with Kirchhoff theory. A Kirchhoff migration is performed in which the traveltime values are adjusted to use the CMP refraction traveltime equation. The sediment and salt velocities are assumed known, so that once the image time is specified, the dip and azimuth of the refraction path can be found. The resulting 3D refraction migrations are in excellent depth agreement with available well control. In addition, the refraction migration time picks of deeper salt events agree with the time picks of the same events on the reflection migration.

  7. 3-D visualization and animation technologies in anatomical imaging.

    PubMed

    McGhee, John

    2010-02-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.

  8. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  9. 3-D Imaging and Simulation for Nephron Sparing Surgical Training.

    PubMed

    Ahmadi, Hamed; Liu, Jen-Jane

    2016-08-01

    Minimally invasive partial nephrectomy (MIPN) is now considered the procedure of choice for small renal masses, largely based on functional advantages over traditional open surgery. Lack of haptic feedback, the need for spatial understanding of tumor borders, and advanced operative techniques to minimize ischemia time or achieve zero-ischemia PN are among the factors that make MIPN a technically demanding operation with a steep learning curve for inexperienced surgeons. Surgical simulation has emerged as a useful training adjunct in residency programs to facilitate the acquisition of these complex operative skills in the setting of restricted work hours and limited operating room time and autonomy. However, the majority of available surgical simulators focus on basic surgical skills, and procedure-specific simulation is needed for optimal surgical training. Advances in 3-dimensional (3-D) imaging have also enhanced the surgeon's ability to localize tumors intraoperatively. This article focuses on recent procedure-specific simulation models for laparoscopic and robotic-assisted PN and on advanced 3-D imaging techniques as part of pre- and, in some cases, intraoperative surgical planning.

  10. Experiments on terahertz 3D scanning microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies on THz coaxial transmission confocal microscopy, but little research on THz dual-axis reflective confocal microscopy has been reported. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with a THz coaxial transmission confocal microscope, the microscope adopted in this paper attains higher axial resolution at the expense of reduced lateral resolution, yielding more satisfactory 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil, and a combined sheet-metal target with three layers, were scanned. The experimental results indicate that the system can resolve the two Chinese characters "Zhong" and "Hua", and the three layers of the combined sheet metal. It can be expected that the microscope can be applied to biology, medicine and other fields in the future owing to its favorable 3D imaging capability.

  11. Cylindrical liquid crystal lenses system for autostereoscopic 2D/3D display

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Wei; Huang, Yi-Pai; Chang, Yu-Cheng; Wang, Po-Hao; Chen, Po-Chuan; Tsai, Chao-Hsu

    2012-06-01

    A liquid crystal lens system that can easily be controlled electrically for autostereoscopic 2D/3D switchable displays is proposed. The high-resistance liquid crystal (HR-LC) lens, which uses fewer control electrodes with a high-resistance layer coated between them, is presented and used in this paper. Compared with a traditional LC lens, the HR-LC lens provides a smooth electric-potential distribution within the LC layer under driving conditions. Hence, the proposed HR-LC lens has lower circuit complexity and low driving voltage, and good optical performance can also be obtained. In addition, when combined with the proposed driving method, called the dual-directional overdriving method, the switching time can be reduced by applying a large voltage to the cell. Consequently, the total switching time can be reduced to around 2 seconds. It is believed that the LC lens system has high potential in the future.

  12. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye for chromatic resolution. Here we provide further support for the technique by analyzing the spectra of the subsampled images.

  13. Abdominal aortic aneurysm imaging with 3-D ultrasound: 3-D-based maximum diameter measurement and volume quantification.

    PubMed

    Long, A; Rouet, L; Debreuve, A; Ardon, R; Barbe, C; Becquemin, J P; Allaire, E

    2013-08-01

    The clinical reliability of 3-D ultrasound imaging (3-DUS) in quantification of abdominal aortic aneurysm (AAA) was evaluated. B-mode and 3-DUS images of AAAs were acquired for 42 patients. AAAs were segmented. A 3-D-based maximum diameter (Max3-D) and partial volume (Vol30) were defined and quantified. Comparisons between 2-D (Max2-D) and 3-D diameters and between orthogonal acquisitions were performed. Intra- and inter-observer reproducibility was evaluated. Intra- and inter-observer coefficients of repeatability (CRs) were less than 5.18 mm for Max3-D. Intra-observer and inter-observer CRs were respectively less than 6.16 and 8.71 mL for Vol30. The mean of normalized errors of Vol30 was around 7%. Correlation between Max2-D and Max3-D was 0.988 (p < 0.0001). Max3-D and Vol30 were not influenced by a probe rotation of 90°. Use of 3-DUS to quantify AAA is a new approach in clinical practice. The present study proposed and evaluated dedicated parameters. Their reproducibility makes the technique clinically reliable.
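The intra- and inter-observer coefficients of repeatability (CRs) quoted above follow the usual Bland-Altman convention, 1.96 times the standard deviation of paired differences; a minimal sketch with hypothetical measurement vectors:

```python
import numpy as np

def coefficient_of_repeatability(m1, m2):
    """Bland-Altman coefficient of repeatability: 1.96 x the standard
    deviation of the differences between paired repeated measurements."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return 1.96 * d.std(ddof=1)
```

Applied to repeated Max3-D or Vol30 readings of the same aneurysms, a CR below the quoted 5.18 mm (or 6.16/8.71 mL) bounds mean that 95% of repeat differences fall within that interval.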

  14. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  15. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860
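Step two above (quantitative validation on synthetic images) can be mimicked in a few lines of scipy: a derivative-based (Laplacian-of-Gaussian) segmenter run on a synthetic blob image with a known object count. All names and parameter values are illustrative, not the paper's algorithm:

```python
import numpy as np
from scipy import ndimage as ndi

def synthetic_nuclei(shape=(64, 64), centers=((16, 16), (16, 48), (48, 32)),
                     sigma=3.0, noise_sd=0.02, seed=0):
    """Synthetic 2D 'nuclei' image: Gaussian blobs plus Gaussian noise."""
    img = np.zeros(shape)
    for c in centers:
        img[c] = 1.0
    img = ndi.gaussian_filter(img, sigma)
    img /= img.max()
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, noise_sd, shape)

def segment_count(img, sigma=3.0):
    """Derivative-based segmentation: threshold the (negated)
    Laplacian-of-Gaussian response and count connected components."""
    log = -ndi.gaussian_laplace(img, sigma)
    labels, n = ndi.label(log > 0.5 * log.max())
    return n
```

Sweeping noise_sd and the blob density while recording where segment_count departs from the true count yields exactly the kind of SNR/density validation curve described in step two, before moving to the chimera-based experimental ground truth of step three.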

  16. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  17. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  18. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for generating new views for stereoscopic and multi-view displays from the small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation by applying a re-sampling algorithm, based on a bi-cubic spline function space, that produces smooth images. Because no approximation is made on the position of the samples, geometrical distortions in the final images due to approximated sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generating the views needed for SynthaGram™ auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high-quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
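A nearest-pixel sketch of the forward-mapping disparity compensation and hole filling described above (the paper's real-precision mapping and bi-cubic spline re-sampling are replaced here by integer rounding and a depth-unaware row-wise fill, purely for illustration):

```python
import numpy as np

def forward_warp(view, disparity, scale=1.0):
    """Forward-map each pixel horizontally by its disparity; on collisions
    keep the sample with the largest disparity (closest to the camera),
    and flag newly exposed pixels as holes."""
    h, w = view.shape[:2]
    out = np.zeros_like(view)
    zbuf = np.full((h, w), -np.inf)   # disparity acts as inverse depth
    hole = np.ones((h, w), bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + scale * disparity[y, x]))
            if 0 <= xt < w and disparity[y, x] > zbuf[y, xt]:
                out[y, xt] = view[y, x]
                zbuf[y, xt] = disparity[y, x]
                hole[y, xt] = False
    return out, hole

def fill_holes(img, hole):
    """Minimal inpainting: propagate the nearest valid pixel from the left,
    then from the right, along each row."""
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(h):
        for x in range(w):
            if hole[y, x] and x > 0:
                out[y, x] = out[y, x - 1]
        for x in range(w - 2, -1, -1):
            if hole[y, x]:
                out[y, x] = out[y, x + 1]
    return out
```

The paper keeps the warped samples at their real-valued positions and re-samples with bi-cubic splines instead of rounding, and its inpainting is depth-aware so that holes are filled from background rather than foreground; the sketch only shows the forward-mapping/occlusion bookkeeping those refinements build on.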

  19. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials has lagged due to a number of technical problems. Potentially, the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of features such as void space or material boundaries. However, for many geomaterials this method cannot be used, because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not undergone extensive sample preparation are of poor quality and lack the image and edge contrast necessary to apply any commonly used segmentation technique for quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology for quantitative 3D analysis of images of geomaterials collected with a confocal microscope, with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes image filtration to enhance the edges or material interfaces, and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the

  20. Complex Resistivity 3D Imaging for Ground Reinforcement Site

    NASA Astrophysics Data System (ADS)

    Son, J.; Kim, J.; Park, S.

    2012-12-01

    The induced polarization (IP) method is used for mineral exploration and is generally classified into two categories: time-domain and frequency-domain methods. The IP method in the frequency domain measures amplitude and absolute phase relative to the transmitted currents, and is often called spectral induced polarization (SIP) when measurements are made over wide-band frequencies. Our research group has been studying modeling and inversion algorithms for the complex resistivity method for several years and recently started to apply the method to various field applications. We have completed the development of 2D/3D modeling and inversion programs and are developing another algorithm to use the wide-band data altogether. Until now, the complex resistivity (CR) method has mainly been used for surface or tomographic surveys in mineral exploration. Through this experience, we found that the resistivity section from the CR method is very similar to that of the conventional resistivity method, and that interpretation of the phase section generally matches the geological information of the survey area. But because most survey areas have rough and complex terrain, 2D surveys and interpretation are generally used. In this study, a case study of a 3D CR survey conducted at a site where ground reinforcement was performed to prevent subsidence is introduced. Data were acquired with the Zeta system, the complex resistivity measurement system produced by Zonge Co., using 8 frequencies from 0.125 to 16 Hz. The 2D survey was conducted on a total of 6 lines with 5 m dipole spacing and 20 electrodes; the line length is 95 meters for every line. Among the 8 frequencies, data below 1 Hz were used considering their quality. With the 6 lines of data, a 3D inversion was conducted. First, a 2D interpretation was made with the acquired data and its results were compared with those of a resistivity survey. The resulting resistivity image sections of the CR and resistivity methods were very similar. Anomalies in the phase image section showed good agreement

  1. High Time Resolution Photon Counting 3D Imaging Sensors

    NASA Astrophysics Data System (ADS)

    Siegmund, O.; Ertley, C.; Vallerga, J.

    2016-09-01

    Novel sealed-tube microchannel plate (MCP) detectors using next-generation cross-strip (XS) anode readouts and high-performance electronics have been developed to provide photon-counting imaging sensors for astronomy and high-time-resolution 3D remote sensing. 18 mm aperture sealed tubes with MCPs and high-efficiency Super-GenII or GaAs photocathodes have been implemented to access the visible/NIR regimes for ground-based research, astronomical and space sensing applications. The cross-strip anode readouts, in combination with PXS-II high-speed event processing electronics, can process high single-photon counting event rates of >5 MHz (~80 ns dead time per event) and time-stamp events to better than 25 ps. Furthermore, we are developing a high-speed ASIC version of the electronics for low-power/low-mass spaceflight applications. For a GaAs tube the peak quantum efficiency has degraded from 30% (at 560-850 nm) to 25% over 4 years, but for Super-GenII tubes the peak quantum efficiency of 17% (peak at 550 nm) has remained unchanged for over 7 years. The Super-GenII tubes have a uniform spatial resolution of <30 μm FWHM (~1 x 10^6 gain) and single-event timing resolution of 100 ps (FWHM). The relatively low-gain MCP photon-counting operation also permits longer overall sensor lifetimes and high local counting rates. Using the high timing resolution, we have demonstrated 3D object imaging with laser-pulse (630 nm, 45 ps jitter Pilas laser) reflections in single-photon counting mode, with spatial and depth sensitivity on the order of a few millimeters. A 50 mm Planacon sealed tube was also constructed, using atomic-layer-deposited microchannel plates, which potentially offer better overall sealed-tube lifetime, quantum efficiency and gain stability. This tube achieves standard bialkali quantum efficiency levels, is stable, and has been coupled to the PXS-II electronics and used to detect and image fast laser pulse signals.

  2. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as part of the comprehensive approach of the European FP7 project TeraSCREEN, which uses multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution of up to 5 mm is obtained. With the 16×16 MIMO system, 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles, where the angular resolution is obtained by a focusing elliptical mirror. With this system a high resolution 3D image can be generated at 4 frames per second, each frame containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design, and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system, underlining the feasibility of the approach.
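    The figures quoted above follow from two standard radar relations: range resolution is c/(2B) for sweep bandwidth B, and a fully populated MIMO array of 16 transmitters and 16 receivers yields 16 × 16 = 256 virtual channels. A minimal sketch checking both numbers (constants and names are illustrative, not from the TeraSCREEN system):

    ```python
    # Range resolution of a wideband radar: delta_R = c / (2 * B).
    # Checks the abstract's figures: 30 GHz bandwidth -> ~5 mm, and a
    # 16x16 MIMO virtual array -> 256 azimuth bins.
    C = 299_792_458.0  # speed of light, m/s

    def range_resolution(bandwidth_hz: float) -> float:
        """Theoretical range resolution for a given sweep bandwidth."""
        return C / (2.0 * bandwidth_hz)

    n_tx, n_rx = 16, 16
    virtual_channels = n_tx * n_rx  # one azimuth bin per virtual channel

    print(round(range_resolution(30e9) * 1e3, 2))  # mm -> 5.0
    print(virtual_channels)                        # -> 256
    ```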

  3. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Considerable effort has been devoted to 3D imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras share the same spatial resolution, which lets us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map constraining the stereo pairs during matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up stereo matching, increases matching reliability and stability, realizes embedded computation, and expands the range of applications.
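    The core idea of the abstract, restricting the disparity search to a narrow window around a TOF-derived initial guess, can be sketched in a few lines. This is a minimal software illustration, not the authors' FPGA design; the sum-of-absolute-differences cost, window sizes, and variable names are assumptions:

    ```python
    # Sketch: block matching in which a TOF depth map narrows the disparity
    # search to +/-radius around a per-pixel initial guess.
    import numpy as np

    def constrained_sad_match(left, right, init_disp, radius=2, win=1):
        """Per-pixel disparity found only near the TOF-derived initial value."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(win, h - win):
            for x in range(win, w - win):
                best_cost, best_d = np.inf, 0
                d0 = int(init_disp[y, x])
                for d in range(max(0, d0 - radius), d0 + radius + 1):
                    if x - d - win < 0:
                        continue  # disparity would leave the right image
                    cost = np.abs(left[y-win:y+win+1, x-win:x+win+1]
                                  - right[y-win:y+win+1, x-d-win:x-d+win+1]).sum()
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # Toy example: the right image is the left image shifted by 3 pixels.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (20, 40)).astype(np.float64)
    right = np.roll(left, -3, axis=1)   # true disparity = 3
    guess = np.full(left.shape, 3)      # pretend the TOF camera gave us ~3
    d = constrained_sad_match(left, right, guess)
    print(int(np.median(d[2:-2, 6:-6])))  # -> 3
    ```

    With the search limited to a handful of candidate disparities per pixel, the inner loop maps naturally onto parallel hardware, which is what makes the FPGA implementation attractive.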

  4. Fast 3-d tomographic microwave imaging for breast cancer detection.

    PubMed

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

  5. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and thereby improve image localization. With an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes while still preserving valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
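    The sparsity-regularized linearized inversion described above is typically solved with an iterative soft-thresholding scheme built from the forward and adjoint operators. A schematic sketch follows; a small dense random matrix stands in for the NUFFT-based forward operator, and the step size, regularization weight, and sparse "scene" are illustrative assumptions, not values from the paper:

    ```python
    # ISTA (iterative soft thresholding) for 0.5*||A x - y||^2 + lam*||x||_1,
    # the generic form of a sparsity-regularized linearized inversion.
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam=0.05, n_iter=2000):
        """Gradient step on the data term, then soft-threshold for sparsity."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)       # adjoint applied to the residual
            x = soft_threshold(x - grad / L, lam / L)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)   # stand-in forward operator
    x_true = np.zeros(100)
    x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]             # a sparse "scene"
    y = A @ x_true
    x_hat = ista(A, y)
    print(np.flatnonzero(np.abs(x_hat) > 0.5))  # indices of recovered reflectors
    ```

    In the paper's setting, the matrix products would be replaced by calls to the NUFFT forward operator and the interpolation/upsampled-FFT adjoint, keeping each iteration fast.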

  6. Depth-expression characteristics of multi-projection 3D display systems [invited].

    PubMed

    Park, Soon-gi; Hong, Jong-Young; Lee, Chang-Kun; Miranda, Matheus; Kim, Youngmin; Lee, Byoungho

    2014-09-20

    A multi-projection display consists of multiple projection units. Because of the large amount of image data involved, a multi-projection system can show large, high-quality images. Depending on the projection geometry and the optical configuration, multi-projection systems show different viewing characteristics for the generated three-dimensional images. In this paper, we analyzed the various projection geometries of multi-projection systems and explained the different depth-expression characteristics of each individual projection geometry. We also demonstrated the depth-expression characteristics of an experimental multi-projection system.

  7. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  8. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  9. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 × 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system 'Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  10. A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images

    PubMed Central

    Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade

    2016-01-01

    Background To describe how thoracic surgeons use 2D and 3D medical imaging for surgical planning, clinical practice, and teaching in thoracic surgery, and to assess Brazilian thoracic surgeons' initial and final preferences for 2D versus 3D image models before and after acquiring theoretical knowledge about the generation, manipulation, and interactive viewing of 3D images. Methods A descriptive cross-sectional survey of Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery) who answered an online questionnaire on their computers or personal devices. Results Of the 395 email invitations that were viewed, 107 surgeons completed the survey. There was no statistically significant difference when comparing 2D versus 3D image models for the following purposes: diagnosis, assessment of the extent of disease, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. Surgeons were asked which type of tomographic image display they routinely used in clinical practice (2D alone, or 3D models combined with 2D images) and which they preferred at the end of the questionnaire. Exclusive use of 2D images: initial choice 50.47%, final preference 14.02%. Use of 3D models combined with 2D images: initial choice 48.60%, final preference 85.05%. The shift in final preference toward 3D models used together with 2D images was significant (P<0.0001). Conclusions There is a lack of knowledge of 3D imaging, and of the use and interactive manipulation of dedicated 3D applications, with a consequent lack of uniformity in surgical planning based on CT images. These findings confirm a change in thoracic surgeons' preference from 2D views toward 3D imaging technologies. PMID:27621874

  11. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  12. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright

    PubMed Central

    Kane, David; Held, Robert T.; Banks, Martin S.

    2012-01-01

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer’s inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer’s head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer’s head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays. PMID:24058723

  13. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright.

    PubMed

    Kane, David; Held, Robert T; Banks, Martin S

    2012-02-09

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer's inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer's head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer's head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays.

  14. Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments

    PubMed Central

    Welf, Erik S.; Driscoll, Meghan K.; Dean, Kevin M.; Schäfer, Claudia; Chu, Jun; Davidson, Michael W.; Lin, Michael Z.; Danuser, Gaudenz; Fiolka, Reto

    2016-01-01

    The microenvironment determines cell behavior, but the underlying molecular mechanisms are poorly understood because quantitative studies of cell signaling and behavior have been challenging due to insufficient spatial and/or temporal resolution and limitations on microenvironmental control. Here we introduce microenvironmental selective plane illumination microscopy (meSPIM) for imaging and quantification of intracellular signaling and submicrometer cellular structures as well as large-scale cell morphological and environmental features. We demonstrate the utility of this approach by showing that the mechanical properties of the microenvironment regulate the transition of melanoma cells from actin-driven protrusion to blebbing, and we present tools to quantify how cells manipulate individual collagen fibers. We leverage the nearly isotropic resolution of meSPIM to quantify the local concentration of actin and phosphatidylinositol 3-kinase signaling on the surfaces of cells deep within 3D collagen matrices and track the many small membrane protrusions that appear in these more physiologically relevant environments. PMID:26906741

  15. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
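    The FCM algorithm the abstract builds on alternates two closed-form updates: fuzzy memberships from distances to the cluster centers, and centers as membership-weighted means. A compact sketch with the common fuzzifier m = 2 (the 1D toy data and seed are illustrative, not MR intensities):

    ```python
    # Standard fuzzy c-means updates: memberships u_ik ~ d_ik^(-2/(m-1)),
    # centers as u^m-weighted means, iterated to convergence.
    import numpy as np

    def fcm_1d(x, c, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((c, x.size))
        U /= U.sum(axis=0)                   # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um @ x) / Um.sum(axis=1)
            d = np.abs(centers[:, None] - x[None, :]) + 1e-12  # avoid 0-division
            U = d ** (-2.0 / (m - 1))
            U /= U.sum(axis=0)
        return centers, U

    x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])   # two obvious clusters
    centers, U = fcm_1d(x, 2)
    print(np.sort(centers).round(1))  # -> [0.1 5.1]
    ```

    The initialization strategies compared in the paper amount to different ways of seeding the membership matrix U or of splitting off new cluster centers before these updates run.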

  16. 3D x-ray reconstruction using lightfield imaging

    NASA Astrophysics Data System (ADS)

    Saha, Sajib; Tahtali, Murat; Lambert, Andrew; Pickering, Mark R.

    2014-09-01

    Existing Computed Tomography (CT) systems require projections over a full 360° rotation. Using the principles of lightfield imaging, as few as 4 projections can be sufficient under ideal conditions when the object is illuminated with multiple-point X-ray sources. The concept was presented in a previous work with synthetically sampled data from a synthetic phantom. Application to real data requires precise calibration of the physical setup. This current work presents the calibration procedures along with experimental findings for the reconstruction of a physical 3D phantom consisting of simple geometric shapes. The crucial part of this process is to determine the effective distances of the X-ray paths, which are impossible or very difficult to obtain by direct measurement. Instead, they are calculated by tracking the positions of fiducial markers under prescribed source and object movements. Iterative algorithms are used for the reconstruction. Customized backprojection is used to provide a better initial guess for the iterative algorithms to start with.

  17. 3D imaging of semiconductor components by discrete laminography

    NASA Astrophysics Data System (ADS)

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  18. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  19. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-06

    The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast across physiological differences such as skin tone and/or the presence of hair on the arm or wrist surface are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to our multispectral system in order to provide 3D information about the patient arm orientation. Images of each patient arm are captured under every possible combination of illuminants, and linear discriminant analysis is used to determine the combination of wavelengths that maximizes vein contrast for a given subject.
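    The wavelength-selection step can be pictured as scoring each candidate band by a Fisher (LDA-style) separability criterion between vein-pixel and skin-pixel intensities and keeping the best-scoring band. The sketch below uses synthetic pixel populations; the band centers, contrast values, and the single-band simplification are all assumptions for illustration:

    ```python
    # Score each NIR band by between-class vs within-class scatter of
    # vein and skin pixel intensities, then pick the best band.
    import numpy as np

    def fisher_score(a, b):
        """Fisher criterion for two 1D pixel populations."""
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

    rng = np.random.default_rng(2)
    wavelengths = [760, 800, 850, 910]          # nm, illustrative band centers
    contrast = {760: 0.2, 800: 0.8, 850: 1.2, 910: 0.4}  # synthetic vein contrast

    scores = {}
    for wl in wavelengths:
        skin = rng.normal(1.0, 0.1, 500)                          # skin pixels
        vein = rng.normal(1.0 - contrast[wl] * 0.1, 0.1, 500)     # darker veins
        scores[wl] = fisher_score(vein, skin)

    best = max(scores, key=scores.get)
    print(best)  # band with the largest vein/skin separation -> 850
    ```

    The actual system evaluates combinations of illuminants rather than single bands, but the scoring idea is the same.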

  20. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treating piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and treatment has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A six-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study reported that fluoroscopically guided injections achieved only 30% accuracy, while ultrasound guidance tripled that figure. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.

  1. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    PubMed

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
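    The final arrangement step, interleaving the rendered view images so that each lenslet region holds one pixel per viewing direction, is the key data reshuffle in the conversion. A toy sketch with 4 views in a horizontal-parallax-only layout (the real display uses 60 directions and a 2D lenslet layout; shapes, names, and the omitted optical flip are illustrative assumptions):

    ```python
    # Interleave N rendered view images into an integral photography image:
    # pixel k under each lenslet comes from view k.
    import numpy as np

    def views_to_integral(views):
        """views: (N, H, W) -> IP image of shape (H, W*N), N pixels per lenslet."""
        n, h, w = views.shape
        ip = np.zeros((h, w * n), dtype=views.dtype)
        for k in range(n):
            ip[:, k::n] = views[k]   # strided write: one column of each lenslet
        return ip

    views = np.stack([np.full((2, 3), k) for k in range(4)])  # 4 constant "views"
    ip = views_to_integral(views)
    print(ip[0])  # -> [0 1 2 3 0 1 2 3 0 1 2 3]
    ```

    Because every output pixel is an independent gather from the view images, this arrangement parallelizes trivially, which is why the paper's GPGPU implementation reaches real-time rates on a single PC.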

  2. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
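    To make the abstract's parameter discussion concrete, here is a direct (unoptimized) bilateral filter showing the roles of the stencil radius and the two scaling parameters, the spatial sigma and the range sigma, whose poor choice the study found can ruin denoising quality. This is a plain NumPy sketch, not the GPU implementation; all parameter values are illustrative:

    ```python
    # Bilateral filter: each output pixel is a weighted average where weights
    # combine a fixed spatial Gaussian with a per-pixel range (intensity) Gaussian.
    import numpy as np

    def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.1):
        h, w = img.shape
        out = np.zeros_like(img)
        ys, xs = np.mgrid[-radius:radius+1, -radius:radius+1]
        spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # spatial kernel
        pad = np.pad(img, radius, mode='edge')
        for y in range(h):
            for x in range(w):
                patch = pad[y:y+2*radius+1, x:x+2*radius+1]
                rng_w = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
                wgt = spatial * rng_w
                out[y, x] = (wgt * patch).sum() / wgt.sum()
        return out

    step = np.zeros((8, 8)); step[:, 4:] = 1.0            # ideal edge
    noisy = step + np.random.default_rng(3).normal(0, 0.05, step.shape)
    den = bilateral(noisy)
    # The range kernel suppresses averaging across the edge, so the two
    # halves stay well separated after filtering.
    print(float(den[:, 5:].mean() - den[:, :3].mean()) > 0.9)  # -> True
    ```

    Setting sigma_r much larger than the edge height would collapse the filter toward a plain Gaussian blur, which is the kind of parameter misstep the study warns about.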

  3. Using 3-D OFEM for movement correction and quantitative evaluation in dynamic cardiac NH3 PET images

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yang, Bang-Hung; Chen, Chih-Hao; Wu, Liang-Chih; Liu, Ren-Shyan; Chung, Being-Tau; Lin, Kang-Ping

    2005-04-01

    Various forms of cardiac pathology, such as myocardial ischemia and infarction, can be characterized with 13NH3 PET images. In clinical practice, a polar map (bullseye image) is derived by combining images from multiple planes (designated by the circle around the myocardium in the above images), so that information from the entire myocardium can be displayed in a single image for diagnosis. However, image artifacts arising from body movement or breathing motion during the acquisition period result in poorly defined regions of myocardial disorder in the bullseye image. In this study, a 3-D motion and movement correction method is developed to solve this artifact problem and improve the accuracy of the diagnostic bullseye image. The proposed method is based on a 3-D optical flow estimation method (OFEM) and cooperates with a particular dynamic imaging protocol that acquires serial PET images (5 frames) in the later half of the imaging period. The 3-D OFEM assigns to each image point a 3-D flow velocity vector associated with the non-rigid motion of the time-varying brightness of a sequence of images; these vectors give the corresponding image positions between frames for motion correction. To validate the performance of the proposed method, 10 normal and 20 abnormal whole-body dynamic PET imaging studies were used, and the results show that bullseye images generated from the corrected images present clear and well-defined tissue regions for clinical diagnosis.

  4. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT, and MRT researchers now aim to progress to human clinical trials in the near future. The purpose of this study was to demonstrate high resolution 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed with a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved, with the full width at half maximum of microbeams measured on images with resolutions as fine as 0.09 μm/pixel. The profiles obtained demonstrated the change in the peak-to-valley dose ratio for interspersed MRT microbeam arrays, and subtle variations in sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for independent spatial and geometrical verification of MRT beam delivery.
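
    The full-width-at-half-maximum measurement applied to the microbeam profiles above can be sketched with a generic FWHM routine (linear interpolation at the half-maximum crossings; the Gaussian test profile and its width are illustrative, not measured data):

```python
import numpy as np

def fwhm(profile, pixel_size):
    """Full width at half maximum of a 1-D beam profile, in the units of
    pixel_size, with linear interpolation at the half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def cross(i, j):
        # linear interpolation of the crossing on the segment from sample i to j
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * pixel_size
```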

  5. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds, and accuracies. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, however, a systematic approach for evaluating the performance of different 3D surface imaging systems has not existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  6. Can the perception of depth in stereoscopic images be influenced by 3D sound?

    NASA Astrophysics Data System (ADS)

    Turner, Amy; Berry, Jonathan; Holliman, Nick

    2011-03-01

    The creation of binocular images for stereoscopic display has benefited from significant research and commercial development in recent years. However, perhaps surprisingly, the effect of adding 3D sound to stereoscopic images has rarely been studied. If auditory depth information can enhance or extend the visual depth experience, it could become an important way to extend the limited depth budget on all 3D displays and reduce the potential for fatigue from excessive use of disparity. Objective: As there is limited research in this area, our objective was to ask two preliminary questions. First, what is the smallest difference in forward depth that can be reliably detected using 3D sound alone? Second, does the addition of auditory depth information influence the visual perception of depth in a stereoscopic image? Method: To investigate auditory depth cues, we used a simple sound system to test the experimental hypothesis that participants will perform better than chance at judging the depth differences between two speakers a set distance apart. In our second experiment, investigating both auditory and visual depth cues, we set up a sound system and a stereoscopic display to test the experimental hypothesis that participants judge a visual stimulus to be closer if they hear a closer sound when viewing the stimulus. Results: In the auditory depth cue trial, every depth difference tested gave significant results, demonstrating that the human ear can hear depth differences between physical sources as small as 0.25 m at 1 m. In our trial investigating whether audio information can influence the visual perception of depth, we found that participants did report visually perceiving an object to be closer when the sound was played closer to them, even though the image depth remained unchanged. Conclusion: The positive results in the two trials show that we can hear small differences in forward depth between sound sources and suggest that it could be practical to extend the apparent

  7. Clinical Application of Solid Model Based on Trabecular Tibia Bone CT Images Created by 3D Printer

    PubMed Central

    Cho, Jaemo; Park, Chan-Soo; Kim, Yeoun-Jae

    2015-01-01

    Objectives The aim of this work is to use a 3D solid model to predict the mechanical loads and fracture risk of human bone associated with bone disease conditions according to biomechanical engineering parameters. Methods We used special image processing tools for image segmentation and three-dimensional (3D) reconstruction to generate the meshes necessary for producing a solid model with a 3D printer from computed tomography (CT) images of the human tibia's trabecular and cortical bones. We examined the mechanical defects of the tibia's trabecular bones. Results Image processing tools and segmentation techniques were used to analyze bone structures and produce a solid model with a 3D printer. Conclusions Today, bio-imaging (CT and magnetic resonance imaging) devices are able to display and reconstruct 3D anatomical details, and diagnostics are becoming increasingly vital to the quality of patient treatment planning and clinical treatment. Furthermore, radiographic images are being used to study biomechanical systems with several aims, namely, to describe and simulate the mechanical behavior of certain anatomical systems, to analyze pathological bone conditions, to study tissue structure and properties, and to create a solid model using a 3D printer to support surgical planning and reduce experimental costs. Research using image processing tools and segmentation techniques to analyze bone structures and produce a solid model with a 3D printer is thus rapidly becoming very important. PMID:26279958

  8. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. Specifically, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate the pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. The quality score is then obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the devised algorithm aligns closely with subjective assessment. PMID:25133265
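
    The pointwise gradient-magnitude similarity that the index averages can be sketched in 2D (the paper computes it over a 3D disparity-space volume; the stability constant `c` below is an assumed value, not taken from the paper):

```python
import numpy as np

def gms_score(ref, dist, c=0.0026):
    """Mean gradient-magnitude similarity between a reference and a distorted
    image. A 2D sketch of the pointwise similarity averaged by the index."""
    def grad_mag(im):
        gy, gx = np.gradient(np.asarray(im, dtype=float))
        return np.sqrt(gx**2 + gy**2)
    g1, g2 = grad_mag(ref), grad_mag(dist)
    # similarity is 1 where gradient magnitudes agree, < 1 where they differ
    gms = (2.0 * g1 * g2 + c) / (g1**2 + g2**2 + c)
    return float(gms.mean())
```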

  9. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  10. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  11. Computer acquisition of 3D images utilizing dynamic speckles

    NASA Astrophysics Data System (ADS)

    Kamshilin, Alexei A.; Semenov, Dmitry V.; Nippolainen, Ervin; Raita, Erik

    2006-05-01

    We present a novel technique for fast, non-contact, and continuous profile measurement of rough surfaces using dynamic speckles. The dynamic speckle pattern is generated when the laser beam scans the surface under study. The most impressive feature of the proposed technique is its ability to work at extremely high scanning speeds of hundreds of meters per second. The technique is based on continuous frequency measurement of the light-power modulation after spatial filtering of the scattered light. A complete optical-electronic system was designed and fabricated for fast measurement of the speckle velocity, its conversion into distance, and data acquisition into a computer. The measured surface profile is displayed on a PC monitor in real time. The response time of the measuring system is below 1 μs. Important parameters of the system, such as accuracy, measurement range, and spatial resolution, are analyzed. Limits of the spatial filtering technique used for continuous tracking of the speckle-pattern velocity are shown, and possible ways of further improving the measurement accuracy are demonstrated. Owing to its extremely fast operation, the proposed technique could be applied for online control of the 3D shape of complex objects (e.g., electronic circuits) during assembly.

  12. 3D Sorghum Reconstructions from Depth Images Identify QTL Regulating Shoot Architecture

    PubMed Central

    2016-01-01

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits. PMID:27528244
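
    Trait measurements of the kind extracted from the 3D reconstructions above can be sketched directly on a plant point cloud. The two trait definitions below (height, and a spread-based "compactness" ratio) are hypothetical simplifications for illustration, not the paper's pipeline:

```python
import numpy as np

def shoot_traits(points, ground_z=0.0):
    """Two toy shoot-architecture traits from an N x 3 plant point cloud:
    shoot height, and a 'compactness' ratio (mean horizontal spread about the
    vertical axis through the centroid, divided by height). Both definitions
    are illustrative assumptions."""
    pts = np.asarray(points, dtype=float)
    height = pts[:, 2].max() - ground_z           # tallest point above ground
    axis_xy = pts[:, :2].mean(axis=0)             # vertical axis through centroid
    spread = np.linalg.norm(pts[:, :2] - axis_xy, axis=1).mean()
    return height, spread / height
```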

  13. 3D Seismic Imaging over a Potential Collapse Structure

    NASA Astrophysics Data System (ADS)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle East has seen a recent boom in construction, including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, and depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly at the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques for their effectiveness in interrogating the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m² and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop, while the geophones consisted of 3-component 10 Hz velocity sensors. So far, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.
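
    In its linearized form, the travel-time tomography step above reduces to a least-squares system relating first-arrival times to cell slownesses. A minimal sketch (toy two-cell model with straight rays assumed; real tomography uses curved rays and regularization):

```python
import numpy as np

def invert_slowness(L, t):
    """Linearized travel-time tomography: first-arrival times satisfy
    t = L @ s, where L[i, j] is the ray-path length of arrival i in cell j
    and s holds cell slownesses (1/velocity). Least squares recovers s."""
    s, *_ = np.linalg.lstsq(np.asarray(L, dtype=float),
                            np.asarray(t, dtype=float), rcond=None)
    return 1.0 / s   # cell velocities
```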

  14. Venus Topography in 3D: Imaging of Coronae and Chasmata

    NASA Astrophysics Data System (ADS)

    Jurdy, D. M.; Stefanick, M.; Stoddard, P. R.

    2006-12-01

    Venus' surface hosts hundreds of circular to elongate features, ranging from 60-2600 km, and averaging somewhat over 200 km, in diameter. These enigmatic structures have been classified as "coronae" and attributed to either tectono-volcanic or impact-related mechanisms. A linear to arcuate system of chasmata - rugged zones with some of Venus' deepest troughs, extend 1000's of kilometers. They have extreme relief, with elevations changing as much as 7 km in just 30 km distance. The 54,464 km-long Venus chasmata system defined in great detail by Magellan can be fit by great circle arcs at the 89.6% level, and when corrected for the smaller size of the planet, the total length of the chasmata system measures within 2.7% of the length of Earth's spreading ridges. The relatively young Beta-Atla-Themis region (BAT), within 30° of the equator from 180-300° longitude has the planet's strongest geoid highs and profuse volcanism. This BAT region, the intersection of three rift zones, also has a high coronal concentration, with individual coronae closely associated with the chasmata system. The chasmata with the greatest relief on Venus show linear rifting that prevailed in the latest stage of tectonic deformation. For a three-dimensional view of Venus' surface, we spread out the Magellan topography on a flat surface using a Mercator projection to preserve shape. Next we illuminate the surface with beams at angle 45° from left (or right) so as to simulate mid afternoon (or mid-morning). Finally, we observe the surface with two eyes looking through orange and azure colored filters respectively. This gives a 3D view of tectonic features in the BAT area. The 3D images clearly show coronae sharing boundaries with the chasmata. This suggests that the processes of rifting and corona-formation occur together. It seems unlikely that impact craters would create this pattern.

  15. [3D virtual imaging of the upper airways].

    PubMed

    Ferretti, G; Coulomb, M

    2000-04-01

    The different three-dimensional reconstructions of the upper airways that can be obtained with spiral computed tomography (CT) are presented here. The parameters indispensable for achieving spiral CT images that are as realistic as possible are recalled, together with the advantages and disadvantages of the different techniques. Multislice reconstruction (MSR) produces slices in different planes of space with the high contrast of CT slices; these provide information similar to that obtained for the rare indications for thoracic MRI. Thick slice reconstructions with maximum intensity projection (MIP) or minimum intensity projection (minIP) give projection views whose contrast can be modified by selecting the denser (MIP) or less dense (minIP) voxels; they find application in the exploration of the upper airways. Surface and volume external 3D reconstructions can be obtained; they give an overall view of the upper airways, similar to a bronchogram. Virtual endoscopy reproduces real endoscopic images but cannot provide information on the aspect of the mucosa or biopsy specimens. It offers possible applications for preparing, guiding, and controlling interventional fibroscopy procedures.

  16. Multiframe image point matching and 3-d surface reconstruction.

    PubMed

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.
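
    The two-frame correlation baseline against which the JMM and WVM are compared can be sketched as a window search maximizing normalized cross-correlation (a generic implementation; the window and search sizes are assumptions, and the paper's methods sharpen this criterion by using n > 2 frames):

```python
import numpy as np

def match_point(img1, img2, pt, win=5, search=10):
    """Classical two-frame matching: for a point in img1, find the position
    in img2 whose surrounding window maximizes normalized cross-correlation."""
    r, c = pt
    w1 = img1[r - win:r + win + 1, c - win:c + win + 1].astype(float)
    w1 = (w1 - w1.mean()) / (w1.std() + 1e-12)   # zero-mean, unit-variance
    best, best_pos = -np.inf, pt
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            w2 = img2[r + dr - win:r + dr + win + 1,
                      c + dc - win:c + dc + win + 1].astype(float)
            w2 = (w2 - w2.mean()) / (w2.std() + 1e-12)
            score = float((w1 * w2).mean())      # normalized cross-correlation
            if score > best:
                best, best_pos = score, (r + dr, c + dc)
    return best_pos
```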

  17. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    NASA Astrophysics Data System (ADS)

    Ranjan Gartia, Manas; Hsiao, Austin; Sivaguru, Mayandi; Chen, Yi; Logan Liu, G.

    2011-09-01

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  18. Advanced 3D polarimetric flash ladar imaging through foliage

    NASA Astrophysics Data System (ADS)

    Murray, James T.; Moran, Steven E.; Roddier, Nicolas; Vercillo, Richard; Bridges, Robert; Austin, William

    2003-08-01

    High-resolution three-dimensional flash ladar system technologies are under development that enable remote identification of vehicles and armament hidden by heavy tree canopies. We have developed a sensor architecture and design that employs a 3D flash ladar receiver to address this mission. The receiver captures 128×128×>30 three-dimensional images for each laser pulse fired. The voxel size of the image is 3"×3"×4" at the target location. A novel signal-processing algorithm has been developed that achieves sub-voxel (sub-inch) range precision estimates of target locations within each pixel. Polarization discrimination is implemented to augment the target-to-foliage contrast. When employed, this method improves the range resolution of the system beyond the classical limit (based on pulsewidth and detection bandwidth). Experiments performed with a 6 ns transmitter pulsewidth demonstrate 1-inch range resolution of a tank-like target occluded by foliage and a range precision of 0.3" for unoccluded targets.

  19. Improved Second-Generation 3-D Volumetric Display System. Revision 2

    DTIC Science & Technology

    1998-10-01

    …2 mm, 2 Watt… The factor of 0.7 is used here to account for the 514-nm laser wavelength instead of the 555-nm peak of the photopic curve. For a spot…lasers over a 40-minute time period. The spikes in the curves are due to a defective power meter and are not real. The Coherent had virtually single…visible three-dimensional images. A primary element in the helical display system is a rotating helically curved screen, referred to as the "helix"…

  20. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay.

    PubMed

    Liao, Hongen; Ishihara, Hirotaka; Tran, Huy Hoang; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi

    2010-01-01

    This paper describes a precision-guided surgical navigation system for minimally invasive surgery. The system combines a laser guidance technique with a three-dimensional (3D) autostereoscopic image overlay technique. Images of surgical anatomic structures superimposed onto the patient are created by employing an animated imaging method called integral videography (IV), which can display geometrically accurate 3D autostereoscopic images and reproduce motion parallax without the need for special viewing or tracking devices. To improve the placement accuracy of surgical instruments, we integrated the image overlay system with a laser guidance system for alignment of the surgical instrument and better visualization of the patient's internal structure. We fabricated a laser guidance device and mounted it on an IV image overlay device. Experimental evaluations showed that the system could guide a linear surgical instrument toward a target with an average error of 2.48 mm and a standard deviation of 1.76 mm. Further improvement to the design of the laser guidance device and the patient-image registration procedure of the IV image overlay will make this system practical; its use would increase surgical accuracy and reduce invasiveness.

  1. 3D-3D registration of partial capitate bones using spin-images

    NASA Astrophysics Data System (ADS)

    Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

    2013-03-01

    It is often necessary to register partial objects in medical imaging: due to a limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage of the 4DCT technique (38.4 mm [2]) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained from a patient. The capitate was segmented from the static CT and from one phase of 4DCT in which the whole bone was available. Spin-image registration was performed between the static and 4DCT surfaces. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and the registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be
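
    The spin-image construction described above can be sketched as follows (a Johnson-style descriptor computed at one oriented vertex; the bin geometry below is an assumption, not the study's configuration):

```python
import numpy as np

def spin_image(points, vertex, normal, bin_size=1.0, n_bins=8):
    """Spin-image at an oriented vertex: each surface point p maps to
      beta  = n . (p - v)                  (signed height along the normal)
      alpha = sqrt(|p - v|^2 - beta^2)     (radial distance from the axis)
    and the (alpha, beta) pairs fill a 2D histogram. The histogram is
    invariant to rotation about the normal, which is what makes it a
    pose-invariant representation for matching partial poses."""
    d = np.asarray(points, dtype=float) - np.asarray(vertex, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    beta = d @ n
    alpha = np.sqrt(np.maximum((d**2).sum(axis=1) - beta**2, 0.0))
    half = n_bins * bin_size / 2.0
    hist, _, _ = np.histogram2d(alpha, beta, bins=n_bins,
                                range=[[0.0, 2.0 * half], [-half, half]])
    return hist
```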