Science.gov

Sample records for 3d display systems

  1. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization, which is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to aid the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images over a complete 360-degree range of viewing positions without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital micromirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  2. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the statement of work (SOW) and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of those requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system was reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  3. Step barrier system multiview glassless 3D display

    NASA Astrophysics Data System (ADS)

    Mashitani, Ken; Hamagishi, Goro; Higashino, Masahiro; Ando, Takahisa; Takemoto, Satoshi

    2004-05-01

    The step-barrier technology with multiple parallax images overcomes a problem of the conventional parallax-barrier system, in which the image quality of each view deteriorates only in the horizontal direction; the step barrier distributes this resolution loss between the horizontal and vertical directions. The system has a simple structure, which consists of a flat-panel display and a step barrier. The apertures of the step barrier are not stripes but tiny rectangles arranged in the shape of stairs, and the sub-pixels of each image have the same arrangement. Three image-processing methods for the system, applicable to computer graphics and real images, have also been proposed. Two types of 3-D displays were then developed: a 22-inch model and a 50-inch model. The 22-inch model employs a very high-definition liquid crystal display of 3840 x 2400 pixels. The number of parallax images is seven and the resolution of one image is 1646 x 800. The 50-inch model has four viewing points on a plasma display panel of 1280 x 768 pixels. It can provide stereoscopic animations, and the resolution of one image is 960 x 256 pixels. Moreover, a structural or electrical 2-D/3-D compatible system was developed.
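
    The per-view resolutions quoted above follow from simple arithmetic once the panel resolution, the number of views, and the step-barrier sub-pixel arrangement are fixed. The sketch below reproduces those figures; the exact way the staircase trades horizontal sub-pixels against vertical rows is an assumption consistent with the numbers reported, not a detail taken from the paper.

```python
# Hedged sketch: per-view resolution under a staircase (step) parallax barrier.
# Assumes the N-view resolution loss is shared across RGB sub-pixels horizontally
# and across three rows vertically, which reproduces the figures quoted above.

def step_barrier_view_resolution(panel_w, panel_h, n_views, subpixels=3):
    horizontal = panel_w * subpixels // n_views   # sub-pixel columns per view
    vertical = panel_h // subpixels               # rows traded for the staircase layout
    return horizontal, vertical

print(step_barrier_view_resolution(3840, 2400, 7))   # ~(1646, 800), the 22-inch LCD model
print(step_barrier_view_resolution(1280, 768, 4))    # (960, 256), the 50-inch plasma model
```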

  4. Wide-viewing-angle floating 3D display system with no 3D glasses

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Previously, the author has described a new 3D imaging technology called 'real depth' with several different configurations and methods of implementation. Included were several methods to 'float' images in free space. Viewers can pass their hands through the image or appear to hold it in their hands. Most implementations provide an angle of view of approximately 45 degrees. The technology produces images at different depths from any display, such as a CRT or LCD, for television, computer, projection, and other formats. Unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. In addition to providing traditional depth cues, such as perspective and occlusion of background images, the technology also provides both horizontal and vertical binocular parallax, producing visual accommodation and convergence that coincide. Consequently, viewing these images does not produce headaches, fatigue, or eyestrain, regardless of how long they are viewed. A method was also proposed to provide a floating-image display system with a wide angle of view. Implementation of this design proved problematic, producing various image distortions. In this paper the author discloses new methods to produce aerial images with a wide angle of view and improved image quality.

  5. Multiview image integration system for glassless 3D display

    NASA Astrophysics Data System (ADS)

    Ando, Takahisa; Mashitani, Ken; Higashino, Masahiro; Kanayama, Hideyuki; Murata, Haruhiko; Funazou, Yasuo; Sakamoto, Naohisa; Hazama, Hiroshi; Ebara, Yasuo; Koyamada, Koji

    2005-03-01

    We have developed a multi-view image integration system, which combines seven parallax video images into a single video image so that it fits the parallax barrier. The apertures of this barrier are not stripes but tiny rectangles arranged in the shape of stairs. Commodity hardware is used to satisfy a specification requiring that the resolution of each parallax video image is SXGA (1645 x 800 pixels), the resulting integrated image is QUXGA-W (3840 x 2400 pixels), and the frame rate is fifteen frames per second. The point is that the system can deliver the QUXGA-W video image, which corresponds to about 27 MB per frame, at 15 fps, that is, about 2 Gbps. Using the integration system and a liquid crystal display with the parallax barrier, we can enjoy an immersive live video image that supports seven viewpoints without special glasses. In addition, since the system can superimpose the CG images of the relevant seven viewpoints onto the live video images, it is possible to communicate with remote users by sharing a virtual object.
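
    A minimal sketch of the sub-pixel interleaving step described above is shown below. The stair-step mapping from sub-pixel position to view index is an assumption chosen to mirror the stair-shaped apertures; the actual mapping used by the system, and its hardware implementation, are not given in the abstract. The views are assumed to have already been resampled to the panel resolution.

```python
import numpy as np

# Hedged sketch: interleave N parallax images sub-pixel-wise into one integrated
# frame for a stair-shaped parallax barrier. The mapping below (view index advancing
# by one sub-pixel per row) is an illustrative assumption, not the paper's exact layout.

def integrate_views(views):
    """views: list of H x W x 3 uint8 arrays, already resampled to panel resolution."""
    n = len(views)
    h, w, _ = views[0].shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            for c in range(3):                    # R, G, B sub-pixels
                view_idx = (x * 3 + c + y) % n    # stair-step advance from row to row
                out[y, x, c] = views[view_idx][y, x, c]
    return out

# Small synthetic example (full QUXGA-W frames would use the same code)
views = [np.full((8, 8, 3), 30 * i, dtype=np.uint8) for i in range(7)]
print(integrate_views(views).shape)               # (8, 8, 3)
```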

  6. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes by adopting a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on the flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important, so we have measured their effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  7. Analysis of optical characteristics of photopolymer-based VHOE for multiview autostereoscopic 3D display system

    NASA Astrophysics Data System (ADS)

    Cho, Byung-Chul; Gu, Jung-Sik; Kim, Eun-Soo

    2002-06-01

    Generally, an autostereoscopic display presents a 3D image to a viewer without the need for glasses or other encumbering viewing aids. In this paper, we propose a new autostereoscopic 3D video display system that allows viewers to observe 3D images within the same range of viewing angles. In this system, a photopolymer-based VHOE made from volume holographic recording materials is used to project multiview images into spatially different directions, sequentially in time. Since this technique is based on a VHOE made from photorefractive photopolymer instead of the conventional parallax barrier or lenticular sheet, the resolution and number of parallax views of the proposed VHOE-based 3D display system are limited by the photopolymer's physical and optical properties. To make the photopolymer applicable to a multiview autostereoscopic 3D display system, it must achieve properties such as low distortion of the diffracted light beam, high diffraction efficiency, and uniform intensities of the light diffracted from the fully recorded gratings. In this paper, the optical and physical characteristics of the DuPont HRF photopolymer-based VHOE, such as distortion of the displayed image, uniformity of the diffracted light intensity, photosensitivity, and diffraction efficiency, are measured and discussed.
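
    For context on the diffraction-efficiency measurements mentioned above, the on-Bragg efficiency of a volume phase transmission grating is commonly estimated with Kogelnik's coupled-wave formula. The sketch below uses that textbook model with placeholder material values; it is not taken from the paper, and the authors may have used a different analysis.

```python
import math

# Hedged sketch: Kogelnik coupled-wave estimate of on-Bragg diffraction efficiency
# for a lossless volume phase transmission grating, a common model for photopolymer
# VHOEs. Parameter values below are assumed for illustration only.

def kogelnik_efficiency(delta_n, thickness_um, wavelength_um, bragg_angle_deg):
    theta = math.radians(bragg_angle_deg)          # Bragg angle inside the medium
    nu = math.pi * delta_n * thickness_um / (wavelength_um * math.cos(theta))
    return math.sin(nu) ** 2

print(kogelnik_efficiency(delta_n=0.02, thickness_um=16,
                          wavelength_um=0.532, bragg_angle_deg=15))
```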

  8. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    Several studies on gaze-tracking techniques using monocular or stereo cameras have been reported. The most widely used gaze-estimation techniques are based on PCCR (pupil center and corneal reflection). These techniques address gaze tracking on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in a 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze-interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in a 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glassless stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach for the estimation of gaze direction and gaze depth and show experimental results.
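
    One generic way to obtain gaze depth once each eye's gaze direction is known is to intersect the two gaze rays; since they rarely meet exactly, the midpoint of their closest approach serves as the fixation-point estimate. The sketch below illustrates that vergence-based idea with an assumed eye geometry; it is not necessarily the estimator used in the paper.

```python
import numpy as np

# Hedged sketch: estimate the 3D fixation point (and hence gaze depth) as the midpoint
# of closest approach between the left- and right-eye gaze rays. Illustrative only;
# the paper's actual estimator may differ.

def fixation_point(p_l, d_l, p_r, d_r):
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    w0 = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b                          # near zero when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return (p_l + s * d_l + p_r + t * d_r) / 2     # midpoint of closest approach

# Assumed geometry: eyes 65 mm apart, both converging on a point 400 mm ahead
p_l, p_r = np.array([-32.5, 0.0, 0.0]), np.array([32.5, 0.0, 0.0])
target = np.array([0.0, 0.0, 400.0])
print(fixation_point(p_l, target - p_l, p_r, target - p_r))   # ~[0, 0, 400]
```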

  9. Four-view stereoscopic imaging and display system for web-based 3D image communication

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server computer, a graphics card with four outputs, a projection-type 4-view 3D display system, and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, available color depth, and number of views. Experimental results show that the proposed system can display 4-view VGA images with full 16-bit color at a frame rate of 15 fps in real time. The image resolution, color depth, frame rate, and number of views are mutually interrelated and can be easily controlled in the proposed system through the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical web-based 3D image communication applications.
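
    A rough bandwidth figure for the reported operating point (4-view VGA, 16-bit color, 15 fps) can be derived from the numbers in the abstract; the abstract itself does not quote a data rate, so the calculation below is only an illustrative back-of-the-envelope estimate of uncompressed throughput.

```python
# Hedged back-of-the-envelope estimate of the uncompressed video throughput for the
# configuration reported above (4-view VGA, 16-bit colour, 15 fps). The bandwidth
# figure is derived here for illustration; it is not quoted in the abstract.

views, width, height = 4, 640, 480
bytes_per_pixel, fps = 2, 15                       # 16-bit colour
rate = views * width * height * bytes_per_pixel * fps
print(f"{rate / 1e6:.1f} MB/s (~{rate * 8 / 1e6:.0f} Mbit/s)")   # ~36.9 MB/s, ~295 Mbit/s
```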

  10. System crosstalk measurement of a time-sequential 3D display using ideal shutter glasses

    NASA Astrophysics Data System (ADS)

    Chen, Fu-Hao; Huang, Kuo-Chung; Lin, Lang-Chin; Chou, Yi-Heng; Lee, Kuen

    2011-03-01

    The market for stereoscopic 3D TV has grown rapidly in recent years; however, for 3D TV to really take off, the interoperability of shutter glasses (SG) across different TV sets must be solved, so we developed a measurement method with ideal shutter glasses (ISG) that separates time-sequential stereoscopic displays from the SG. To measure the crosstalk of a time-sequential stereoscopic 3D display itself, the influence of the SG must be eliminated. The advantages are that the sources of crosstalk are distinguished and the interoperability of SG is broadened. Hence, this paper proposes ideal shutter glasses, whose non-ideal properties are eliminated, as a platform to evaluate the crosstalk arising purely from the display. In the ISG method, the illuminance of the display is measured in the time domain to analyze the system crosstalk (SCT) of the display. In this experiment, the ISG method was used to measure SCT with a high-speed-response illuminance meter. From the time-resolved illuminance signals, the slow time response of the liquid crystal that leads to SCT is visualized and quantified. Furthermore, an intriguing phenomenon was observed: SCT measured through SG increases as the viewing distance shortens, which may arise from LC leakage of the display and shutter leakage at large viewing angles. We therefore measured how LC and shutter leakage depend on viewing angle and verified this explanation. In addition, we used the ISG method to evaluate two displays.
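
    A commonly used definition of system crosstalk for time-sequential stereo relates the luminance leaking into the unintended eye channel to the luminance of the intended channel, with the black level subtracted from both. The sketch below uses that generic definition with assumed readings; the paper's exact formulation, measured through ideal shutter glasses, may differ in detail.

```python
# Hedged sketch: a common system-crosstalk (SCT) definition for time-sequential
# stereo displays. Luminance values below are assumed readings, in cd/m^2.

def system_crosstalk(lum_unintended, lum_intended, lum_black):
    return (lum_unintended - lum_black) / (lum_intended - lum_black)

print(f"SCT = {system_crosstalk(3.2, 80.0, 0.4):.1%}")   # ~3.5 %
```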

  11. Implementation of real-time 3D image communication system using stereoscopic imaging and display scheme

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Chul; Kim, Dong-Kyu; Ko, Jung-Hwan; Kim, Eun-Soo

    2004-11-01

    In this paper, a new stereoscopic 3D imaging communication system for real-time teleconferencing applications is implemented using IEEE 1394 digital cameras, an Intel Xeon server computer system, and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate. In the proposed system, two-view images are captured with two digital cameras and processed on the Intel Xeon server. Disparity data are then extracted from them and transmitted to the client system, together with the left image, over a network; at the receiving end the two-view images are reconstructed and displayed on the stereoscopic 3D display system. The program controlling the overall system is developed using the Microsoft DirectShow SDK. Experimental results show that the proposed system can display stereoscopic images in real time with full 16-bit color and a frame rate of 15 fps.
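
    The client-side step described above, reconstructing the second view from the transmitted left image plus disparity data, can be illustrated with a simple forward-warping routine. The abstract does not spell out the reconstruction algorithm, so the sketch below, including its naive hole filling, is only an assumed depth-image-based-rendering style illustration.

```python
import numpy as np

# Hedged sketch: rebuild the right view from the left image and a per-pixel horizontal
# disparity map by forward warping, with naive hole filling. Illustrative only; the
# paper does not describe its exact reconstruction method.

def reconstruct_right(left, disparity):
    """left: H x W x 3 array, disparity: H x W array of pixel shifts."""
    h, w, _ = left.shape
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))   # shift pixel to its right-view column
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
                filled[y, xr] = True
        for x in range(1, w):                      # naive hole filling along the row
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```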

  12. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with embedded optical sensors was proposed. Based on this optical-sensor system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator enables interaction with autostereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  13. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When a linearly polarized image from a projector passes through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone is shifted laterally. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization-switching device from a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed. PMID:27137284

  14. Holographic display system for dynamic synthesis of 3D light fields with increased space bandwidth product.

    PubMed

    Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B

    2016-06-27

    We present a new method for the generation of a dynamic wave field with a high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs), each having a comparatively low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLMs and the gaps between their active areas. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated. PMID:27410593

  15. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display offers users stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to presenting multiple views is to measure the position of the observer with a viewer-tracking sensor; viewer tracking is a key component for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to project the content optically and accurately onto the user's left and right eyes, this study develops a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. An Edge Orientation Histogram (EOH) feature with Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from Intel's OpenCV library is used to detect the human face, and accuracy is enhanced by rotating the image. The frame rate of the viewer-tracking process reaches up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness, and efficiency of the viewer-tracking system are evaluated in this study.
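
    Since the abstract explicitly mentions Haar-feature AdaBoost face detection from Intel's OpenCV library, the face-detection stage can be sketched with OpenCV's stock frontal-face cascade, as below. Only this single stage is shown; the eigenspace models, block matching, EOH/Real-AdaBoost refinement, and the coupling to the autostereoscopic renderer are not reproduced, and the camera index and display loop are assumptions.

```python
import cv2

# Hedged sketch: Haar-cascade face detection (the OpenCV step mentioned in the
# abstract) as the front end of a viewer tracker. The remaining tracker stages and
# the link to the autostereoscopic renderer are not reproduced here.

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                          # assumed: webcam as tracking sensor
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:                     # the face centre would drive view rendering
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("viewer tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```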

  16. Integral 3D display using multiple LCDs

    NASA Astrophysics Data System (ADS)

    Okaichi, Naoto; Miura, Masato; Arai, Jun; Mishina, Tomoyuki

    2015-03-01

    The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than that possible with one LCD. A prototype of the display device was constructed by using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD display used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display an integral 3D image of higher quality by increasing the number of LCDs. Using this integral 3D display structure makes it possible to make the whole device thinner than a projector-based display system. It is therefore expected to be applied to the home television in the future.
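
    The 3D-image pixel count quoted above is simply the number of elemental lenses, and tiling four HD panels quadruples the available pixels compared with a single panel; the short check below only restates arithmetic already implied by the abstract.

```python
# Arithmetic check of the figures quoted above (no assumptions beyond the abstract).
lenses_h, lenses_v = 212, 119
print(lenses_h * lenses_v)      # 25,228 elemental lenses = the reported 3D image pixel count
# Four HD LCDs provide four times the pixels (and display area) of a single panel,
# which is what makes the reconstructed IP image four times larger.
```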

  17. Glasses-free 3D display based on micro-nano-approach and system

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Ye, Yan; Shen, Su; Pu, Donglin; Chen, Linsen

    2014-11-01

    Micro-nano optics and the digital dot-matrix hologram (DDMH) technique have been combined to encode and fabricate glasses-free 3D images. Two kinds of true-color 3D DDMH have been designed: one design reduces the fabrication complexity and the other enlarges the viewing angle of the 3D DDMH. Chromatic aberration has been corrected using the rainbow hologram technique. A holographic printing system combining interference and projection lithography has been demonstrated. Fresnel lenses and large-viewing-angle 3D DDMHs have been produced, and excellent color performance of the 3D images has been realized.

  18. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    Industrial welded metal bellows take the shape of a flexible pipeline. The most common form of bellows is a pair of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc-welding operation may cause dangerous accidents and unpleasant fumes. Furthermore, during the welding operation, workers have to observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding-rod tip and of the bellows fixed on the jig, respectively. Welding while looking through a microscope is tiring for workers. To improve productivity and a working environment in which workers sit in an uncomfortable position, we introduced a 3D display and image processing. The main purpose of the system is not only to maximize the efficiency and accuracy of industrial production but also to maintain safety standards through full automation of the work by remote control.

  19. Development of a stereoscopic 3D display system to observe restored heritage

    NASA Astrophysics Data System (ADS)

    Morikawa, Hiroyuki; Kawaguchi, Mami; Kawai, Takashi; Ohya, Jun

    2004-05-01

    The authors have developed a binocular-type display system that allows digital archives of cultural assets to be viewed in their actual environment. The system is designed for installation in locations where such cultural assets were originally present. The viewer sees buildings and other heritage items as they existed historically by looking through the binoculars. Images of the cultural assets are reproduced by stereoscopic 3D CG in cyberspace, and the images are superimposed on actual images in real-time. This system consists of stereoscopic CCD cameras that capture a stereo view of the landscape and LCDs for presentation to the viewer. Virtual cameras, used to render CG images from digital archives, move in synchrony with the actual cameras, so the relative position of the CG images and the landscape on which they are superimposed is always fixed. The system has manual controls for digital zoom. Furthermore, the transparency of the CG images can be altered by the viewer. As a case study for the effectiveness of this system, the authors chose the Heijyoukyou ruins in Nara, Japan. The authors evaluate the sense of immersion, stereoscopic effect, and usability of the system.

  20. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. The system is compact, light, modular, and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In that way a spatial object can be displayed occupying a physical space with height, width, and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use screens appropriate to the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are being investigated. The first is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook. The developed software provides a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast and high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer-aided design, and science, as well as entertainment.
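
    The voxel and refresh figures above imply a drawing-speed budget for the acousto-optic deflector; the estimate below is a simple back-of-the-envelope calculation derived from those figures, not a timing quoted in the paper.

```python
# Hedged estimate derived from the quoted figures (10,000 voxels per refresh, 20 Hz):
voxels_per_frame, refresh_hz = 10_000, 20
voxel_rate = voxels_per_frame * refresh_hz         # 200,000 voxels drawn per second
print(f"{voxel_rate:,} voxels/s -> about {1e6 / voxel_rate:.0f} microseconds per voxel")
```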

  1. FELIX: a volumetric 3D laser display

    NASA Astrophysics Data System (ADS)

    Bahr, Detlef; Langhans, Knut; Gerken, Martin; Vogt, Carsten; Bezecny, Daniel; Homann, Dennis

    1996-03-01

    In this paper, an innovative approach to true 3D image presentation in a space-filling, volumetric laser display is described. The introduced prototype system is based on a moving target screen that sweeps the display volume. The net result is the optical equivalent of a 3D array of image points illuminated to form a model of the object that occupies a physical space. Wireframe graphics are presented within the display volume, which a group of people can walk around and examine simultaneously from nearly any orientation and without any visual aids. In addition to the detailed vector-scanning mode, a raster-scanned system and a combination of both techniques are under development. The volumetric 3D laser display technology for true reproduction of spatial images can tremendously improve the viewer's ability to interpret data and to reliably determine distance, shape, and orientation. Possible applications for this development range from air traffic control, where moving blips of light represent individual aircraft in a true-to-scale projection of an airport's airspace, to various medical applications (e.g., electrocardiography, computed tomography), to entertainment and education visualization, as well as imaging in the fields of engineering and computer-aided design.

  2. Flight tests of advanced 3D-PFD with commercial flat-panel avionics displays and EGPWS system

    NASA Astrophysics Data System (ADS)

    He, Gang; Feyereisen, Thea; Gannon, Aaron; Wilson, Blake; Schmitt, John; Wyatt, Sandy; Engels, Jary

    2005-05-01

    This paper describes flight trials of the Honeywell Advanced 3D Primary Flight Display System. The system employs a large-format flat-panel avionics display presently used in Honeywell PRIMUS EPIC flight-deck products and is coupled to an on-board EGPWS system. The heads-down primary flight display consists of dynamic primary-flight attitude information, flight-path and approach symbology similar to Honeywell HUD2020 heads-up displays, and a synthetic 3D perspective-view terrain environment generated with Honeywell's EGPWS terrain data. Numerous flights were conducted on board a Honeywell Citation V aircraft, and a significant amount of pilot feedback was collected, a portion of which is summarized in this paper. The system development is aimed at leveraging several well-established avionics components (HUD, EGPWS, large-format displays) to produce an integrated system that significantly reduces pilot workload, increases overall situation awareness, and is more beneficial to flight operations than is achievable with separate systems.

  3. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons 'seismoglyphs' to suggest visual objects built from the three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using real-time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California as real-time data are piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time, and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual cues to indicate wave arrivals and near-real-time earthquake detection. Future work will involve adding phase detections, network triggers, and near-real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth
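
    The two quantities a seismoglyph encodes, the dominant particle-motion (polarization) direction that sets the glyph's spatial extent and the time of largest amplitude that drives its color, can be derived from three-component data as sketched below. The eigen-decomposition approach and the synthetic trace are illustrative assumptions; the visualization code itself is not reproduced.

```python
import numpy as np

# Hedged sketch: derive the polarization direction (glyph extent) and the time of peak
# amplitude (glyph colour) from three-component ground motion. Illustrative only; the
# actual seismoglyph rendering pipeline is not reproduced here.

def seismoglyph_params(north, east, vertical, dt):
    data = np.vstack([north, east, vertical])       # 3 x N particle-motion samples
    eigvals, eigvecs = np.linalg.eigh(np.cov(data)) # covariance of the particle motion
    polarization = eigvecs[:, np.argmax(eigvals)]   # dominant motion direction
    amplitude = np.linalg.norm(data, axis=0)        # instantaneous vector amplitude
    return polarization, np.argmax(amplitude) * dt  # direction, time of largest amplitude

# Synthetic example: a wavelet polarized mostly east-west, peaking near t = 2 s
t = np.arange(0.0, 4.0, 0.01)
env = np.exp(-((t - 2.0) ** 2) / 0.1)
n, e, z = 0.2 * env * np.sin(40 * t), 1.0 * env * np.sin(40 * t), 0.1 * env * np.sin(40 * t)
print(seismoglyph_params(n, e, z, dt=0.01))
```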

  4. Spectroradiometric characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements have been made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal direction of the plane of the display. In the study, two different models of autostereoscopic 3D displays of different sizes and resolutions were used, with measurements made with a spectroradiometer (PhotoResearch PR-670 SpectraScan). From the measurements, goniometric results were recorded for luminance contrast, and the fundamental hypotheses for characterizing the displays were evaluated: independence of the RGB channels and their constancy. The results show that the display with the lower angular variability in contrast ratio and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with measurement angle. For both displays, when the evaluated parameters were taken into account, the 2D mode consistently showed lower angular variability than the 3D mode.

  5. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on an optically addressed bi-stable display that does not need any power to hold an image after it has been uploaded. Recently, demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve highly complex image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from different domains of the image in different ways. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a suitable ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied in many areas, e.g., 3D bi-stable displays, security elements, etc. PMID:25361316

  6. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3-D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  7. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We report our recent developments in the DFD (depth-fused 3D) display and the arc 3D display, both of which have smooth motion parallax. First, the fatigue-free DFD display, composed of only two layered displays separated by a gap, produces continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called the "edge-based DFD display" and the "deep DFD display", have been proposed in order to overcome two severe limitations: viewing angle and perceived depth. The edge-based DFD display, which layers the original 2D image and its edge component with a gap, can relax the DFD viewing-angle limitation in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Second, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. The curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image inside a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on and off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.
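
    The DFD principle mentioned above, perceived depth controlled by the luminance ratio between the front and rear images, is often summarized by an approximately linear interpolation between the two image planes. The sketch below uses that commonly cited linear model; the papers' exact psychophysical relationship may differ.

```python
# Hedged sketch: the commonly cited (approximately linear) depth-fused 3D model,
# where perceived depth interpolates between the front and rear planes according to
# the luminance ratio. Gap and luminance values are assumed for illustration.

def dfd_perceived_depth(z_front, z_rear, lum_front, lum_rear):
    w_rear = lum_rear / (lum_front + lum_rear)
    return z_front + w_rear * (z_rear - z_front)

print(dfd_perceived_depth(0.0, 30.0, lum_front=70.0, lum_rear=30.0))   # 9.0 (mm behind front plane)
```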

  8. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of complex 3D objects and the spatial relationships among them.

  9. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    NASA Astrophysics Data System (ADS)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. However, in pathless terrain, driving becomes challenging without stereoscopic perception. Stereoscopic imaging has been a well-known technique for a long time, with well-understood physical and physiological parameters. Recently, considerable commercial hype has been observed, especially in display techniques, and the commercial market is already flooded with systems based on goggle-aided 3D viewing techniques. However, their use is limited for military applications, since goggles are not accepted by military users for several reasons. The proposed uncooled thermal-imaging stereoscopic camera with a geometric resolution of 640x480 pixels fits perfectly with the autostereoscopic display of 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and the right eye of the observer. This allows stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  10. Time-sequential autostereoscopic 3-D display with a novel directional backlight system based on volume-holographic optical elements.

    PubMed

    Hwang, Yong Seok; Bruder, Friedrich-Karl; Fäcke, Thomas; Kim, Seung-Cheol; Walze, Günther; Hagen, Rainer; Kim, Eun-Soo

    2014-04-21

    A novel directional backlight system based on volume-holographic optical elements (VHOEs) is demonstrated for time-sequential autostereoscopic three-dimensional (3-D) flat-panel displays. Here, VHOEs are employed to control the direction of light for a time-multiplexed display of the left and right views. The VHOEs are fabricated by recording interference patterns between collimated reference beams and diverging object beams for each of the left and right eyes on a volume holographic recording material. For this, self-developing photopolymer films (Bayfol® HX) were used, since they substantially simplify the manufacturing of VHOEs. The directional lights are similar to the collimated reference beams used to record the VHOEs and create two diffracted beams similar to the object beams used for recording. These diffracted beams read the left and right images alternately shown on the LCD panel and form two converging viewing zones in front of the user's eyes, allowing the user to perceive the 3-D image. Theoretical predictions and experimental results are presented, and the performance of the developed prototype is shown. PMID:24787867

  11. 3D optical see-through head-mounted display based augmented reality system and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenliang; Weng, Dongdong; Liu, Yue; Xiang, Li

    2015-07-01

    The combination of health and entertainment becomes possible thanks to the development of wearable augmented reality equipment and corresponding application software. In this paper, we implemented a fast calibration extended from SPAAM for an optical see-through head-mounted display (OSTHMD) that was made in our lab. During the calibration, tracking and recognition techniques based on natural targets were used, and the corresponding spatial points were set at dispersed, well-distributed positions. We evaluated the precision of this calibration, in which the view angle ranged from 0 to 70 degrees. Relying on these results, we calculated the position of the human eyes relative to the world coordinate system and rendered 3D objects of arbitrary complexity on the OSTHMD in real time, accurately matched to the real world. Finally, we report, through user feedback, the degree of satisfaction with our device for combining entertainment with prevention of cervical vertebra diseases.
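
    The core of a SPAAM-style calibration is collecting 2D screen points that the user has aligned with known 3D world points and solving for a 3x4 projection matrix by direct linear transformation (DLT). The sketch below shows that standard step only; the paper's fast extension and its natural-target tracking are not reproduced, and the function names are illustrative.

```python
import numpy as np

# Hedged sketch: the standard SPAAM-style step -- solve a 3x4 projection matrix from
# 2D-3D correspondences with a DLT / SVD. The paper's "fast" extension is not shown.

def spaam_dlt(points_3d, points_2d):
    """points_3d: list of (X, Y, Z); points_2d: list of (u, v); needs >= 6 pairs."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)                    # projection matrix, defined up to scale

def project(P, point_3d):
    p = P @ np.append(np.asarray(point_3d, dtype=float), 1.0)
    return p[:2] / p[2]                            # reprojected screen coordinates
```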

  12. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light-field optics and its applications have become rather popular these days. With light-field optics, or light-field theory, real 3D space can be described on a 2D plane as 4D data, which we call light-field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light-field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens-array plate. This transformation is reversible, and the acquired light-field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image at a chosen point after the picture has been taken), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light-field data) to a 2D image. In this paper I first show our actual light-field camera and our 3D display using acquired and computer-simulated light-field data, on which a real 3D image is reconstructed. Second, I explain our data-processing method, whose arithmetic is performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens-array plate with the flat display on which the light-field data are displayed.
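
    The refocusing-as-sectioning idea described above is often implemented as a shift-and-add over the sub-aperture views of the 4D light field, which can be done entirely in the spatial domain (the abstract notes that the author's processing avoids the Fourier domain). The sketch below shows that generic shift-and-add; the data layout and the refocus parameter are assumptions, not the author's actual pipeline.

```python
import numpy as np

# Hedged sketch: synthetic refocusing as a shift-and-add over sub-aperture views of
# a 4D light field, performed in the spatial (real) domain. Illustrative only.

def refocus(subviews, alpha):
    """subviews: dict mapping lens coordinates (u, v) to H x W images;
    alpha: refocus parameter (pixel shift per unit of (u, v))."""
    acc = None
    for (u, v), img in subviews.items():
        shift = (int(round(alpha * v)), int(round(alpha * u)))
        shifted = np.roll(img, shift, axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subviews)

# Tiny example: a 3 x 3 grid of sub-aperture views, refocused with alpha = 1.5
views = {(u, v): np.random.rand(32, 32) for u in range(-1, 2) for v in range(-1, 2)}
print(refocus(views, alpha=1.5).shape)             # (32, 32)
```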

  13. Development of an automultiscopic true 3D display (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Pradhan, Ranjit D.; Aye, Tin M.; Yu, Kevin H.; Okorogu, Albert O.; Chua, Kang-Bin; Tun, Nay; Win, Tin; Schindler, Axel

    2005-05-01

    True 3D displays, whether generated by volume holography, merged stereopsis (requiring glasses), or autostereoscopic methods (stereopsis without the need for special glasses), are useful in a great number of applications, ranging from training through product visualization to computer gaming. Holography provides an excellent 3D image but cannot yet be produced in real time, merged stereopsis results in accommodation-convergence conflict (where distance cues generated by the 3D appearance of the image conflict with those obtained from the angular position of the eyes) and lacks parallax cues, and autostereoscopy produces a 3D image visible only from a small region of space. Physical Optics Corporation is developing the next step in real-time 3D displays, the automultiscopic system, which eliminates accommodation-convergence conflict, produces 3D imagery from any position around the display, and includes true image parallax. Theory of automultiscopic display systems is presented, together with results from our prototype display, which produces 3D video imagery with full parallax cues from any viewing direction.

  14. Recent developments in stereoscopic and holographic 3D display technologies

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri

    2014-06-01

    Currently, there is increasing interest in the development of high performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we will review the attributes of the various 3D display technologies including stereoscopic and holographic 3D, human factors issues of stereoscopic 3D, the challenges in realizing Holographic 3D displays and the recent progress in these technologies.

  15. Transparent 3D display for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Hong, Jisoo

    2012-11-01

    Two types of transparent three-dimensional display systems applicable to augmented reality are demonstrated. One of them is a head-mounted-display-type implementation that applies the principle of a concave floating lens to virtual-mode integral imaging. Such a configuration has the advantage that the three-dimensional image can be displayed at a sufficiently far distance, resolving the accommodation conflict with the real-world scene. Incorporating a convex half mirror, which is partially transparent, instead of the concave floating lens makes it possible to implement a transparent three-dimensional display system. The other type is a projection-type implementation, which is more appropriate for general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the transparent-display feature by imposing partial transparency on the concave-mirror array used as the screen of reflection-type integral imaging. Two configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce a concave half-mirror array, whereas the coherent one adopts a holographic optical element that replicates the functionality of the lenslet array. Though the projection-type implementation is in principle preferable to the head-mounted display, the present state of spatial-light-modulator technology still does not provide satisfactory visual quality for the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will reach the market in sequence.

  16. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  17. In memoriam: Fumio Okano, innovator of 3D display

    NASA Astrophysics Data System (ADS)

    Arai, Jun

    2014-06-01

    Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three- Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.

  18. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  19. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems, and so on. 3D imaging display systems come in two presentation types: one is a 3-D display system using special glasses, and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen and cannot enlarge the screen area, for example to double its size. To enlarge the display area, the authors have developed a method of enlarging the display area using a mirror. Our extension method enables observers to see a virtual image plane and doubles the screen area. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier, or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  20. 3-D TV and display using multiview

    NASA Astrophysics Data System (ADS)

    Son, Jung-Young; Kim, Shin-Hwan; Park, Min-Chul; Kim, Sung-Kyu

    2008-04-01

    Current multiview three-dimensional imaging systems are mostly based on a multiview image set. Depending on the methods of presenting and arranging the image set on a display panel or a screen, the systems are basically classified into contact and projection types. The contact type is further classified into MV (multiview), IP (integral photography), multiple-image, FLA (focused light array), and tracking types. The depth cues provided by these types are both binocular and motion parallax. The differences between methods of the same type can be identified only by the composition of the images projected to the viewer's eyes at the viewing regions.

  1. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  2. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  3. Tangible holography: adding synthetic touch to 3D display

    NASA Astrophysics Data System (ADS)

    Plesniak, Wendy J.; Klug, Michael A.

    1997-04-01

    Just as we expect holographic technology to become a more pervasive and affordable instrument of information display, so too will high fidelity force-feedback devices. We describe a testbed system which uses both of these technologies to provide simultaneous, coincident visuo-haptic spatial display of a 3D scene. The system provides the user with a stylus to probe a geometric model that is also presented visually in full parallax. The haptics apparatus is a six degree-of-freedom mechanical device with servomotors providing active force display. This device is controlled by a free-running server that simulates static geometric models with tactile and bulk material properties, all under ongoing specification by a client program. The visual display is a full parallax edge-illuminated holographic stereogram with a wide angle of view. Both simulations, haptic and visual, represent the same scene. The haptic and visual displays are carefully scaled and aligned to provide coincident display, and together they permit the user to explore the model's 3D shape, texture and compliance.

  4. 3D touchable holographic light-field display.

    PubMed

    Yamaguchi, Masahiro; Higashida, Ryo

    2016-01-20

    We propose a new type of 3D user interface: interaction with a light field reproduced by a 3D display. The 3D display used in this work reproduces a 3D light field, and a real image can be reproduced in midair between the display and the user. When using a finger to touch the real image, the light field from the display will scatter. Then, the 3D touch sensing is realized by detecting the scattered light by a color camera. In the experiment, the light-field display is constructed with a holographic screen and a projector; thus, a preliminary implementation of a 3D touch is demonstrated. PMID:26835952
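
    The touch-detection step described above can be illustrated with a short sketch: a camera frame is thresholded to locate the bright spot produced when a finger scatters the reproduced light field. This is a minimal illustration under assumed parameters (the OpenCV pipeline, brightness threshold, and function name are not from the paper).

```python
# Minimal sketch (not the authors' implementation): locate the bright spot
# produced when a finger scatters the light field, using one color-camera frame.
import cv2

def find_touch_point(frame_bgr, brightness_thresh=220):
    """Return the (x, y) pixel centroid of the brightest scatter blob, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)          # largest bright region
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid in pixels
```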

  5. Projection type transparent 3D display using active screen

    NASA Astrophysics Data System (ADS)

    Kamoshita, Hiroki; Yendo, Tomohiro

    2015-05-01

    Equipment for enjoying 3D images, such as movie theaters and televisions, has been widely developed, so 3D video is now a familiar imaging technology. Displays that present 3D images include eyewear-based, naked-eye, and HMD types, which have been used for different applications and locations. Transparent 3D displays, however, have not been widely studied. If a large transparent 3D display were realized, it would be useful for overlaying 3D images on a real scene in applications such as road signs, shop windows, and screens in conference rooms. As a previous study, a transparent 3D display using a special transparent screen and a number of projectors has been proposed; however, smooth motion parallax requires many projectors. In this paper, we propose a display with transparency and a large display area obtained by time-division projection of images from one or a small number of projectors onto an active screen. The active screen is composed of a number of vertically long small rotating mirrors. Stereoscopic viewing is realized by changing the projected image in synchronism with the scanning of the beam, so 3D vision is produced as the light is scanned. The display is also transparent, because the viewer can see through it when the mirrors are perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.

  6. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  7. Active and interactive floating image display using holographic 3D images

    NASA Astrophysics Data System (ADS)

    Morii, Tsutomu; Sakamoto, Kunio

    2006-08-01

    We developed a prototype tabletop holographic display system consisting of an object recognition system and a spatial imaging system. In this paper, we describe the recognition system, which uses an RFID tag, and the 3D display system, which uses holographic technology. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images [1,2,3]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The authors describe an interactive tabletop 3D display system in which the observer can view virtual images when the user puts a special object on the display table. The key technologies of this system are the object recognition system and the spatial imaging display.

  8. A low-resolution 3D holographic volumetric display

    NASA Astrophysics Data System (ADS)

    Khan, Javid; Underwood, Ian; Greenaway, Alan; Halonen, Mikko

    2010-05-01

    A simple low-resolution volumetric display is presented, based on holographic volume segments. The display system comprises a proprietary holographic screen, a laser projector, associated optics and a control unit. The holographic screen resembles a sheet of frosted glass about A4 in size (20 x 30 cm). The holographic screen is rear-illuminated by the laser projector, which is in turn driven by the controller, to produce simple 3D images that appear outside the plane of the screen. A series of spatially multiplexed and interleaved interference patterns are pre-encoded across the surface of the holographic screen. Each illumination pattern is capable of reconstructing a single holographic volume segment. Up to nine holograms are multiplexed on the holographic screen in a variety of configurations, including a series of numeric and segmented digits. The demonstrator produced good results under laboratory conditions, with moving colour 3D images in front of or behind the holographic screen.

  9. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one form of advanced display, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED BLU (backlight unit) since Samsung launched the world's first 3D TV based on PDP. However, the panel's data scanning and the LC response characteristics of LCD TV cause interference among frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control of FHD 3D LCD TV.

  10. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  11. Will true 3d display devices aid geologic interpretation. [Mirage

    SciTech Connect

    Nelson, H.R. Jr.

    1982-04-01

    A description is given of true 3D display devices and techniques that are being evaluated in various research laboratories around the world. These advances are closely tied to the expected application of 3D display devices as interpretational tools for explorationists. 34 refs.

  12. Optical characterization and measurements of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Salmimaa, Marja; Järvenpää, Toni

    2008-04-01

    3D or autostereoscopic display technologies offer attractive solutions for enriching the multimedia experience. However, both characterization and comparison of 3D displays have been challenging, because definitions of consistent measurement methods have been lacking and displays with similar specifications may appear quite different. Earlier we investigated how the optical properties of autostereoscopic (3D) displays can be objectively measured and what the main characteristics defining perceived image quality are. In this paper the discussion is extended to cover viewing freedom (VF), and the definition of the optimum viewing distance (OVD) is elaborated. VF is the volume inside which the eyes have to be to see an acceptable 3D image. The characteristics limiting the VF space are proposed to be 3D crosstalk, luminance difference and color difference. Since 3D crosstalk can be presumed to dominate the quality of the end-user experience, and in our approach it forms the basis for the calculations of the other optical parameters, the reliability of the 3D crosstalk measurements is investigated, and the effect on the derived VF definition is evaluated. We have performed comparative 3D crosstalk measurements with different measurement device apertures, and the effect of different measurement geometries on the results for actual 3D displays is reported.
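
    Since the 3D crosstalk measurement underpins the other optical parameters in this approach, a small numerical sketch may help. The formula below is a commonly used black-corrected crosstalk ratio and is an assumption for illustration; the paper's exact measurement formula is not reproduced here.

```python
# Hedged sketch of a commonly used 3D crosstalk definition (assumed, not
# necessarily the exact formula used in the paper): leakage relative to signal,
# both corrected for the display's black level.
def crosstalk_percent(l_leak, l_signal, l_black):
    """l_leak:   luminance at the eye when only the unintended view is white
       l_signal: luminance at the eye when the intended view is white
       l_black:  luminance when both views are black"""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Illustrative numbers (cd/m^2), not measured values from the paper:
print(f"{crosstalk_percent(2.1, 110.0, 0.4):.2f} %")   # -> 1.55 %
```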

  13. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has been developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, stereoscopic 3D technique has been combined with the curved displays. However, despite the increased needs for 3D function in the curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most of the previous studies have investigated their basic ergonomic aspects such as viewing posture and distance with only 2D views. It has generally been known that curved displays are more effective in enhancing involvement in specific content stories because field of views and distance from the eyes of viewers to both edges of the screen are more natural in curved displays than in flat panel ones. For flat panel displays, ocular torsions may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences in viewing distances from the center of the screen to eyes of viewers and from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  14. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  15. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  16. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only as necessary to ensure good performance.

  17. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources, including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  18. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system, many users can view natural 3D objects at consistent positions and postures at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can produce 3-D images without the contradiction between binocular convergence and focal accommodation.

  19. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  20. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  1. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high-quality homographies between pairs of imagers to develop a globally optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically-diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
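
    As a rough illustration of the pairwise geometry this calibration builds on, the sketch below estimates a homography and a fundamental matrix between two imagers from matched features with OpenCV. The feature detector, matcher and thresholds are illustrative assumptions; the paper's global optimization over the whole array is not shown.

```python
# Illustrative pairwise step only (the paper's global multi-camera optimization
# is not reproduced): homography and fundamental matrix from feature matches.
import cv2
import numpy as np

def pairwise_geometry(img_a, img_b):
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:500]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)    # plane-induced homography
    F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC)  # epipolar geometry
    return H, F
```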

  2. 3D head mount display with single panel

    NASA Astrophysics Data System (ADS)

    Wang, Yuchang; Huang, Junejei

    2014-09-01

    A head-mounted display for entertainment usually only needs to be lightweight, but professional applications have more requirements: image quality, field of view (FOV), color gamut, response and lifetime must also be considered. A head-mounted display based on a 1-chip TI DMD spatial light modulator is proposed. The multiple light sources and the image-splitting relay system are the major design tasks. The relay system images the object (the DMD) onto two image planes to create binocular vision. A 0.65-inch 1080p DMD is adopted. The relay performs well and includes a doublet to reduce chromatic aberration. Space is reserved for a mirror and an adjustment mechanism. The mirror splits the rays to the left and right image planes; these planes correspond to the eyepiece objects and are imaged to the eyes. A changeable mechanism provides a variable interpupillary distance (IPD). The folded optical path keeps the HMD's center of gravity close to the head and prevents an uncomfortable downward force from being applied to the head or orbit. Two RGB LED assemblies illuminate the DMD at different angles. The light is highly collimated, and the divergence angle is small enough that each LED's rays enter only the correct eyepiece. This switching is electronically controlled; there are no moving parts to produce vibration, so fast switching is possible. The two LEDs are synchronized with the 3D video sync by a driving board, which also controls the DMD. When the left-eye image is displayed on the DMD, the LED for the left optical path turns on, and vice versa for the right image, so a 3D scene is accomplished.

  3. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  4. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
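
    The multiple-footprint idea, several matching windows yielding several depth candidates per pixel followed by a robust selection, can be sketched as below. The sum-of-absolute-differences cost, the window sizes and the per-pixel median fusion are simplifying assumptions; the paper's surface-filtering selection is not reproduced.

```python
# Simplified multi-footprint block matching sketch (assumed SAD cost and median
# fusion; the paper's surface-filtering strategy is not reproduced here).
import numpy as np
from scipy.ndimage import uniform_filter

def multi_footprint_disparity(left, right, max_disp=64, windows=(3, 7, 15)):
    """left, right: rectified grayscale images as 2-D float arrays."""
    h, w = left.shape
    candidates = []
    for win in windows:                                  # one "footprint" per window size
        best_cost = np.full((h, w), np.inf)
        best_disp = np.zeros((h, w), dtype=np.float32)
        for d in range(max_disp):
            shifted = np.roll(right, d, axis=1)          # horizontal shift = disparity
            diff = np.abs(left - shifted)                # (wrap-around at the border is
            cost = uniform_filter(diff, size=win)        #  ignored in this sketch)
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_disp[better] = d
        candidates.append(best_disp)
    # robust fusion of the per-footprint candidates (simple per-pixel median here)
    return np.median(np.stack(candidates), axis=0)
```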

  5. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. When two pinholes equipped with color filters are used per eye, the technique can be applied to a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye because of the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that are not possible to display and comfortably view on regular 3DTV and cinema. PMID:25503026

  6. True 3D displays for avionics and mission crewstations

    NASA Astrophysics Data System (ADS)

    Sholler, Elizabeth A.; Meyer, Frederick M.; Lucente, Mark E.; Hopper, Darrel G.

    1997-07-01

    3D threat projection has been shown to decrease human recognition time for events, especially for a jet fighter pilot or C4I sensor operator, for whom the advantage of realizing that a hostile threat condition exists is the basis of survival. Decreased threat recognition time improves the survival rate and results from more effective presentation techniques, including the visual cue of a true 3D (T3D) display. The concept of a 'font' describes the approach adopted here, but whereas a 2D font comprises pixel bitmaps, a T3D font herein comprises a set of hologram bitmaps. The T3D font bitmaps are pre-computed, stored, and retrieved as needed to build images comprising symbols and/or characters. Human performance improvement, hologram generation for a T3D symbol font, projection requirements, and potential hardware implementation schemes are described. The goal is to employ computer-generated holography to create T3D depictions of dynamic threat environments using fieldable hardware.

  7. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
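
    The red/green anaglyph rendering described above can be reproduced today in a few lines: project the point cloud twice with a small horizontal parallax proportional to depth and overlay the projections in red and green. The parallax scale, colors and random data below are illustrative assumptions, not the original 1982 implementation.

```python
# Illustrative red/green anaglyph 3-D scattergram (not the original system).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 5000))         # a synthetic 3-D point distribution
parallax = 0.05 * z                          # horizontal offset grows with depth

fig, ax = plt.subplots()
ax.scatter(x - parallax, y, s=2, c="red", alpha=0.5)     # left-eye view
ax.scatter(x + parallax, y, s=2, c="green", alpha=0.5)   # right-eye view
ax.set_title("Anaglyph scattergram (view with red/green glasses)")
plt.show()
```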

  8. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without using different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve this problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays. Our paper includes an overview from 1912 up to today. During several years of investigations on swept-volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany we were able to achieve considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). In addition, the crystals are limited to a very small size, which is the reason why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to the mentioned group, making it possible to increase both the display volume and the brightness of the images significantly. Although, our display is currently

  9. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated for the distortion in the elemental images for an EHR projection-type integral 3-D system. As a result, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
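
    The electrical compensation described above amounts to pre-warping each projected frame with the inverse of the measured lens distortion so that every elemental image lands on its elemental lens. A generic radial-distortion pre-warp is sketched below; the camera matrix and distortion coefficients are placeholders, not the values measured for the EHR projector.

```python
# Sketch of pre-warping a frame against a radial lens distortion; the intrinsic
# matrix and coefficients below are placeholders, not the paper's measurements.
import cv2
import numpy as np

def prewarp(frame, k1=-0.05, k2=0.0):
    h, w = frame.shape[:2]
    K = np.array([[w, 0, w / 2.0],            # simple pinhole model with the
                  [0, w, h / 2.0],            # optical centre at the image centre
                  [0, 0, 1.0]])
    dist = np.array([k1, k2, 0.0, 0.0])
    # undistorting the frame before projection approximately cancels the
    # distortion that the projection lens will add
    return cv2.undistort(frame, K, dist)
```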

  10. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    PubMed

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

    The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into its real and imaginary distributions, and then two amplitude-only spatial light modulators (A-SLMs) are employed to reconstruct the 3D images in real time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. The optical experiments further demonstrated that the proposed system has continuous and wide depth cues, which frees the observer from eye fatigue. The dynamic display ability was also tested in the experiments, and the results showed the possibility of true 3D interactive display. PMID:27464184

  11. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  12. 3D display design concept for cockpit and mission crewstations

    NASA Astrophysics Data System (ADS)

    Thayn, Jarod R.; Ghrayeb, Joseph; Hopper, Darrel G.

    1999-08-01

    Simple visual cues increase human awareness and perception and decrease reaction times. Humans are visual beings requiring visual cues to warn them of impending danger, especially in combat aviation. The simplest cues are those that allow individuals to immerse themselves in the situations to which they must respond. Two-dimensional (2-D) display technology has real limits on what types of information, and how much information, it can present to the viewer without becoming disorienting or confusing. True situational awareness requires a transition from 2-D to three-dimensional (3-D) display technology.

  13. Study on basic problems in real-time 3D holographic display

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Liu, Juan; Wang, Yongtian; Pan, Yijie; Li, Xin

    2013-05-01

    In recent years, real-time three-dimensional (3D) holographic display has attracted more and more attention. Since a holographic display can entirely reconstruct the wavefront of an actual 3D scene, it can provide all the depth cues for human observation and perception, and it is believed to be the most promising technology for future 3D display. However, there are several unsolved basic problems in realizing large-size real-time 3D holographic display with a wide field of view. For example, commercial pixelated spatial light modulators (SLMs) always lead to zero-order intensity distortion; 3D holographic display needs a huge number of sampling points for the actual objects or scenes, resulting in enormous computational time; the size and the viewing zone of the reconstructed 3D optical image are limited by the space-bandwidth product of the SLM; noise from the coherent light source as well as from the system severely degrades the quality of the 3D image; and so on. Our work is focused on these basic problems, and some initial results are presented, including a technique derived theoretically and verified experimentally to eliminate the zero-order beam caused by a pixelated phase-only SLM; a method to enlarge the reconstructed 3D image and shorten the reconstruction distance using a concave reflecting mirror; and several algorithms to speed up the calculation of computer-generated holograms (CGHs) for the display.
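
    One of the costs mentioned above, the huge number of object sampling points, comes from the point-source hologram sum in which every object point contributes a spherical wave to every hologram pixel. A direct, unaccelerated sketch of that sum is shown below; the wavelength, pixel pitch and phase-only encoding are illustrative assumptions rather than the parameters of the authors' system.

```python
# Direct point-source CGH sum (illustrative; no acceleration applied).
import numpy as np

def point_source_cgh(points, amps, nx=512, ny=512, pitch=8e-6, wavelength=532e-9):
    """points: iterable of (x, y, z) object points in metres, z > 0 in front of the SLM."""
    k = 2.0 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amps):            # O(N * nx * ny): the bottleneck
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r              # spherical wave from each point
    return np.angle(field)                               # phase-only hologram for a phase SLM
```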

  14. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  15. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  16. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

  17. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant factors degrading image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may provide a more accurate evaluation; however, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques that assign different values of crosstalk between image pairs. It can be seen from the literature that the structure of a scene has a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs using the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. Then the structural changes of the left view and right view are computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
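
    The structural-difference step can be sketched with an off-the-shelf SSIM implementation: compute an SSIM map between the original and crosstalk-simulated image for each eye and combine the two into one distortion map. The equal-weight combination below is an assumption; the paper's depth weighting and attention modeling are not reproduced.

```python
# Sketch of the SSIM-based structural-difference step (8-bit grayscale views
# assumed); the paper's depth weighting and attention pooling are not included.
from skimage.metrics import structural_similarity

def crosstalk_distortion_map(left, left_xtalk, right, right_xtalk):
    _, ssim_l = structural_similarity(left, left_xtalk, data_range=255, full=True)
    _, ssim_r = structural_similarity(right, right_xtalk, data_range=255, full=True)
    # low SSIM = strong structural change caused by the simulated crosstalk
    return 1.0 - 0.5 * (ssim_l + ssim_r)     # simple equal-weight combination of both eyes
```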

  18. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-01

    Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from 360 degrees around with the correct occlusion effect. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all the surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate display performance. The prototype has been achieved and real 3D color animated images have been presented vividly. The experimental results verified the representability of this method. PMID:23669981

  19. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views in order to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
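
    The multiplexing model described above, in which each perceived view is the intended view plus weighted leakage from its neighbours, can be written as a small weighted sum over the view index. The sketch below uses placeholder leakage weights; the paper's measured crosstalk parameters and the spectral derivation built on them are not reproduced.

```python
# Sketch of a view-domain crosstalk model: perceived view k is a weighted sum
# of intended view k and its neighbours (weights are placeholders, not measured).
import numpy as np

def perceived_views(views, leak=(0.05, 0.12)):
    """views: array of shape (n_views, H, W); leak[i] weights the (i+1)-th neighbour."""
    n = views.shape[0]                         # assumes n > 2 * len(leak)
    weights = np.zeros(n)
    weights[0] = 1.0
    for i, c in enumerate(leak, start=1):
        weights[i] += c                        # leakage from the neighbour above ...
        weights[-i] += c                       # ... and from the neighbour below
    weights /= weights.sum()                   # keep overall luminance constant
    out = np.zeros_like(views, dtype=np.float64)
    for k in range(n):
        for j in range(n):
            out[k] += weights[(j - k) % n] * views[j]
    return out
```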

  20. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  1. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberrations severely affect the display performance of the auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

  2. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. The system integrates an operating portion, in which the optimum occlusal position is determined by manipulating the entity tooth model, with the 3D-CT skeletal images (the 3D image display portion) that are simultaneously displayed in real time. This makes it possible to determine the mandibular position and posture while taking into account the improvement of skeletal morphology and the occlusal condition. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  3. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  4. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with slightly different perspectives, in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges in optical devices is crosstalk between the two channels. Crosstalk arises because the optical devices do not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye sees a little of the left image. This results in eyestrain and headaches. A pair of interference filters worn as an optical device can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" describes the fact that the passband regions of one filter do not overlap with those of the other; instead, the regions are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) whose spectral bands are the same as the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of one filter are seen only by the eye that has the matching multiband filter. The inherent characteristics of the interference filters allow little or no transmission of the wrong-side stereoscopic image.
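
    The "conjugated" condition, that the passbands of the left-eye and right-eye filters interleave without overlapping, is easy to state programmatically. The band edges below are invented for illustration and are not the actual passbands of the described filters.

```python
# Check that two multiband filters are "conjugated": passbands must not overlap
# and should alternate along the wavelength axis. Band edges (nm) are invented.
def conjugated(bands_left, bands_right):
    merged = sorted(bands_left + bands_right, key=lambda b: b[0])
    for (lo1, hi1), (lo2, hi2) in zip(merged, merged[1:]):
        if hi1 > lo2:                          # overlapping passbands -> crosstalk
            return False
    owners = ["L" if b in bands_left else "R" for b in merged]
    return all(a != b for a, b in zip(owners, owners[1:]))  # interdigitated?

left_filter  = [(430, 445), (510, 525), (600, 615)]
right_filter = [(455, 470), (535, 550), (630, 645)]
print(conjugated(left_filter, right_filter))   # True
```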

  5. Clinical evaluation of accommodation and ocular surface stability relevant to visual asthenopia with 3D displays

    PubMed Central

    2014-01-01

    Background To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but yielded no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The change of NPA, OPI, and total ocular symptom scores after 3D viewing had a significant correlation (p < 0.05), implying direct associations among these factors. Conclusions The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
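
    For reference, the ocular protection index used in this record is a simple ratio; the short sketch below (Python, with illustrative values rather than the study's data) shows the calculation of OPI = TBUT / IBI:

      # Ocular protection index as defined in the abstract: tear breakup time divided by
      # the interblink interval. The numeric values are made up for illustration.
      def ocular_protection_index(tbut_s, interblink_s):
          return tbut_s / interblink_s

      # OPI >= 1 suggests the tear film survives between blinks; OPI < 1 suggests exposure.
      print(ocular_protection_index(tbut_s=6.0, interblink_s=4.0))   # 1.5
      print(ocular_protection_index(tbut_s=3.0, interblink_s=6.0))   # 0.5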

  6. Future of photorefractive based holographic 3D display

    NASA Astrophysics Data System (ADS)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

    The very first demonstration of our refreshable holographic display based on photorefractive polymer was published in Nature in early 2008. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full-color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video rate.

  7. Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Wilhelm, Dirk; Reiser, Silvano; Kohn, Nils; Witte, Michael; Leiner, Ulrich; Mühlbach, Lothar; Ruschin, Detlef; Reiner, Wolfgang; Feussner, Hubertus

    2014-03-01

    Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations, such as eye strain, diplopia and blur, have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study on 48 test subjects was conducted to investigate whether surgeons can benefit from using the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display and a mirror-based theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert surgeon group with high experience in laparoscopic procedures. Each of them had to conduct a comparable laparoscopic suturing task. Multiple performance parameters such as task completion time and the precision of stitching were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity and economy. The NASA task load index was used to assess the mental workload. Subjective ratings were added to assess usability, comfort and image quality of each display. Almost all performance parameters were superior for the 3D glasses-based display as compared to the 2D and the autostereoscopic one, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence of additional potential for 3D visualization systems with higher resolution and motion parallax presentation.

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system presented exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of the parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the correct images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace an eyewear-assisted counterpart. PMID:26074575
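
    The following sketch (Python) is a loose illustration, not the authors' software, of the eye-tracking idea in this record: given tracked eye positions, only the pixel columns carrying the two views nearest the eyes stay lit. The view count, zone pitch, and column-to-view mapping below are all hypothetical:

      # Loose illustration of eye-tracked pixel activation for a multiview parallax-barrier
      # display. All parameters are hypothetical.
      N_VIEWS = 9                      # hypothetical number of views
      VIEW_PITCH_MM = 32.5             # hypothetical lateral width of one viewing zone
                                       # at the optimum viewing distance

      def view_index(eye_x_mm):
          """Map a tracked lateral eye position (mm) to the nearest view index."""
          return int(round(eye_x_mm / VIEW_PITCH_MM)) % N_VIEWS

      def active_columns(left_eye_x, right_eye_x, n_columns):
          """Columns to keep lit: those assigned to the two views nearest the eyes."""
          wanted = {view_index(left_eye_x), view_index(right_eye_x)}
          # Assumed mapping: column c carries view (c mod N_VIEWS).
          return [c for c in range(n_columns) if c % N_VIEWS in wanted]

      cols = active_columns(left_eye_x=0.0, right_eye_x=65.0, n_columns=27)
      print(cols)   # only the columns for the two tracked views; the rest stay dark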

  11. Low-cost approach of a 3D display for general aviation aircraft

    NASA Astrophysics Data System (ADS)

    Sachs, Gottfried; Sperl, Roman; Karl, Wunibald

    2001-08-01

    A low-cost 3D display and navigation system is described which presents guidance information in a 3-dimensional format to the pilot. For achieving the low-cost goal, commercial off-the-shelf components are used. The visual information provided by the 3D display includes a presentation of the future flight path and other guidance elements as well as an image of the outside world. For generating the displayed information, a PC will be used. Appropriate computer software is available to generate the displayed information in real time with an adequately high update rate. Precision navigation data, which are required for accurately adjusting the displayed guidance information, are provided by an integrated low-cost navigation system. This navigation system consists of a differential global positioning system and an inertial measurement unit. Data from the navigation system are fed into an onboard computer, which uses terrain elevation and feature analysis data to generate a synthetic image of the outside world. The system is intended to contribute to the safety of General Aviation aircraft, providing an affordable guidance and navigation aid for this type of aircraft. The low-cost 3D display and navigation system will be installed in a two-seat Grob 109B aircraft which is operated by the Institute of Flight Mechanics and Flight Control of the Technische Universität München as a research vehicle.

  12. Dual side transparent OLED 3D display using Gabor super-lens

    NASA Astrophysics Data System (ADS)

    Chestak, Sergey; Kim, Dae-Sik; Cho, Sung-Woo

    2015-03-01

    We devised a dual-side transparent 3D display using a transparent OLED panel and two lenticular arrays. The OLED panel is sandwiched between two parallel confocal lenticular arrays, forming a Gabor super-lens. The display provides dual-side stereoscopic 3D imaging and a floating image of the object placed behind it. The floating image can be superimposed on the displayed 3D image. The displayed autostereoscopic 3D images are composed of 4 views, each with a resolution of 64×90 pixels.

  13. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. Steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rate. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good quality in presentation, various stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show that there is a significant difference (p-value < 0.05) between large and small disparity angles, and that the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. The 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from SSVEP responses, and modify the proper disparity of 3D images automatically in the future.

  14. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  15. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images in an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such a 3D display screen has been restricted to a limited size due to the time-consuming process of fabricating nano-gratings on the phase plate. In this paper, we proposed and developed a lithography system that can fabricate the phase plate efficiently. Here we made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared to the method of E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal axis and the vertical axis was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the simulated value of 1.2 degrees obtained by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, in consistency with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provided the key fabrication technology for multiview 3D holographic display. PMID:27136814
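
    The dependence of each nano-grating pixel on a target view direction can be illustrated with the ordinary grating equation; the sketch below (Python) uses generic geometry assumptions (normal incidence, first diffraction order) rather than the paper's design files:

      # Period and orientation of a nano-grating pixel from a desired first-order deflection,
      # assuming normal incidence. Geometry and values are illustrative only.
      import math

      def grating_for_view(wavelength_nm, theta_x_deg, theta_y_deg):
          """Return (period_nm, orientation_deg) steering first-order light to (theta_x, theta_y)."""
          sx = math.sin(math.radians(theta_x_deg))
          sy = math.sin(math.radians(theta_y_deg))
          s = math.hypot(sx, sy)                          # |sin(theta)| of the total deflection
          period = wavelength_nm / s                      # grating equation: sin(theta) = lambda / period
          orientation = math.degrees(math.atan2(sy, sx))  # grating vector points toward the view zone
          return period, orientation

      # Example: steer green light 10 degrees right and 5 degrees up.
      print(grating_for_view(532.0, 10.0, 5.0))   # period ~ 2740 nm, orientation ~ 27 deg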

  16. Web-based intermediate view reconstruction for multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Kyu; Lee, Won-Kyung; Ko, Jung-Hwan; Bae, Kyung-hoon; Kim, Eun-Soo

    2005-08-01

    In this paper, web-based intermediate view reconstruction for a multiview stereoscopic 3D display system is proposed, using stereo cameras and disparity maps, an Intel Xeon server computer system and Microsoft's DirectShow programming library; its performance is analyzed in terms of image-grabbing frame rate and number of views. In the proposed system, stereo images are initially captured by stereo digital cameras and then processed in the Intel Xeon server computer system. The captured two-view image data are compressed by extracting the disparity data between them and transmitted to a client system through the information network, where the received stereo data are displayed on a 16-view stereoscopic 3D display system by using intermediate view reconstruction. The program for controlling the overall system is developed based on the Microsoft DirectShow SDK. From some experimental results, it is found that the proposed system can display 16-view 3D images with 8-bit grayscale and a frame rate of 15 fps in real time.
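
    A minimal sketch of disparity-based intermediate view reconstruction, the operation at the heart of this record, is shown below (Python with NumPy); it is a generic forward-warping example, not the authors' DirectShow implementation:

      # Generic intermediate view synthesis: shift left-image pixels by a fraction of their
      # disparity to approximate a viewpoint between the two cameras.
      import numpy as np

      def intermediate_view(left, disparity, alpha):
          """Forward-warp `left` (H x W x C) by alpha * disparity (H x W, in pixels)."""
          h, w = disparity.shape
          out = np.zeros_like(left)
          filled = np.zeros((h, w), dtype=bool)
          xs = np.arange(w)
          for y in range(h):
              x_new = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
              out[y, x_new] = left[y, xs]          # on collisions, the right-most source wins
              filled[y, x_new] = True
          # Naive hole filling: copy the nearest filled pixel from the left.
          for y in range(h):
              for x in range(1, w):
                  if not filled[y, x]:
                      out[y, x] = out[y, x - 1]
          return out

      left = np.random.rand(4, 8, 3)
      disp = np.full((4, 8), 2.0)                  # constant 2-pixel disparity for illustration
      mid = intermediate_view(left, disp, alpha=0.5)   # halfway between left and right cameras
      print(mid.shape)                             # (4, 8, 3)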

  17. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects grows in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated.

  18. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  19. IPMC actuator array as a 3D haptic display

    NASA Astrophysics Data System (ADS)

    Nakano, Masanori; Mazzone, Andrea; Piffaretti, Filippo; Gassert, Roger; Nakao, Masayuki; Bleuler, Hannes

    2005-05-01

    Based on the concept of Mazzone et al., we have designed a novel system to be used simultaneously as an input and output device for designing, presenting, or recognizing objects in three-dimensional space. Unlike state-of-the-art stereoscopic display technologies that generate a virtual image of a three-dimensional object, the proposed system, a "digital clay"-like device, physically imitates the desired object. The object can not only be touched and explored intuitively but also deforms itself physically. In order to develop such a deformable structure, self-actuating ionic polymer-metal composite (IPMC) materials are proposed. IPMC is a type of electroactive polymer (EAP) and has recently been drawing much attention. It has a high force-to-weight ratio and shape flexibility, making it ideal for robotic applications. This paper introduces the first steps and results in the attempt to develop such a structure. A strip consisting of four actuators arranged in line was fabricated and evaluated, showing promising capabilities in deforming two-dimensionally. A simple model to simulate the deformation of an IPMC actuator using finite element methods (FEM) is also proposed and compared with the experimental results. The model can easily be implemented in computer-aided engineering (CAE) software. This will expand the application possibilities of IPMCs. Furthermore, a novel method for creating multiple actuators on one membrane with a laser machining tool is introduced.

  20. Development and test of a low-cost 3D display for small aircraft

    NASA Astrophysics Data System (ADS)

    Sachs, Gottfried; Sperl, Roman; Nothnagel, Klaus

    2002-07-01

    A low-cost 3D display and navigation system providing guidance information in a 3-dimensional format is described. The system, which includes an LC display, a PC-based computer for generating the 3-dimensional guidance information, and a navigation system providing D/GPS- and inertial-sensor-based position and attitude data, was realized using commercial off-the-shelf components. Efficient computer software has been developed to generate the 3-dimensional guidance information with a high update rate. The guidance concept comprises an image of the outside world as well as a presentation of the command flight path, a predictor and other guidance elements in a 3-dimensional format.

  1. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies have shown enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has been under way concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored ground proximity warning system (EGPWS) data, weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises whether and how the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, in a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  2. Display system

    NASA Technical Reports Server (NTRS)

    Story, A. W. (Inventor)

    1973-01-01

    A situational display and a means for creating the display are disclosed. The display comprises a moving line or raster, on a cathode ray tube, which is disposed intermediate of two columns of lamps or intensifications on the cathode ray tube. The raster and lights are controlled in such a manner that pairs of lights define a line which is either tracked or chased by the raster in accordance with the relationship between the optimum and actual values of a monitored parameter.

  3. Special subpixel arrangement-based 3D display with high horizontal resolution.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wu, Fei

    2014-11-01

    A special subpixel arrangement-based 3D display is proposed. This display consists of a 2D display panel and a parallax barrier. On the 2D display panel, subpixels have a special arrangement, so they can redefine the formation of color pixels. This subpixel arrangement can bring about triple horizontal resolution for a conventional 2D display panel. Therefore, when these pixels are modulated by the parallax barrier, the 3D images formed also have triple horizontal resolution. A prototype of this display is developed. Experimental results show that this display with triple horizontal resolution can produce a better display effect than the conventional one. PMID:25402897

  4. Assessment of eye fatigue caused by 3D displays based on multimodal measurements.

    PubMed

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-01-01

    With the development of 3D displays, user's eye fatigue has been an important issue when viewing these displays. There have been previous studies conducted on eye fatigue related to 3D display use; however, most of these have employed a limited number of modalities for measurements, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements, compared to previous works. Our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlation between the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are second, third, and fourth highest, respectively. PMID:25192315

  5. Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements

    PubMed Central

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-01-01

    With the development of 3D displays, user's eye fatigue has been an important issue when viewing these displays. There have been previous studies conducted on eye fatigue related to 3D display use; however, most of these have employed a limited number of modalities for measurements, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements, compared to previous works. Our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlation between the EEG signal, eye BR, FT, and the SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are second, third, and fourth highest, respectively. PMID:25192315

  6. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of the real-time 3D holographic display developed last year, we developed a new concept for an autostereoscopic, multiview (64 views), wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezoelectric transducer generating a variable standing acoustic wave on the crystal, which acts as a phase grating. The DMD projects in fast sequence 64 points of view of the image onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different angle of view. A holographic screen at a proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed to the observer. A telescope optical system enlarges the image to the right dimension. VHDL firmware to render in real time (16 ms) 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3ds) and depth-map-encoded video images was developed in the resident Virtex5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.
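
    The abstract's figures imply a simple timing budget for the time-multiplexed views; the sketch below (Python) works through that arithmetic, with the bit-plane example being an assumption rather than a detail taken from the record:

      # Timing budget implied by a DMD at 24,000 binary patterns per second multiplexed
      # over 64 views. The bit-plane figure is an assumption for illustration.
      dmd_fps = 24_000          # binary patterns per second (from the abstract)
      n_views = 64

      full_refresh_hz = dmd_fps / n_views
      print(full_refresh_hz)    # 375 Hz if each view gets one binary pattern per cycle

      # With bit-plane grayscale, each extra bit plane divides the refresh rate further,
      # e.g. 6 bit planes per view would leave 24000 / (64 * 6) = 62.5 Hz.
      print(dmd_fps / (n_views * 6))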

  7. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images. PMID:27410090
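
    The global remapping step described here can be illustrated with a simple linear remap; the sketch below (Python with NumPy) is a deliberate simplification and omits the paper's two-stage global/local optimization:

      # Simplified global depth remapping: squeeze the source depth range into a target
      # range assumed to be comfortable around a chosen zero-disparity plane.
      import numpy as np

      def remap_depth(depth, target_min, target_max):
          """Linearly remap a depth map into [target_min, target_max]."""
          d_min, d_max = float(depth.min()), float(depth.max())
          t = (depth - d_min) / max(d_max - d_min, 1e-9)
          return target_min + t * (target_max - target_min)

      depth = np.random.randint(0, 256, size=(4, 6)).astype(float)   # arbitrary 8-bit depth
      # Hypothetical comfort range in the same depth units, centred on the new
      # zero-disparity plane (here, 128).
      comfortable = remap_depth(depth, target_min=96, target_max=160)
      print(comfortable.min(), comfortable.max())                    # 96.0 160.0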

  8. Integration of multiple view plus depth data for free viewpoint 3D display

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji

    2014-03-01

    This paper proposes a method for constructing a reasonable-scale end-to-end free-viewpoint video system that captures multiple view and depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, multiple view plus depth data at four viewpoints are captured by the Kinect sensors simultaneously. Then, the captured data are integrated into point cloud data by using the camera parameters. The obtained point cloud data are sampled into volume data consisting of voxels. Since the volume data generated from point cloud data are sparse, those data are made dense by using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data by the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving the depth maps is also presented.
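
    The first integration step, back-projecting a depth map into a point cloud using the camera parameters, can be sketched as follows (Python with NumPy); the pinhole intrinsics used are assumed values, not the authors' calibration:

      # Back-project a depth map to a point cloud with a pinhole camera model.
      # Intrinsics below are typical assumed values, not calibration data from the paper.
      import numpy as np

      def depth_to_points(depth_m, fx, fy, cx, cy):
          """Back-project an H x W depth map (metres) to an (H*W) x 3 point cloud."""
          h, w = depth_m.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          x = (u - cx) * depth_m / fx
          y = (v - cy) * depth_m / fy
          pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
          return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

      depth = np.full((480, 640), 1.2)       # a flat wall 1.2 m away, for illustration
      cloud = depth_to_points(depth, fx=580.0, fy=580.0, cx=319.5, cy=239.5)
      print(cloud.shape)                     # (307200, 3)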

  9. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. Besides, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures depending on those detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  10. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component not only with normal 3D display functions but also with multi-modal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance and a comfortable application experience. With this system, the radiologists and the clinicians can manipulate 3D images easily, and use the advanced visualization tools to facilitate their work with a PACS display workstation at any time.

  11. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

    We examined the ability of pilots to estimate traffic location in an Integrated Hazard Display, and how such estimations should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside-world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split-screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  12. Investigation of a 3D head-mounted projection display using retro-reflective screen.

    PubMed

    Héricz, Dalma; Sarkadi, Tamás; Lucza, Viktor; Kovács, Viktor; Koppa, Pál

    2014-07-28

    We propose a compact head-worn 3D display which provides glasses-free full motion parallax. Two picoprojectors placed on the viewer's head project images on a retro-reflective screen that reflects left and right images to the appropriate eyes of the viewer. The properties of different retro-reflective screen materials have been investigated, and the key parameters of the projection - brightness and cross-talk - have been calculated. A demonstration system comprising two projectors, a screen tracking system and a commercial retro-reflective screen has been developed to test the visual quality of the proposed approach. PMID:25089403

  13. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  14. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.
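
    Choosing the focused plane from an integral capture is commonly done by shift-and-sum over the viewpoint images; the sketch below (Python with NumPy) illustrates that standard operation and is not the authors' exact transformation:

      # Standard shift-and-sum refocusing over a toy grid of viewpoint images.
      import numpy as np

      def refocus(views, slope):
          """views: dict {(du, dv): H x W image} keyed by viewpoint offset; slope picks the plane."""
          acc = None
          for (du, dv), img in views.items():
              shifted = np.roll(np.roll(img, int(round(slope * dv)), axis=0),
                                int(round(slope * du)), axis=1)
              acc = shifted if acc is None else acc + shifted
          return acc / len(views)

      # Toy 3x3 grid of viewpoint images.
      views = {(du, dv): np.random.rand(32, 32)
               for du in (-1, 0, 1) for dv in (-1, 0, 1)}
      near_plane = refocus(views, slope=2.0)   # larger |slope| focuses on nearer objects
      far_plane = refocus(views, slope=0.0)    # slope 0 keeps the captured focal plane
      print(near_plane.shape, far_plane.shape)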

  15. A guide for human factors research with stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Pinkus, Alan R.

    2015-05-01

    In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
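
    As an example of the disparity bookkeeping the guide recommends, the sketch below (Python) computes on-screen parallax and angular disparity from viewing distance, inter-pupillary distance, and intended virtual depth; the numeric values are merely illustrative:

      # Screen parallax and angular disparity from simple stereo viewing geometry.
      import math

      def screen_parallax_mm(ipd_mm, view_dist_mm, virtual_dist_mm):
          """On-screen separation of the left/right images of a point at virtual_dist_mm.
          Positive = uncrossed (behind the screen), negative = crossed (in front)."""
          return ipd_mm * (virtual_dist_mm - view_dist_mm) / virtual_dist_mm

      def angular_disparity_deg(ipd_mm, view_dist_mm, virtual_dist_mm):
          """Vergence difference between the screen plane and the virtual point."""
          conv_screen = 2 * math.atan(ipd_mm / (2 * view_dist_mm))
          conv_point = 2 * math.atan(ipd_mm / (2 * virtual_dist_mm))
          return math.degrees(conv_screen - conv_point)

      ipd, d_view = 63.0, 700.0                        # typical IPD, desktop viewing distance (mm)
      print(screen_parallax_mm(ipd, d_view, 900.0))    # 14.0 mm uncrossed parallax
      print(angular_disparity_deg(ipd, d_view, 900.0)) # ~1.14 deg, near common comfort limits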

  16. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  17. Front and rear projection autostereoscopic 3D displays based on lenticular sheets

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Zang, Shang-Fei; Qi, Lin

    2015-03-01

    A front-projection autostereoscopic display is proposed. The display is composed of eight projectors and a 3D-image-guided screen which has a lenticular sheet and a retro-reflective diffusion screen. Based on optical multiplexing and de-multiplexing, the optical functions of the 3D-image-guided screen are parallax image interlacing and view separating, which makes it capable of reconstructing 3D images without quality degradation from the front direction. The operating principle, optical design calculation equations and correction method for the parallax images are given. A prototype of the front-projection autostereoscopic display is developed, which enhances the brightness and 3D perception, and improves space efficiency. The performance of this prototype is evaluated by measuring the luminance and crosstalk distribution along the horizontal direction at the optimum viewing distance. We also propose a rear-projection autostereoscopic display. The display consists of eight projectors, a projection screen, and two lenticular sheets. The operating principle and calculation equations are described in detail, and the parallax images are corrected by means of homography. A prototype of the rear-projection autostereoscopic display is developed. The normalized luminance distributions of the viewing zones from the measurement are given. Results agree well with the designed values. The prototype presents high-resolution and high-brightness 3D images. The research has potential applications in commercial entertainment and movies for realistic 3D perception.

  18. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has now become widely known as a familiar imaging technique. 3D displays have been put into practical use in various fields, such as the entertainment and medical fields, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; among them, we focused on the method that reproduces light rays. This method needs many viewpoint images to achieve full parallax, because it displays a different viewpoint image depending on the viewpoint. We proposed to reduce wasteful rays by limiting the projector's rays to the area around the viewer only, using a spinning mirror, and thereby to increase the effectiveness of the display device in achieving a full-parallax 3D display. We propose a method that uses tracking of the viewer's eyes, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. For the proposed method, in simulation we confirmed the scanning range and the locus of the movement of the rays in the horizontal direction. In addition, we confirmed the switching of the viewpoints and the convergence performance of the rays in the vertical direction. Therefore, we confirmed that it is possible to realize a full-parallax display.

  19. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still-photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test-bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  20. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects, using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method, and peculiarities of the formation of the shadow and images of typical elements of extended objects, were suggested. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic system COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  1. Fast-response switchable lens for 3D and wearable displays.

    PubMed

    Lee, Yun-Han; Peng, Fenglin; Wu, Shin-Tson

    2016-01-25

    We report a switchable lens in which a twisted nematic (TN) liquid crystal cell is utilized to control the input polarization. Different polarization states lead to different path lengths in the proposed optical system, which in turn result in different focal lengths. This type of switchable lens has the advantages of fast response time, low operation voltage, and inherently lower chromatic aberration. Using a pixelated TN panel, we can assign depth information to selected pixels and thus add depth information to a 2D image. By cascading three such device structures together, we can generate 8 different focal states for 3D displays, wearable virtual/augmented reality, and other head-mounted display devices. PMID:26832545

  2. Recent research results in stereo 3-D pictorial displays at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.

    1990-01-01

    Recent results from a NASA-Langley program which addressed stereo 3D pictorial displays from a comprehensive standpoint are reviewed. The program dealt with human factors issues and display technology aspects, as well as flight display applications. The human factors findings include addressing a fundamental issue challenging the application of stereoscopic displays in head-down flight applications, with the determination that stereoacuity is unaffected by the short-term use of stereo 3D displays. While stereoacuity has been a traditional measurement of depth perception abilities, it is a measure of relative depth, rather than actual depth (absolute depth). Therefore, depth perception effects based on size and distance judgments and long-term stereo exposure remain issues to be investigated. The applications of stereo 3D to pictorial flight displays within the program have repeatedly demonstrated increases in pilot situational awareness and task performance improvements. Moreover, these improvements have been obtained within the constraints of the limited viewing volume available with conventional stereo displays. A number of stereo 3D pictorial display applications are described, including recovery from flight-path offset, helicopter hover, and emulated helmet-mounted display.

  3. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of the view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which numerical simulation of light from the display panel pixels through the PB slits to the viewing zone is performed. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters that gives a Fresnel number of ∼ 0.7 offers maximized brightness of the view images, while that corresponding to a Fresnel number of 0.4 ∼ 0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of the relative magnitude of the brightness to the crosstalk and leads to the choice of a display parameter set for the HD-MVA3D with a PB which satisfies the condition where the Fresnel number lies between 0.4 and 0.7. PMID:26907057
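
    Taking the conventional definition of the Fresnel number, N_F = a²/(λL) with a the slit half-aperture and L the propagation distance, the sketch below (Python) inverts it to see what slit sizes the quoted optima would correspond to; which distance plays the role of L in the paper's model is an assumption here, and the numbers are illustrative only:

      # Conventional Fresnel number for a slit, and its inverse; geometry values are assumed.
      def fresnel_number(half_aperture_um, wavelength_nm, distance_mm):
          a = half_aperture_um * 1e-6
          lam = wavelength_nm * 1e-9
          L = distance_mm * 1e-3
          return a * a / (lam * L)

      def slit_half_aperture_for(n_fresnel, wavelength_nm, distance_mm):
          """Invert N_F = a^2 / (lambda * L) to get the slit half-aperture in micrometres."""
          lam = wavelength_nm * 1e-9
          L = distance_mm * 1e-3
          return (n_fresnel * lam * L) ** 0.5 * 1e6

      # Brightness-optimal (N_F ~ 0.7) vs. crosstalk-optimal (N_F ~ 0.45) half-apertures for a
      # hypothetical 3 mm pixel-to-barrier gap at 550 nm:
      print(slit_half_aperture_for(0.7, 550, 3.0))    # ~34 um
      print(slit_half_aperture_for(0.45, 550, 3.0))   # ~27 um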

  4. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT

    NASA Astrophysics Data System (ADS)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to crosstalk between stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying in 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high-quality 3D visualization at PC price points. Optimizations in the display driver, panel timing firmware, backlight hardware, eyewear optical stack, and synchronization mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could benefit greatly from the following calls to action: 1) adopt an 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) adopt an 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) adopt an 'IA-SIT Real Time Profile' for sub-100 µs latency control (via BT SIG) to extend Bluetooth into S3D; and 4) adopt the 'IA-SIT Architecture' for monitors and TVs to monetize via PC attach.

  5. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35×35×105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the biological specimen's image can be captured in a single shot for ease of use. With the light field raw data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm to precisely distinguish depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules in order to enhance pixel-use efficiency and reduce the crosstalk between microlenses to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescence particles separated by a cover glass in a 600 µm range, and show its focal stacks and 3-D positions.

  6. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Astrophysics Data System (ADS)

    Ericson, Mark; McKinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.

  7. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

    We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, the standards should be based not only on the mechanisms of 3D displays, but also on human factors for stereopsis. However, we think that there is no common understanding of what a 3D display should be and that this situation makes developing the standards difficult. In this paper, to understand the mechanism and human factors, we focus on a double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, we consider that whether the double image is unwanted or not depends on the situation and that some double images are allowable. We tried to classify double images into the unwanted and the allowable in terms of the display mechanism and visual ergonomics for stereopsis. The issues associated with the double image are closely related to performance characteristics of the autostereoscopic display. We also propose performance characteristics and measurement and analysis methods to represent interocular crosstalk and motion parallax.

  8. Three-dimensional hologram display system

    NASA Technical Reports Server (NTRS)

    Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)

    2009-01-01

    The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.

  9. 3D Multifunctional Ablative Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  10. Color and brightness uniformity compensation of a multi-projection 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Park, Du-Sik

    2015-09-01

    Light-field displays are good candidates in the field of glasses-free 3D display for showing real 3D images without decreasing the image resolution. Light-field displays can create light rays using a large number of projectors in order to express the natural 3D images. However, in light-field displays using multi-projectors, the compensation is very critical due to different characteristics and arrangement positions of each projector. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro projectors for immersive natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color and brightness uniformity compensation methods are utilized along with an improved projector configuration design and a real-time calibration process of projector alignment. For color uniformity compensation, projected images from each projector are captured by a camera arranged in front of the screen, the number of pixels based on RGB color intensities of each captured image is analyzed, and the distributions of RGB color intensities are adjusted by using the respective maximum values of RGB color intensities. For brightness uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function, and compensating weights of each screen pixel are calculated and transferred to the projection images by the mapping relationship between the screen and projector coordinates. Finally, brightness compensated images are rendered for each projector. Consequently, the display shows improved color and brightness uniformity, and consistent, exceptional 3D image quality.
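
    As a rough illustration of the brightness-compensation idea, the sketch below models the measured screen brightness with Gaussian radial basis functions fitted to camera samples and derives per-pixel gains that flatten the field. The function names, the Gaussian kernel choice, and the sigma value are assumptions for illustration, not the authors' implementation; the resulting weights would still have to be mapped into each projector's image via the screen-to-projector coordinate mapping described above.

```python
import numpy as np

def gaussian_rbf_model(samples_xy, samples_b, grid_xy, sigma=40.0):
    """Smooth brightness model over the screen from sparse camera samples.

    samples_xy : (N, 2) screen coordinates of measured points
    samples_b  : (N,)   measured brightness at those points
    grid_xy    : (M, 2) screen coordinates at which to evaluate the model
    """
    d2 = ((grid_xy[:, None, :] - samples_xy[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))              # (M, N) RBF design matrix
    d2s = ((samples_xy[:, None, :] - samples_xy[None, :, :]) ** 2).sum(-1)
    phi_s = np.exp(-d2s / (2.0 * sigma ** 2))
    w = np.linalg.solve(phi_s + 1e-6 * np.eye(len(samples_b)), samples_b)
    return phi @ w                                        # modeled brightness on the grid

def compensation_weights(modeled_b, target=None):
    """Per-pixel gain that flattens the modeled brightness field."""
    if target is None:
        target = modeled_b.min()                          # flatten toward the dimmest region
    return np.clip(target / np.maximum(modeled_b, 1e-6), 0.0, 1.0)

# Example with synthetic measurements on a 100x100 pixel screen patch.
samples_xy = np.random.uniform(0, 100, size=(50, 2))
samples_b = 0.6 + 0.4 * np.exp(-((samples_xy - 50.0) ** 2).sum(-1) / 2000.0)
yy, xx = np.mgrid[0:100, 0:100]
grid_xy = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
model = gaussian_rbf_model(samples_xy, samples_b, grid_xy).reshape(100, 100)
weights = compensation_weights(model)
```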

  11. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area is located along a 37-mile stretch of the Assateague Island National Seashore (AINS) on the Eastern Shore of Virginia. DEM data sets from 1996 through 2000 were created for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and the full four-year span (1996-2000). The spatial patterns and volumetric amounts of erosion and deposition were calculated on a cell-by-cell basis. A 3D dynamic display system using ArcView Avenue for visualizing dynamic coastal landforms has been developed. The system comprises five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate that further study and comparison of the complex morphological changes, whether naturally occurring or human-induced, on barrier islands is required.

  12. Novel volumetric 3D display based on point light source optical reconstruction using multi focal lens array

    NASA Astrophysics Data System (ADS)

    Lee, Jin su; Lee, Mu young; Kim, Jun oh; Kim, Cheol joong; Won, Yong Hyub

    2015-03-01

    Generally, volumetric 3D displays produce volume-filling three-dimensional images. This paper discusses a volumetric 3D display based on the optical reconstruction of periodic point light sources (PLSs) using a multi-focal lens array (MFLA). The voxels of discrete 3D images are formed in the air by constructing the point light sources emitted through the multi-focal lens array. The system consists of a parallel beam, a spatial light modulator (SLM), a lens array, and a polarizing filter. The multi-focal lens array is made by controlling UV-adhesive polymer droplets with a dispensing machine. The MFLA consists of a 20×20 circular lens array, and each lens aperture of the MFLA is about 300 µm on average. The polarizing filter is placed after the SLM and the MFLA to set the phase-mostly mode. Through the point spread function, the PLSs of the system are located at the focal length of each lens of the MFLA. The display can also provide motion parallax and relatively high resolution; however, it has limits on viewing angle and crosstalk due to the properties of each lens. In our experiment, we present the letters 'C', 'O', and 'DE' and a ball's surface at different depth locations. They could be seen clearly when a CCD camera was moved along the transverse axis of the display system. From our results, we expect that varifocal lenses such as EWOD and LC lenses can be applied to real-time volumetric 3D display systems.

  13. Comparative study on 3D-2D convertible integral imaging systems

    NASA Astrophysics Data System (ADS)

    Choi, Heejin; Kim, Joohwan; Kim, Yunhee; Lee, Byoungho

    2006-02-01

    In spite of significant improvements in the field of three-dimensional (3D) displays, the commercialization of a 3D-only display system has not been achieved yet. The mainstream of the display market is the high-performance two-dimensional (2D) flat panel display (FPD), and the beginning of high-definition (HD) broadcasting accelerates the opening of the golden age of HD FPDs. Therefore, a 3D display system needs to be able to display a 2D image with high quality. In this paper, two different 3D-2D convertible methods based on integral imaging are compared and categorized for their applications. One method uses a point light source array, a polymer-dispersed liquid crystal, and one display panel. The other system adopts two display panels and a lens array. The former system is suitable for mobile applications while the latter is for home applications such as monitors and TVs.

  14. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  15. Landmine detection by 3D GPR system

    NASA Astrophysics Data System (ADS)

    Sato, Motoyuki; Yokota, Yuya; Takahashi, Kazunori; Grasmueck, Mark

    2012-06-01

    In order to demonstrate the capability of ground penetrating radar (GPR) for the detection of small buried objects such as landmines and UXO, we conducted demonstration tests using the 3DGPR system, which is a GPR combined with a high-accuracy positioning system based on a commercial laser positioning system (iGPS). iGPS can provide absolute, better-than-centimetre-precision x, y, z coordinates to multiple mine sensors at the same time. The developed 3DGPR system is efficient and capable of high-resolution 3D shallow-subsurface scanning of larger areas (25 m² to thousands of square meters) with irregular topography. A field test using a 500 MHz GPR equipped with the 3DGPR system was conducted. PMN-2 and Type-72 mine models were buried at depths of 5-20 cm in sand. We demonstrated that the 3DGPR can visualize each of these buried landmines very clearly.

  16. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  17. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care. PMID:25620087

  18. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    Computer-generated holograms (CGHs) should be obtained with high accuracy and high speed for 3D holographic display, and most research focuses on the high speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look-up table method and the compressed look-up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where a huge amount of information is required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future. PMID:26831987
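
    A plain look-up-table Fresnel CGH, the baseline on which compressed variants such as AC-LUT build, can be sketched as follows. This is not the AC-LUT algorithm itself, and the wavelength, pixel pitch, and resolution are assumed values used only for illustration.

```python
import numpy as np

wavelength = 532e-9          # assumed laser wavelength (m)
pitch = 8e-6                 # assumed SLM pixel pitch (m)
H = W = 512                  # hologram resolution

def zone_pattern(z):
    """Fresnel zone pattern of an on-axis point source at distance z (one LUT entry)."""
    y = (np.arange(H) - H // 2) * pitch
    x = (np.arange(W) - W // 2) * pitch
    yy, xx = np.meshgrid(y, x, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    return np.exp(1j * np.pi * r2 / (wavelength * z))   # paraxial Fresnel phase

def cgh_from_points(points, depths):
    """Accumulate shifted LUT entries for each 3D object point.

    points : list of (row, col, depth_index, amplitude)
    depths : list of depth values; zone_pattern is precomputed once per depth
             and reused for every object point at that depth (the look-up table).
    """
    lut = [zone_pattern(z) for z in depths]
    holo = np.zeros((H, W), dtype=np.complex128)
    for row, col, zi, amp in points:
        holo += amp * np.roll(lut[zi], shift=(row - H // 2, col - W // 2), axis=(0, 1))
    return holo

holo = cgh_from_points([(200, 180, 0, 1.0), (320, 300, 1, 0.7)], depths=[0.2, 0.25])
phase_only = np.angle(holo)   # what would be sent to a phase-type SLM
```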

  19. System status display information

    NASA Technical Reports Server (NTRS)

    Summers, L. G.; Erickson, J. B.

    1984-01-01

    The System Status Display is an electronic display system which provides the flight crew with enhanced capabilities for monitoring and managing aircraft systems. Guidelines for the design of the electronic system displays were established. The technical approach involved the application of a system engineering approach to the design of candidate displays and the evaluation of alternative concepts by part-task simulation. The system engineering and selection of candidate displays are covered.

  20. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing a computer-generated hologram (CGH) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principles of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in double-circle and four-circle shapes to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform using conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference light from a laser source, the amplitude and phase information included in the CGH are reconstructed through the diffraction of the light modulated by the LCD.

  1. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets are able to show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player, satellite receiver, etc. The stereo pair is split into left and right images that are shown one after another. The viewer sees a different image for the left and right eyes using shutter glasses properly synchronized with the 3D TV. In addition, some devices that provide the TV with stereo content are able to display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, determine whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing demonstration of the stereo content. We propose a new, stable method for detection of 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and an OSD menu can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is to distinguish whether a color difference is due to OSD presence or due to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has simple geometrical features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD with different colors and transparency levels overlaid upon the video content. Detection quality exceeded 99% of true answers.
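
    One plausible way to separate overlay-induced differences from parallax-induced ones, sketched below under assumptions not stated in the abstract, is to keep only the pixels that no horizontal shift of the other view can explain, and then test that residual mask for straight, axis-aligned structure. The function names, disparity range, and threshold are illustrative.

```python
import numpy as np

def unmatched_residual(left, right, max_disp=32):
    """Per-pixel residual that stays high only where no horizontal shift
    of the right view can explain the left view (e.g. a one-eye OSD overlay).

    left, right : float grayscale images in [0, 1] of equal shape (H, W)
    """
    best = np.full(left.shape, np.inf)
    for d in range(-max_disp, max_disp + 1):
        shifted = np.roll(right, d, axis=1)          # candidate parallax shift (wraps at borders)
        best = np.minimum(best, np.abs(left - shifted))
    return best

def osd_mask(left, right, thresh=0.15):
    """Binary mask of pixels that genuine stereo parallax cannot account for."""
    return unmatched_residual(left, right) > thresh

# A subsequent step (not shown) would test the mask for the straight,
# axis-aligned edges typical of rectangular OSD menus, e.g. by checking
# for long runs of mask pixels along rows and columns.
left = np.random.rand(120, 160)
right = np.roll(left, 4, axis=1)
right[20:60, 30:100] = 1.0                           # simulated OSD in one view only
mask = osd_mask(left, right)
```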

  2. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition. PMID:26061553

  3. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition. PMID:26061553

  4. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  5. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  6. Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Park, Inkyu; Kim, Sung-Kyu

    2014-09-22

    We present methodologies for determining the optimum viewing distance (OVD) for a multi-view auto-stereoscopic 3D display system with a parallax barrier. The OVD can be efficiently determined as the viewing distance where statistical deviation of centers of quasi-linear distributions of illuminance at central viewing zones is minimized using local areas of a display panel. This method can offer reduced computation time because it does not use the entire area of the display panel during a simulation, but still secures considerable accuracy. The method is verified in experiments, showing its applicability for efficient optical characterization. PMID:25321731
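
    A minimal sketch of the selection criterion, under one plausible reading of the abstract: for each candidate viewing distance, take the centers of the simulated illuminance distributions of the central viewing zones and pick the distance where their spacing deviates least. The simulate_zones callable stands in for the optical simulation of the parallax-barrier panel and is purely hypothetical.

```python
import numpy as np

def center_of_distribution(positions, illuminance):
    """Illuminance-weighted center of one viewing zone's distribution."""
    return np.sum(positions * illuminance) / np.sum(illuminance)

def optimum_viewing_distance(candidates, simulate_zones):
    """Pick the distance whose viewing-zone centers are most uniformly spaced.

    candidates     : iterable of candidate viewing distances
    simulate_zones : callable (distance) -> list of (positions, illuminance)
                     pairs, one per central viewing zone, assumed to come from
                     an optical simulation of the parallax-barrier panel
    """
    best_d, best_dev = None, np.inf
    for d in candidates:
        zones = simulate_zones(d)
        centers = np.array([center_of_distribution(p, i) for p, i in zones])
        spacing = np.diff(np.sort(centers))
        dev = np.std(spacing)                  # deviation of zone-center spacing
        if dev < best_dev:
            best_d, best_dev = d, dev
    return best_d
```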

  7. Coarse integral holography approach for real 3D color video displays.

    PubMed

    Chen, J S; Smithwick, Q Y J; Chu, D P

    2016-03-21

    A colour holographic display is considered the ultimate apparatus to provide the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that then are used to reproduce the optical wavefront. The main challenge at present is for the existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product high-bandwidth spatial light modulator to display dynamic holograms with a large space-bandwidth-product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, and an adequate image size and viewing angle, as well as all relevant 3D cues. Our approach is scalable and the prototype can achieve even better performance with continuing advances in hardware components. PMID:27136858

  8. Tomographic system for 3D temperature reconstruction

    NASA Astrophysics Data System (ADS)

    Antos, Martin; Malina, Radomir

    2003-11-01

    A novel laboratory system for optical tomography is used to obtain the three-dimensional temperature field around a heated element. Mach-Zehnder holographic interferometers with diffusive illumination of the phase object provide the possibility of scanning multidirectional holographic interferograms over a range of viewing angles from 0° to 108°. These interferograms form the input data for computer tomography of the 3D distribution of the refractive-index variation, which characterizes the physical state of the studied medium. The configuration of the system allows automatic projection scanning of the studied phase object. The computer calculates the wavefront deformation for each projection, making use of different Fourier-transform and phase-sampling evaluation methods. The experimental set-up together with experimental results is presented.

  9. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and the observers can touch and interact with these floating images, for example so that kids can play with virtual clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest method in that it uses a single camera rather than a stereo camera, and the results of our viewer system.

  10. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For the past few years, a renaissance of 3-dimensional cinema has been observed. Even though stereoscopy has been quite popular over the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success, and the same happened again in the 50s and 80s of the 20th century. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that, after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "hype cycle". The everyday experience of technological evolution has conditioned consumers. The expectation that "any technical improvement will preserve all previous properties" cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: the presentation of an additional dimension causes concessions in relevant characteristics (i.e. resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (i.e. subjective discomfort, eye strain, spatial disorientation, feeling of nausea). It will be verified that the 3D apparatus (3D glasses or 3D display) is also the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  11. NGT-3D: a simple nematode cultivation system to study Caenorhabditis elegans biology in 3D

    PubMed Central

    Lee, Tong Young; Yoon, Kyoung-hye; Lee, Jin Il

    2016-01-01

    ABSTRACT The nematode Caenorhabditis elegans is one of the premier experimental model organisms today. In the laboratory, they display characteristic development, fertility, and behaviors in a two dimensional habitat. In nature, however, C. elegans is found in three dimensional environments such as rotting fruit. To investigate the biology of C. elegans in a 3D controlled environment we designed a nematode cultivation habitat which we term the nematode growth tube or NGT-3D. NGT-3D allows for the growth of both nematodes and the bacteria they consume. Worms show comparable rates of growth, reproduction and lifespan when bacterial colonies in the 3D matrix are abundant. However, when bacteria are sparse, growth and brood size fail to reach levels observed in standard 2D plates. Using NGT-3D we observe drastic deficits in fertility in a sensory mutant in 3D compared to 2D, and this defect was likely due to an inability to locate bacteria. Overall, NGT-3D will sharpen our understanding of nematode biology and allow scientists to investigate questions of nematode ecology and evolutionary fitness in the laboratory. PMID:26962047

  12. Stereoscopic contents authoring system for 3D DMB data service

    NASA Astrophysics Data System (ADS)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for the 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system which provides the convenient and straightforward contents creation and editing functionalities. For the creation of stereoscopic contents, we mainly focused on two methods: CG (Computer Graphics) based creation and real image based creation. In the CG based creation scenario where the generated CG data from the conventional MAYA or 3DS MAX tool is rendered to generate the stereoscopic images by applying the suitable disparity and camera parameters, we use X-file for the direct conversion to stereoscopic objects, so called 3D DMB objects. In the case of real image based creation, the chroma-key method is applied to real video sequences to acquire the alpha-mapped images which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes the timeline editor for both the stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented the content verification module to verify and modify the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic contents creation for mobile 3D data service especially targeted for T-DMB with the capabilities of CG and real image based contents creation, timeline editing and content verification.

  13. 3D detection of obstacle distribution in walking guide system for the blind

    NASA Astrophysics Data System (ADS)

    Yoon, Myoung-Jong; Yu, Kee-Ho

    2007-12-01

    In this paper, the concept of a walking guide system with a tactile display is introduced, and experiments on 3-D obstacle detection and tactile perception are carried out and analyzed. An algorithm for 3-D obstacle detection and a method for mapping the generated obstacle map onto the tactile display device of the walking guide system are proposed. An experiment on 3-D detection of obstacle positions using ultrasonic sensors is performed and evaluated. Some design guidelines for a tactile display device that can present the obstacle distribution are discussed.
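
    As a toy illustration of mapping an obstacle map onto a tactile display, the sketch below bins a row of ultrasonic range readings into a small pin grid, raising nearer pins higher; the grid size, range limit, and binning rule are assumptions, not the authors' design.

```python
import numpy as np

def obstacle_to_tactile(ranges_m, pins_rows=4, pins_cols=4,
                        max_range=3.0, sensor_cols=8):
    """Map a row of ultrasonic range readings onto a small tactile pin array.

    ranges_m : (sensor_cols,) distances to the nearest obstacle per bearing (m)
    Returns a (pins_rows, pins_cols) array of pin heights in [0, 1]; nearer
    obstacles raise pins higher, and lower grid rows represent nearer distance bands.
    """
    ranges = np.clip(np.asarray(ranges_m, dtype=float), 0.0, max_range)
    tactile = np.zeros((pins_rows, pins_cols))
    per_col = sensor_cols // pins_cols          # group sensors into pin columns
    for c in range(pins_cols):
        nearest = ranges[c * per_col:(c + 1) * per_col].min()
        # Quantize distance into a row band: row 0 = farthest, last row = nearest.
        band = int((1.0 - nearest / max_range) * (pins_rows - 1) + 0.5)
        tactile[band, c] = 1.0 - nearest / max_range
    return tactile

print(obstacle_to_tactile([2.8, 2.5, 1.0, 0.9, 0.4, 0.5, 2.9, 3.0]))
```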

  14. Characterizing the effects of droplines on target acquisition performance on a 3-D perspective display

    NASA Technical Reports Server (NTRS)

    Liao, Min-Ju; Johnson, Walter W.

    2004-01-01

    The present study investigated the effects of droplines on target acquisition performance on a 3-D perspective display in which participants were required to move a cursor into a target cube as quickly as possible. Participants' performance and coordination strategies were characterized using both Fitts' law and acquisition patterns of the 3 viewer-centered target display dimensions (azimuth, elevation, and range). Participants' movement trajectories were recorded and used to determine movement times for acquisitions of the entire target and of each of its display dimensions. The goodness of fit of the data to a modified Fitts function varied widely among participants, and the presence of droplines did not have observable impacts on the goodness of fit. However, droplines helped participants navigate via straighter paths and particularly benefited range dimension acquisition. A general preference for visually overlapping the target with the cursor prior to capturing the target was found. Potential applications of this research include the design of interactive 3-D perspective displays in which fast and accurate selection and manipulation of content residing at multiple ranges may be a challenge.
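
    For readers unfamiliar with the Fitts-law analysis used here, the sketch below fits the standard Shannon formulation MT = a + b * log2(D/W + 1) to movement-time data by least squares; the paper's own "modified Fitts function" is not specified in this abstract, so this is only the textbook form, with made-up example numbers.

```python
import numpy as np

def fit_fitts(distances, widths, movement_times):
    """Least-squares fit of MT = a + b * ID, with ID = log2(D/W + 1).

    distances, widths, movement_times : 1-D sequences of equal length
    Returns (a, b, r_squared).
    """
    D = np.asarray(distances, dtype=float)
    W = np.asarray(widths, dtype=float)
    MT = np.asarray(movement_times, dtype=float)
    ID = np.log2(D / W + 1.0)                         # Shannon index of difficulty
    A = np.column_stack([np.ones_like(ID), ID])
    (a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)   # intercept a, slope b
    pred = a + b * ID
    ss_res = np.sum((MT - pred) ** 2)
    ss_tot = np.sum((MT - MT.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Example with made-up data for three target conditions.
print(fit_fitts([0.2, 0.4, 0.8], [0.05, 0.05, 0.05], [0.65, 0.85, 1.10]))
```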

  15. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily motivated by the affective mechanism can enhance the steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, it takes only a few minutes for users to begin controlling the BCI system, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  16. Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang

    2014-08-01

    We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie equation of motion; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a 9-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionalities of an ELC lens.

  17. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  18. High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display.

    PubMed

    Chang, Yu-Cheng; Jen, Tai-Hsiang; Ting, Chih-Hung; Huang, Yi-Pai

    2014-02-10

    A 2D/3D switchable and rotatable autostereoscopic display using a high-resistance liquid-crystal (Hi-R LC) lens array is investigated in this paper. Using high-resistance layers in an LC cell, a gradient electric-field distribution can be formed, which can provide a better lens-like shape of the refractive-index distribution. The advantages of the Hi-R LC lens array are its 2D/3D switchability, rotatability (in the horizontal and vertical directions), low driving voltage (~2 volts) and fast response (~0.6 second). In addition, the Hi-R LC lens array requires only a very simple fabrication process. PMID:24663563

  19. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat, tabletop surface and enables multiple viewers to observe raised 3D images from any angle at 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents a particular ray that passes a corresponding point on a virtual object's surface and orients toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their perspectives because the images include binocular disparity. The entire principle is installed beneath the table, so the tabletop area remains clear. No ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  20. Sound localization with head movement: implications for 3-d audio displays

    PubMed Central

    McAnally, Ken I.; Martin, Russell L.

    2014-01-01

    Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-D audio displays: the utility of a 3-D audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy. PMID:25161605

  1. System status display evaluation

    NASA Technical Reports Server (NTRS)

    Summers, Leland G.

    1988-01-01

    The System Status Display is an electronic display system which provides the crew with an enhanced capability for monitoring and managing the aircraft systems. A flight simulation in a fixed base cockpit simulator was used to evaluate alternative design concepts for this display system. The alternative concepts included pictorial versus alphanumeric text formats, multifunction versus dedicated controls, and integration of the procedures with the system status information versus paper checklists. Twelve pilots manually flew approach patterns with the different concepts. System malfunctions occurred which required the pilots to respond to the alert by reconfiguring the system. The pictorial display, the multifunction control interfaces collocated with the system display, and the procedures integrated with the status information all had shorter event processing times and lower subjective workloads.

  2. Interactive photogrammetric system for mapping 3D objects

    NASA Astrophysics Data System (ADS)

    Knopp, Dave E.

    1990-08-01

    A new system, FOTO-G, has been developed for 3D photogrammetric applications. It is a production-oriented software system designed to work with highly unconventional photogrammetric image configurations which result when photographing 3D objects. A demonstration with imagery from an actual 3D-mapping project is reported.

  3. Color decomposition method for multiprimary display using 3D-LUT in linearized LAB space

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Woo; Kim, Yun-Tae; Cho, Yang-Ho; Park, Kee-Hyon; Choe, Wonhee; Ha, Yeong-Ho

    2005-01-01

    This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up-table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraints of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ and CIELAB. In this paper, linearized LAB space is used due to its linearity and additivity in color conversion. First, the proposed method constructs a 3-D LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For the image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, then the hue and chroma are computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point. Also, the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on the gamut boundary points and input point, the color signals for the input point are then obtained using the chroma ratio divided by the chroma of the gamut boundary point. In particular, for a change of hue, the neighboring boundary points are also employed. As a result, the proposed method guarantees color signal continuity and computational efficiency, and requires less memory.
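
    The chroma-ratio interpolation step can be illustrated with a small sketch, assuming the gray-axis and gamut-boundary control values have already been obtained from the 3D-LUT as described; the function name and the six-primary example values are hypothetical.

```python
import numpy as np

def decompose_by_chroma_ratio(lab_in, gray_controls, boundary_controls,
                              boundary_chroma):
    """Blend multi-primary control values along the chroma direction.

    lab_in            : (L, a, b) of the input color in linearized LAB
    gray_controls     : control values of the gray-axis point at the same lightness
    boundary_controls : control values of the gamut-boundary point at the same
                        lightness and hue (looked up from the 3D-LUT)
    boundary_chroma   : chroma of that gamut-boundary point
    """
    L, a, b = lab_in
    chroma = np.hypot(a, b)
    t = np.clip(chroma / boundary_chroma, 0.0, 1.0)   # chroma ratio
    gray_controls = np.asarray(gray_controls, dtype=float)
    boundary_controls = np.asarray(boundary_controls, dtype=float)
    return (1.0 - t) * gray_controls + t * boundary_controls

# Example with hypothetical six-primary control values.
print(decompose_by_chroma_ratio((60.0, 20.0, 10.0),
                                gray_controls=[0.5] * 6,
                                boundary_controls=[0.9, 0.7, 0.2, 0.1, 0.6, 0.3],
                                boundary_chroma=55.0))
```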

  4. Color decomposition method for multiprimary display using 3D-LUT in linearized LAB space

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Woo; Kim, Yun-Tae; Cho, Yang-Ho; Park, Kee-Hyon; Choe, Wonhee; Ha, Yeong-Ho

    2004-12-01

    This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up-table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraints of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ and CIELAB. In this paper, linearized LAB space is used due to its linearity and additivity in color conversion. First, the proposed method constructs a 3-D LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For the image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, then the hue and chroma are computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point. Also, the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on the gamut boundary points and input point, the color signals for the input point are then obtained using the chroma ratio divided by the chroma of the gamut boundary point. In particular, for a change of hue, the neighboring boundary points are also employed. As a result, the proposed method guarantees color signal continuity and computational efficiency, and requires less memory.

  5. 3-D Mesh Generation Nonlinear Systems

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  6. 3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

    NASA Astrophysics Data System (ADS)

    Choi, Sung-Ja; Lee, Gang-Soo

    A new management-system framework for cloud environments is needed as platforms converge in response to changes in computing environments. It is hard for ISVs and small businesses to adopt the management-system platforms offered by large vendors. This article suggests a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS as an extension of the CCMS[1].

  7. Artifact reduction in lenticular multiscopic 3D displays by means of anti-alias filtering

    NASA Astrophysics Data System (ADS)

    Konrad, Janusz; Agniel, Philippe

    2003-05-01

    This paper addresses the issue of artifact visibility in automultiscopic 3-D lenticular displays. A straightforward extension of the two-view lenticular autostereoscopic principle to M views results in an M-fold loss of horizontal resolution due to the subsampling needed to properly multiplex the views. In order to circumvent the imbalance between the horizontal and vertical resolution, a tilt can be applied to the lenticules to orient them at a small angle to the vertical direction, as is done in the SynthaGram display from Stereographics Corp. In either case, to avoid aliasing the subsampling should be preceded by suitable lowpass pre-filtering. Although for purely vertical lenticules a sufficiently narrowband lowpass horizontal filtering suffices, the situation is more complicated for diagonal lenticules; the subsampling of each view is no more orthogonal, and more complex sampling models need to be considered. Based on multidimensional sampling theory, we have studied multiview sampling models based on lattices. These models approximate pixel positions on a lenticular automultiscopic display and lead to optimal anti-alias filters. In this paper, we report results for a separable approximation to non-separable 2-D anti-alias filters based on the assumption that the lenticule slant is small. We have carried out experiments on a variety of images, and different filter bandwidths. We have observed that the theoretically-optimal bandwidth is too restrictive; aliasing artifacts disappear, but some image details are lost as well. Somewhat wider bandwidths result in images with almost no aliasing and largely preserved detail. For subjectively-optimized filters, the improvements, although localized, are clear and enhance the 3-D viewing experience.
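
    For the purely vertical-lenticule case mentioned above, a minimal sketch of anti-alias prefiltering followed by column-wise view multiplexing might look like the following; the windowed-sinc filter length and exact cutoff are illustrative choices, not the subjectively optimized filters reported in the paper.

```python
import numpy as np

def horizontal_lowpass(image, cutoff, taps=31):
    """Separable horizontal lowpass with a windowed-sinc FIR filter.

    image  : (H, W) grayscale view
    cutoff : normalized cutoff in cycles/pixel (0.5 = Nyquist)
    """
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(taps)
    h /= h.sum()
    return np.apply_along_axis(lambda row: np.convolve(row, h, mode="same"),
                               axis=1, arr=image)

def multiplex_views(views):
    """Interleave M pre-filtered views column-wise (vertical lenticules)."""
    M = len(views)
    H, W = views[0].shape
    # Pre-filter each view to roughly 1/M of the full horizontal bandwidth
    # before the M-fold horizontal subsampling implied by view multiplexing.
    filtered = [horizontal_lowpass(v, cutoff=0.5 / M) for v in views]
    out = np.zeros((H, W))
    for k, v in enumerate(filtered):
        out[:, k::M] = v[:, k::M]       # every M-th column comes from view k
    return out

views = [np.random.rand(120, 160) for _ in range(4)]
panel_image = multiplex_views(views)
```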

  8. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance

    NASA Astrophysics Data System (ADS)

    Qiu, Jimmy; Hope, Andrew J.; Cho, B. C. John; Sharpe, Michael B.; Dickie, Colleen I.; DaCosta, Ralph S.; Jaffray, David A.; Weersink, Robert A.

    2012-10-01

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ˜2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue
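
    A simplified version of the colorwash overlay step, assuming the dose map has already been registered to the endoscopic frame as described, could look like the following; the dose window, colormap, and opacity limit are arbitrary illustrative values.

```python
import numpy as np

def dose_colorwash(endo_rgb, dose_map, d_min=20.0, d_max=70.0, max_alpha=0.5):
    """Alpha-blend a dose colorwash onto an endoscopic video frame.

    endo_rgb : (H, W, 3) float image in [0, 1]
    dose_map : (H, W) dose in Gy, already registered to the endoscopic view
    Pixels below d_min stay fully transparent; opacity grows with dose up to max_alpha.
    """
    t = np.clip((dose_map - d_min) / (d_max - d_min), 0.0, 1.0)
    # Simple blue -> red ramp as the dose colormap.
    color = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)
    alpha = (max_alpha * t)[..., None]
    return (1.0 - alpha) * endo_rgb + alpha * color

frame = np.random.rand(240, 320, 3)
dose = np.random.uniform(0.0, 80.0, size=(240, 320))
overlaid = dose_colorwash(frame, dose)
```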

  9. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate against pose variation. PMID:24691101

  10. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
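
    For readers unfamiliar with the baseline, the sketch below shows one sweep of plain ART (a Kaczmarz-style update) on a toy system matrix; ART-rc additionally retraces each ray through the refractive-index mismatches, which is not modeled here. The matrix, relaxation factor, and iteration count are arbitrary illustrative choices.

```python
# Minimal sketch of a basic ART sweep; rays are rows of a precomputed
# system matrix A (straight raylines), x is the voxel vector, b the data.
import numpy as np

def art_sweep(A, b, x, relax=0.2):
    """Project x onto the hyperplane of each ray equation A_i . x = b_i."""
    for i in range(A.shape[0]):
        a_i = A[i]
        denom = a_i @ a_i
        if denom > 0:
            x = x + relax * (b[i] - a_i @ x) / denom * a_i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.random((60, 30))          # toy ray/voxel weights
    x_true = rng.random(30)           # toy dose distribution
    b = A @ x_true                    # simulated projection data
    x = np.zeros(30)
    for _ in range(200):
        x = art_sweep(A, b, x)
    print("residual:", float(np.linalg.norm(A @ x - b)))
```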

  11. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  12. Hybrid Reactor Simulation and 3-D Information Display of BWR Out-of-Phase Oscillation

    SciTech Connect

    Edwards, Robert; Huang, Zhengyu

    2001-06-17

    The real-time hybrid reactor simulation (HRS) capability of the Penn State TRIGA reactor has been expanded for boiling water reactor (BWR) out-of-phase behavior. During BWR out-of-phase oscillation, half of the core can significantly oscillate out of phase with the other half, while the average power reported by the neutronic instrumentation may show a much lower amplitude for the oscillations. A description of the new HRS is given; three computers are employed to handle all the computations required, including real-time data processing and graph generation. BWR out-of-phase oscillation was successfully simulated. By adjusting the reactivity feedback gains from the boiling channels to the TRIGA reactor and to the first harmonic mode power simulation, a limit cycle can be generated in both the reactor power and the simulated first harmonic power. A 3-D display of the spatial power distributions of the fundamental mode, first harmonic, and total powers over the reactor cross section is shown.

  13. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  14. Priority depth fusion for the 2D to 3D conversion system

    NASA Astrophysics Data System (ADS)

    Chang, Yu-Lin; Chen, Wei-Yin; Chang, Jing-Ying; Tsai, Yi-Min; Lee, Chia-Lin; Chen, Liang-Gee

    2008-02-01

    To provide 3D content for upcoming 3D display devices, a real-time automatic depth fusion 2D-to-3D conversion system is needed on the home multimedia platform. We propose a priority depth fusion algorithm within a 2D-to-3D conversion system that generates depth maps from most commercial video sequences. The results from different kinds of depth reconstruction methods are integrated into one depth map by the proposed priority depth fusion algorithm. Then the depth map and the original 2D image are converted to stereo images for display on 3D display devices. In this paper, a 2D-to-3D conversion algorithm set is combined with the proposed depth fusion algorithm to show the improved results. With more converted 3D content available, demand for 3D display devices will also increase; as the two technologies evolve together, the 3D-TV era will arrive sooner.
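
    The final conversion step, turning a 2D frame plus its fused depth map into a stereo pair, can be sketched as a simple depth-image-based rendering by horizontal pixel shifts. The disparity scale and the lack of hole filling below are simplifying assumptions; the paper's actual rendering stage is not specified here.

```python
# Minimal sketch, assuming nearer pixels (depth -> 1) get larger disparity.
import numpy as np

def depth_to_stereo(image, depth, max_disp=16):
    """image: HxWx3 floats, depth: HxW in [0, 1]. Returns (left, right)."""
    h, w = depth.shape
    disp = (depth * max_disp).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        xl = np.clip(cols + disp[y] // 2, 0, w - 1)
        xr = np.clip(cols - disp[y] // 2, 0, w - 1)
        left[y, xl] = image[y, cols]    # holes are left black in this sketch
        right[y, xr] = image[y, cols]
    return left, right

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)
    depth = np.tile(np.linspace(0.0, 1.0, 160), (120, 1))
    l, r = depth_to_stereo(img, depth)
    print(l.shape, r.shape)
```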

  15. Memory usage reduction and intensity modulation for 3D holographic display using non-uniformly sampled computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao; Liu, Juan; Jia, Jia; Li, Xin; Pan, Yijie; Han, Jian; Hu, Bin; Wang, Yongtian

    2013-12-01

    Real-time holographic display faces the heavy computational load of computer-generated holograms and the difficulty of precise intensity modulation of 3D images reconstructed by phase-only holograms. In this study, we demonstrate a method for reducing memory usage and modulating the intensity in 3D holographic display. The proposed method eliminates the redundant information of holograms by employing a non-uniform sampling technique. Combined with the novel look-up table method, a 70% reduction in storage can be reached. The gray-scale modulation of 3D images reconstructed by phase-only holograms can also be extended. We perform both numerical simulations and optical experiments to verify the practicability of this method, and the results match well with each other. It is believed that the proposed method can be used in 3D dynamic holographic display and in the design of diffractive phase elements.

  16. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  17. Vertically dispersive holographic screens and autostereoscopic displays in 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Magalhães, Lucas V. B.; Llovera, Juan J.; Li, Li M.

    2011-05-01

    In this work we describe a setup employed for the recording of vertical dispersive holographic screens that can be used for medical applications. We show how to obtain holographic screens with areas up to 1200 cm2, focal length of 25+/-2 cm and diffraction efficiency of 7.2%. We analyze the technique employed and the holographic screens obtained. Using this screen we describe a setup for the projection of Magnetic Resonance or Tomographic Images. We also describe and present the first results of an autostereoscopic system for 3D medical imaging.

  18. Arctic Research Mapping Application 3D Geobrowser: Accessing and Displaying Arctic Information From the Desktop to the Web

    NASA Astrophysics Data System (ADS)

    Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.

    2009-12-01

    ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database providing a very valuable visual tool for science management and logistical planning, ascertaining who is doing what type of research and where. Users can “fly to” study sites, view receding glaciers in 3D and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place names and year and display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) new build 900 providing an updated application from build 500. AGX applications provide users the ability to integrate their own spatial data on various data layers provided by ArcOnline (http://resources.esri.com/arcgisonlineservices). Users can add many types of data including OGC web services without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that support Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.

  19. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D display, the time difference between the left and right images makes a person perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images then no longer match, which makes people more prone to visual fatigue. This mismatch cannot be eliminated simply by relying on precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate the vertical parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereo visual comfort by comparing the comfort values of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
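
    The core geometric idea can be sketched in a few lines: with a field delay dt between the left and right images, an object moving vertically at v pixels per second on the screen acquires a spurious vertical parallax of roughly v * dt, which the compensation removes by shifting the delayed view back by that amount. The frame rate and velocity values below are illustrative; the paper's full model also maps real-world velocity to on-screen velocity.

```python
# Minimal sketch, assuming a 120 Hz field rate and on-screen velocity in px/s.
import numpy as np

def vertical_parallax_px(v_screen_px_per_s, field_rate_hz=120.0):
    dt = 1.0 / field_rate_hz            # time offset between L and R fields
    return v_screen_px_per_s * dt       # spurious vertical disparity in pixels

def compensate(delayed_view, v_screen_px_per_s, field_rate_hz=120.0):
    """Shift the delayed view vertically to cancel the motion-induced offset."""
    shift = int(round(vertical_parallax_px(v_screen_px_per_s, field_rate_hz)))
    return np.roll(delayed_view, -shift, axis=0)

if __name__ == "__main__":
    print(vertical_parallax_px(240.0))  # 240 px/s at 120 Hz -> 2 px of parallax
```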

  20. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
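
    The FCM relocation step mentioned above follows the standard fuzzy c-means update equations, sketched below; the ART-like vigilance and competitive stages of AFLC are not modeled. The fuzzifier m = 2 and the toy data are illustrative assumptions.

```python
# Minimal sketch of one fuzzy c-means update (memberships, then centroids).
import numpy as np

def fcm_update(X, centers, m=2.0):
    """X: NxD data, centers: CxD. Returns (memberships U, new centers)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)          # NxC membership matrix
    Um = U ** m
    centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted means
    return U, centers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(20):
        U, centers = fcm_update(X, centers)
    print(centers.round(2))
```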

  1. Colorful holographic display of 3D object based on scaled diffraction by using non-uniform fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Chang, Chenliang; Xia, Jun; Lei, Wei

    2015-03-01

    We proposed a new method to calculate the color computer generated hologram of three-dimensional object in holographic display. The three-dimensional object is composed of several tilted planes which are tilted from the hologram. The diffraction from each tilted plane to the hologram plane is calculated based on the coordinate rotation in Fourier spectrum domains. We used the nonuniform fast Fourier transformation (NUFFT) to calculate the nonuniform sampled Fourier spectrum on the tilted plane after coordinate rotation. By using the NUFFT, the diffraction calculation from tilted plane to the hologram plane with variable sampling rates can be achieved, which overcomes the sampling restriction of FFT in the conventional angular spectrum based method. The holograms of red, green and blue component of the polygon-based object are calculated separately by using our NUFFT based method. Then the color hologram is synthesized by placing the red, green and blue component hologram in sequence. The chromatic aberration caused by the wavelength difference can be solved effectively by restricting the sampling rate of the object in the calculation of each wavelength component. The computer simulation shows the feasibility of our method in calculating the color hologram of polygon-based object. The 3D object can be displayed in color with adjustable size and no chromatic aberration in holographic display system, which can be considered as an important application in the colorful holographic three-dimensional display.
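
    The chromatic-aberration argument rests on the standard FFT-based Fresnel scaling relation, in which the reconstruction pixel pitch is lambda * z / (N * hologram_pitch); choosing the object sampling interval per colour channel according to this relation keeps the reconstructed size identical for red, green, and blue. The sketch below only evaluates that relation; treating it as the exact mechanism used in the paper is an assumption, and the numeric values are illustrative.

```python
# Minimal sketch: per-wavelength object sampling pitch that keeps the
# FFT-reconstructed image size constant across colour channels (assumption:
# standard Fresnel-FFT scaling, reconstruction pitch = lambda*z/(N*holo_pitch)).
def object_pitch(wavelength, z, n_samples, hologram_pitch):
    return wavelength * z / (n_samples * hologram_pitch)

if __name__ == "__main__":
    for name, lam in (("red", 633e-9), ("green", 532e-9), ("blue", 473e-9)):
        p = object_pitch(lam, z=0.3, n_samples=1024, hologram_pitch=8e-6)
        print(f"{name}: {p * 1e6:.2f} um")
```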

  2. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    The 3D computer graphics (3D-CG) animation using a virtual actor's speaking is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  3. Drivers license display system

    NASA Astrophysics Data System (ADS)

    Prokoski, Francine J.

    1997-01-01

    Carjackings are only one of a growing class of law enforcement problems associated with increasingly violent crimes and accidents involving automobiles, weapons, drugs, and alcohol. Police traffic stops have become increasingly dangerous, with an officer having no information about a vehicle's potentially armed driver until approaching him. There are 15 million alcoholics in the US and 90 percent of them have drivers licenses. Many of them continue driving even after their licenses have been revoked or suspended. There are thousands of unlicensed truck drivers in the country, and also thousands who routinely exceed safe operating periods without rest, often using drugs in an attempt to stay alert. MIKOS has developed the Drivers License Display System to reduce these and other related risks. Although every state requires the continuous display of vehicle registration information on every vehicle using public roads, no state yet requires the display of driver license information. The technology exists to provide that feature as an add-on to current vehicles for nominal cost. An initial voluntary market is expected to include municipal, rental, and high-value vehicles, which are most likely to be misappropriated. It is anticipated that state regulations will eventually require such systems, beginning with commercial vehicles, and then extending to high-risk drivers and eventually all vehicles. The MIKOS system offers a dual-display approach which can be deployed now, and which will utilize all existing state licenses without requiring standardization.

  4. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with Photoscan©. PMID:26921815

  5. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    PubMed Central

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-01-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type of eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccades movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
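
    The first step, refining the tracked gaze point by picking the strongest edge inside the error circle, can be sketched as below. The Sobel-based edge strength and the fixed radius are illustrative assumptions standing in for the paper's gaze-estimation-error-derived region.

```python
# Minimal sketch: choose the maximum-edge-strength pixel within a circle
# centred on the tracked gaze position.
import numpy as np
from scipy.ndimage import sobel

def refine_gaze(gray, gaze_xy, radius_px):
    edge = np.hypot(sobel(gray, axis=1), sobel(gray, axis=0))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    y, x = np.unravel_index(np.argmax(np.where(inside, edge, -np.inf)), (h, w))
    return x, y

if __name__ == "__main__":
    img = np.zeros((100, 100))
    img[:, 60:] = 1.0                     # a vertical edge at column 60
    print(refine_gaze(img, gaze_xy=(55, 50), radius_px=10))
```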

  6. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: it draws the 3-D representation of a molecule using a stick, ball-and-stick, or space-filled model from Cartesian coordinates; draws different perspective views of the molecule; rotates the molecule about the X, Y, or Z axis or about some arbitrary line in space; zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.

  7. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
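
    The separation of semantic queries from bulk geometry processing can be sketched with plain Python stand-ins: attribute questions answered by a database-style filter, and geometry statistics accumulated in a map-reduce fashion. No particular NoSQL backend is assumed; the field names below are hypothetical.

```python
# Minimal sketch of the semantics-vs-geometry split (no real database used).
from functools import reduce

buildings = [
    {"id": 1, "name": "Hall",  "height": 52.0,  "n_triangles": 12000},
    {"id": 2, "name": "Tower", "height": 110.0, "n_triangles": 48000},
    {"id": 3, "name": "Shed",  "height": 6.5,   "n_triangles": 900},
]

# Semantic analysis: a simple query over attributes (buildings taller than 50 m).
tall = [b["name"] for b in buildings if b["height"] > 50.0]

# Geometry analysis: map each record to its triangle count, then reduce (sum),
# mimicking a Map-Reduce job over the heavier 3D geometry data.
total_triangles = reduce(lambda acc, n: acc + n,
                         map(lambda b: b["n_triangles"], buildings), 0)

print(tall, total_triangles)
```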

  8. 3D sensors and micro-fabricated detector systems

    NASA Astrophysics Data System (ADS)

    Da Vià, Cinzia

    2014-11-01

    Micro-systems based on the Micro Electro Mechanical Systems (MEMS) technology have been used in miniaturized low power and low mass smart structures in medicine, biology and space applications. Recently similar features found their way inside high energy physics with applications in vertex detectors for high-luminosity LHC Upgrades, with 3D sensors, 3D integration and efficient power management using silicon micro-channel cooling. This paper reports on the state of this development.

  9. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to reduce concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure). X-ray illumination

  10. Progress in off-plane computer-generated waveguide holography for near-to-eye 3D display

    NASA Astrophysics Data System (ADS)

    Jolly, Sundeep; Savidis, Nickolaos; Datta, Bianca; Bove, V. Michael; Smalley, Daniel

    2016-03-01

    Waveguide holography refers to the use of holographic techniques for the control of guided-wave light in integrated optical devices (e.g., off-plane grating couplers and in-plane distributed Bragg gratings for guided-wave optical filtering). Off-plane computer-generated waveguide holography (CGWH) has also been employed in the generation of simple field distributions for image display. We have previously depicted the design and fabrication of a binary-phase CGWH operating in the Raman-Nath regime for the purposes of near-to-eye 3-D display and as a precursor to a dynamic, transparent flat-panel guided-wave holographic video display. In this paper, we describe design algorithms and fabrication techniques for multilevel phase CGWHs for near-to-eye 3-D display.

  11. A Primitive-Based 3D Object Recognition System

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical, and relational knowledge in matching the 3D object models to the image data through pre-defined primitives. The primitives we have selected to begin with are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based systems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  12. Enhanced perception of terrain hazards in off-road path choice: stereoscopic 3D versus 2D displays

    NASA Astrophysics Data System (ADS)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Myles, Kimberly

    1997-06-01

    Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain. Some off-road driving accidents have been attributed to inadequate perception of terrain features due to using 2D displays (which do not provide binocular-parallax cues to depth perception). In this study, photographic images of terrain scenes were presented first in conventional 2D video, and then in stereoscopic 3D video. The percentages of possible correct answers for 2D and 3D were: 2D pretest = 52%, 3D pretest = 80%, 2D posttest = 48%, 3D posttest = 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operations, both on foot and in combat vehicles.

  13. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  14. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  15. Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays.

    PubMed

    Takaki, Yasuhiro; Urano, Yohei; Nishio, Hiroyuki

    2012-11-19

    The discontinuity of motion parallax offered by multi-view displays was assessed by subjective evaluation. A super multi-view head-up display, which provides dense viewing points and has short-, medium-, and long-distance display ranges, was used. The results showed that discontinuity perception depended on the ratio of the image shift between adjacent parallax images to the pixel pitch of the three-dimensional (3D) images and on the crosstalk between viewing points. When the ratio was less than 0.2 and the crosstalk was small, the discontinuity was not perceived. When the ratio was greater than 1 and the crosstalk was small, the discontinuity was perceived, and the resolution of the 3D images decreased by a factor of two. When the crosstalk was large, the discontinuity was not perceived even when the ratio was 1 or 2. However, the resolution decreased by a factor of two or more. PMID:23187574
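
    The reported rule of thumb can be written as a tiny helper that computes the shift-to-pitch ratio and classifies it against the 0.2 and 1.0 boundaries found in the study; the "borderline" label for intermediate ratios and the assumption of low crosstalk are simplifications.

```python
# Minimal sketch of the shift-to-pitch ratio check (low-crosstalk case assumed).
def parallax_smoothness(image_shift, pixel_pitch):
    """Return the ratio and a rough verdict based on the reported thresholds."""
    ratio = image_shift / pixel_pitch
    if ratio < 0.2:
        verdict = "smooth: discontinuity not perceived"
    elif ratio > 1.0:
        verdict = "discontinuity perceived"
    else:
        verdict = "borderline"
    return ratio, verdict

if __name__ == "__main__":
    print(parallax_smoothness(image_shift=0.05, pixel_pitch=0.30))
    print(parallax_smoothness(image_shift=0.45, pixel_pitch=0.30))
```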

  16. Seamless tiled display system

    NASA Technical Reports Server (NTRS)

    Dubin, Matthew B. (Inventor); Larson, Brent D. (Inventor); Kolosowsky, Aleksandra (Inventor)

    2006-01-01

    A modular and scalable seamless tiled display apparatus includes multiple display devices, a screen, and multiple lens assemblies. Each display device is subdivided into multiple sections, and each section is configured to display a sectional image. One of the lens assemblies is optically coupled to each of the sections of each of the display devices to project the sectional image displayed on that section onto the screen. The multiple lens assemblies are configured to merge the projected sectional images to form a single tiled image. The projected sectional images may be merged on the screen by magnifying and shifting the images in an appropriate manner. The magnification and shifting of these images eliminates any visual effect on the tiled display that may result from dead-band regions defined between each pair of adjacent sections on each display device, and due to gaps between multiple display devices.

  17. Full-color interactive holographic projection system for large 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Leister, Norbert; Schwerdtner, Armin; Fütterer, Gerald; Buschbeck, Steffen; Olaya, Jean-Christophe; Flon, Stanislas

    2008-02-01

    Dependence on sub-micron pixel pitch and super-computing have prohibited practical solutions for large size holographic displays until recently. SeeReal Technologies has developed a new approach to holographic displays significantly reducing these requirements. This concept is applicable to large "direct view" holographic displays as well as to projection designs. Principles, advantages and selected solutions for holographic projection systems will be explained. Based on results from practical prototypes, advantageous new features, as large size full-color real-time holographic 3D scenes generated at high frame rates on micro displays with state of the art resolution will be presented.

  18. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  19. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame size at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  20. A series of new lanthanide fumarates displaying three types of 3-D frameworks.

    PubMed

    Tan, Xiao-Feng; Zhou, Jian; Fu, Lianshe; Xiao, Hong-Ping; Zou, Hua-Hong; Tang, Qiuling

    2016-03-28

    A series of lanthanide fumarates [Sm2(fum)3(H2fum)(H2O)2] (1, H2fum = fumaric acid), [Ln2(fum)3-(H2O)4]·3H2O {Ln = Tb (2a), Dy (2b)} and [Ln2(fum)3(H2O)4] {Ln = Y (3a), Ho (3b), Er (3c), Tm (3d)} were prepared by the hydrothermal method and their structures were classified into three types. The 3-D framework of compound 1 contains a 1-D infinite [Sm-O-Sm]n chain built up from the connection of SmO8(H2O) polyhedra sharing edges via three -COO group bridges of fumarate ligands, which is further constructed into a 3-D network structure with three kinds of fumarate ligands. Compounds 2a-b are isostructural and consist of a 3-D porous framework with 0-D cavities for the accommodation of chair-like hexameric (H2O)6 clusters. Compounds 3a-d are isostructural and have a 3-D network structure remarkably different from those of 1 and 2a-b, due to the different coordination numbers for the Ln(3+) ions and distinct fumarate ligand bridging patterns. A systematic investigation of seven lanthanide fumarates and five reported compounds revealed that the well-known lanthanide contraction has a significant influence on the formation of lanthanide fumarates. The magnetic properties of compounds 1, 2b and 3b-3d were also investigated. PMID:26894939

  1. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  2. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  3. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    Construction of 3D geological visualization systems has attracted much attention in the GIS, computer modeling, simulation, and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but can also help raise the level of professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the test results have shown that the methods described in this paper have practical application value.

  4. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken for quantitatively evaluating visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of the previous research has analyzed each modality separately to measure user eye fatigue. This cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of EEG signals, eye BR, FT and SE; third, for combining the values of the four modalities we obtain the optimal weights of the EEG signals BR, FT and SE using a fuzzy system based on quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than other conventional multimodal measurements. Moreover, the credibility of the variations of the eye fatigue using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382
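
    The final fusion step described above amounts to a weighted sum of the four modality measurements, with weights derived from per-modality quality scores. In the sketch below a simple quality-proportional weighting stands in for the paper's fuzzy inference; the input values are illustrative.

```python
# Minimal sketch: quality-weighted fusion of four eye-fatigue measurements
# (EEG, blink rate, facial temperature, subjective score). The proportional
# weighting is a stand-in for the fuzzy-derived weights.
import numpy as np

def fuse_eye_fatigue(deltas, quality):
    """deltas, quality: length-4 sequences ordered as (EEG, BR, FT, SE)."""
    deltas = np.asarray(deltas, dtype=float)
    quality = np.asarray(quality, dtype=float)
    weights = quality / quality.sum()
    return float(weights @ deltas)

if __name__ == "__main__":
    print(fuse_eye_fatigue(deltas=[0.6, 0.4, 0.2, 0.8],
                           quality=[0.9, 0.5, 0.7, 1.0]))
```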

  5. 3D ultrasound image guidance system used in RF uterine adenoma and uterine bleeding ablation system

    NASA Astrophysics Data System (ADS)

    Ding, Mingyue; Luo, Xiaoan; Cai, Chao; Zhou, Chengping; Fenster, Aaron

    2006-03-01

    Uterine adenoma and uterine bleeding are the two most prevalent diseases in Chinese women. Many women lose their fertility from these diseases. Currently, a minimally invasive ablation system using an RF button electrode is being used in Chinese hospitals to destroy tumor cells or stop bleeding. In this paper, we report on a 3D US guidance system developed to avoid accidents or death of the patient by inaccurate localization of the tumor position during treatment. A 3D US imaging system using a rotational scanning approach of an abdominal probe was built. In order to reduce the distortion produced when the rotational axis is not collinear with the central beam of the probe, a new 3D reconstruction algorithm is used. Then, a fast 3D needle segmentation algorithm is used to find the electrode. Finally, the tip of electrode is determined along the segmented 3D needle and the whole electrode is displayed. Experiments with a water phantom demonstrated the feasibility of our approach.

  6. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly due to the fact that the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity of the left and the right view on a flat screen instead of having a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  7. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  8. 3D multi-spectrum sensor system with face recognition.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  9. Cytoplasmic bacteriophage display system

    DOEpatents

    Studier, F.W.; Rosenberg, A.H.

    1998-06-16

    Disclosed are display vectors comprising DNA encoding a portion of a structural protein from a cytoplasmic bacteriophage, joined covalently to a protein or peptide of interest. Exemplified are display vectors wherein the structural protein is the T7 bacteriophage capsid protein. More specifically, in the exemplified display vectors the C-terminal amino acid residue of the portion of the capsid protein is joined to the N-terminal residue of the protein or peptide of interest. The portion of the T7 capsid protein exemplified comprises an N-terminal portion corresponding to form 10B of the T7 capsid protein. The display vectors are useful for high copy number display or lower copy number display (with larger fusion). Compositions of the type described herein are useful in connection with methods for producing a virus displaying a protein or peptide of interest. 1 fig.

  10. Cytoplasmic bacteriophage display system

    DOEpatents

    Studier, F. William; Rosenberg, Alan H.

    1998-06-16

    Disclosed are display vectors comprising DNA encoding a portion of a structural protein from a cytoplasmic bacteriophage, joined covalently to a protein or peptide of interest. Exemplified are display vectors wherein the structural protein is the T7 bacteriophage capsid protein. More specifically, in the exemplified display vectors the C-terminal amino acid residue of the portion of the capsid protein is joined to the N-terminal residue of the protein or peptide of interest. The portion of the T7 capsid protein exemplified comprises an N-terminal portion corresponding to form 10B of the T7 capsid protein. The display vectors are useful for high copy number display or lower copy number display (with larger fusion). Compositions of the type described herein are useful in connection with methods for producing a virus displaying a protein or peptide of interest.

  11. 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation.

    PubMed

    Yeom, Han-Ju; Kim, Hee-Jae; Kim, Seong-Bok; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Park, Jae-Hyeung

    2015-12-14

    We propose a bar-type three-dimensional holographic head mounted display using two holographic optical elements. Conventional stereoscopic head mounted displays may suffer from eye fatigue because the images presented to each eye are two-dimensional ones, which causes mismatch between the accommodation and vergence responses of the eye. The proposed holographic head mounted display delivers three-dimensional holographic images to each eye, removing the eye fatigue problem. In this paper, we discuss the configuration of the bar-type waveguide head mounted displays and analyze the aberration caused by the non-symmetric diffraction angle of the holographic optical elements which are used as input and output couplers. Pre-distortion of the hologram is also proposed in the paper to compensate the aberration. The experimental results show that proposed head mounted display can present three-dimensional see-through holographic images to each eye with correct focus cues. PMID:26698993

  12. The Maintenance Of 3-D Scene Databases Using The Analytical Imagery Matching System (Aims)

    NASA Astrophysics Data System (ADS)

    Hovey, Stanford T.

    1987-06-01

    The increased demand for multi-resolution displays of simulated scene data for aircraft training or mission planning has led to a need for digital databases of 3-dimensional topography and geographically positioned objects. This data needs to be at varying resolutions or levels of detail as well as be positionally accurate to satisfy close-up and long distance scene views. The generation and maintenance processes for this type of digital database requires that relative and absolute spatial positions of geographic and cultural features be carefully controlled in order for the scenes to be representative and useful for simulation applications. Autometric, Incorporated has designed a modular Analytical Image Matching System (AIMS) which allows digital 3-D terrain feature data to be derived from cartographic and imagery sources by a combination of automatic and man-machine techniques. This system provides a means for superimposing the scenes of feature information in 3-D over imagery for updating. It also allows for real-time operator interaction between a monoscopic digital imagery display, a digital map display, a stereoscopic digital imagery display and automatically detected feature changes for transferring 3-D data from one coordinate system's frame of reference to another for updating the scene simulation database. It is an advanced, state-of-the-art means for implementing a modular, 3-D scene database maintenance capability, where original digital or converted-to-digital analog source imagery is used as a basic input to perform accurate updating.

  13. Holographic display of real existing objects from their 3D Fourier spectrum

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Sando, Yusuke

    2005-02-01

    A method for synthesizing computer-generated holograms of real existing objects is described. A series of projection images are recorded both vertically and horizontally with an incoherent light source and a color CCD camera. According to the principle of computed tomography (CT), the 3-D Fourier spectrum is calculated from several projection images of the objects, and the Fresnel computer-generated hologram (CGH) is synthesized using a part of the 3-D Fourier spectrum. This method has the following advantages. First, blur-free reconstructed images in any direction are obtained owing to the two-dimensional scanning used in recording. Second, since simple projection images of the objects, rather than interference fringes, are recorded, a coherent light source is not necessary for recording. The use of a color CCD in recording enables us to record and reconstruct colorful objects. Finally, we demonstrate color reconstruction of objects both numerically and optically.
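
    As a rough illustration of the final synthesis step, the sketch below (Python/NumPy, not code from the paper) propagates a complex object field by a single-FFT Fresnel transform and interferes it with a plane reference wave to form a Fresnel CGH pattern; the field, wavelength, pixel pitch and distance are placeholder values.

    ```python
    import numpy as np

    def fresnel_cgh(u0, wavelength, pitch, z):
        """Single-FFT Fresnel propagation of a complex object field u0 (N x N,
        sample spacing `pitch`) over distance z, followed by interference with an
        on-axis plane reference wave to obtain a Fresnel CGH intensity pattern.
        All parameters are illustrative placeholders, not values from the paper."""
        N = u0.shape[0]
        k = 2.0 * np.pi / wavelength
        x = (np.arange(N) - N / 2) * pitch                 # object-plane coordinates
        X, Y = np.meshgrid(x, x)
        pitch_out = wavelength * z / (N * pitch)           # hologram-plane sampling
        xo = (np.arange(N) - N / 2) * pitch_out
        Xo, Yo = np.meshgrid(xo, xo)
        # single-FFT Fresnel transform: chirp, FFT, chirp
        inner = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(
            u0 * np.exp(1j * k * (X**2 + Y**2) / (2 * z)))))
        field = (np.exp(1j * k * z) / (1j * wavelength * z)
                 * np.exp(1j * k * (Xo**2 + Yo**2) / (2 * z)) * inner)
        reference = 1.0                                     # unit plane reference wave
        return np.abs(field + reference)**2

    # toy usage: a single off-axis point as the "object"
    u0 = np.zeros((512, 512), dtype=complex)
    u0[300, 220] = 1.0
    hologram = fresnel_cgh(u0, wavelength=633e-9, pitch=8e-6, z=0.2)
    ```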

  14. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.

  15. An Optically Controlled 3D Cell Culturing System

    PubMed Central

    Ishii, Kelly S.; Hu, Wenqi; Namekar, Swapnil A.; Ohta, Aaron T.

    2012-01-01

    A novel 3D cell culture system was developed and tested. The cell culture device consists of a microfluidic chamber on an optically absorbing substrate. Cells are suspended in a thermoresponsive hydrogel solution, and optical patterns are utilized to heat the solution, producing localized hydrogel formation around cells of interest. The hydrogel traps only the desired cells in place while also serving as a biocompatible scaffold for supporting the cultivation of cells in 3D. This is demonstrated with the trapping of MDCK II and HeLa cells. The light intensity from the optically induced hydrogel formation does not significantly affect cell viability. PMID:22701475

  16. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  17. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  18. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701
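
    To make the reported detection figures concrete, the following minimal sketch computes recall and precision from detection counts; the counts themselves are hypothetical, chosen only to reproduce values close to 0.97 and 0.89.

    ```python
    # Recall and precision as reported for leaf detection; the counts are hypothetical.
    def detection_metrics(true_positives: int, false_positives: int, false_negatives: int):
        recall = true_positives / (true_positives + false_negatives)
        precision = true_positives / (true_positives + false_positives)
        return recall, precision

    # e.g. 97 correctly detected leaves, 12 spurious detections, 3 missed leaves
    print(detection_metrics(97, 12, 3))   # -> (0.97, ~0.89)
    ```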

  19. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  20. Effective declutter of complex flight displays using stereoptic 3-D cueing

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Nold, Dean E.

    1994-01-01

    The application of stereo technology to new, integrated pictorial display formats has been effective in situational awareness enhancements, and stereo has been postulated to be effective for the declutter of complex informational displays. This paper reports a full-factorial workstation experiment performed to verify the potential benefits of stereo cueing for the declutter function in a simulated tracking task. The experimental symbology was designed to be similar to that of a conventional flight director, although the format was an intentionally confused presentation that resulted in a very cluttered dynamic display. The subject's task was to use a hand controller to keep a tracking symbol, an 'X', on top of a target symbol, another 'X', which was being randomly driven. In the basic tracking task, both the target symbol and the tracking symbol were presented as red X's. The presence of color coding was used to provide some declutter, thus making the task more reasonable to perform. For this condition, the target symbol was coded red, and the tracking symbol was coded blue. Noise conditions, or additional clutter, were provided by the inclusion of randomly moving, differently colored X symbols. Stereo depth, which was hypothesized to declutter the display, was utilized by placing any noise in a plane in front of the display monitor, the tracking symbol at screen depth, and the target symbol behind the screen. The results from analyzing the performances of eight subjects revealed that the stereo presentation effectively offsets the cluttering effects of both the noise and the absence of color coding. The potential of stereo cueing to declutter complex informational displays has therefore been verified; this ability to declutter is an additional benefit of applying stereoptic cueing to pictorial flight displays.

  1. HDTV single camera 3D system and its application in microsurgery

    NASA Astrophysics Data System (ADS)

    Mochizuki, Ryo; Kobayashi, Shigeaki

    1994-04-01

    A 3D high-definition television (HDTV) system suitable for attachment to a stereoscopic operating microscope, allowing 3D medical documentation using a single HDTV camera and monitor, is described. The system provides 3D HDTV microneurosurgical recorded images suitable for viewing on a screen or monitor, or for printing. Visual documentation using a television and video system is very important in modern medical practice, especially for the education of medical students, the training of residents, and the display of records in medical conferences. For the documentation of microsurgery and endoscopic surgery, the video system is essential. The printed images taken from the recording by the HDTV system of the illustrative case clearly demonstrate the high quality and definition achieved, which are comparable to that of 35 mm movie film. As the system only requires a single camera and recorder, the cost performance and size make it very suitable for microsurgical and endoscopic documentation.

  2. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.

  3. Precise Animated 3-D Displays Of The Heart Constructed From X-Ray Scatter Fields

    NASA Astrophysics Data System (ADS)

    McInerney, J. J.; Herr, M. D.; Copenhaver, G. L.

    1986-01-01

    A technique, based upon the interrogation of x-ray scatter, has been used to construct precise animated displays of the three-dimensional surface of the heart throughout the cardiac cycle. With the selection of motion amplification, viewing orientation, beat rate, and repetitive playbacks of isolated segments of the cardiac cycle, these displays are used to directly visualize epicardial surface velocity and displacement patterns, to construct regional maps of old or new myocardial infarction, and to visualize diastolic stiffening of the ventricle associated with acute ischemia. The procedure is non-invasive. Cut-downs or injections are not required.

  4. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the teat 3D positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  5. 3D Geological Model for "LUSI" - a Deep Geothermal System

    NASA Astrophysics Data System (ADS)

    Sohrabi, Reza; Jansen, Gunnar; Mazzini, Adriano; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Geothermal applications require the correct simulation of flow and heat transport processes in porous media, and many of these media, like deep volcanic hydrothermal systems, host a certain degree of fracturing. This work aims to understand the heat and fluid transport within a new-born sedimentary-hosted geothermal system, termed Lusi, that began erupting in 2006 in East Java, Indonesia. Our goal is to develop conceptual and numerical models capable of simulating multiphase flow within large-scale fractured reservoirs such as the Lusi region, with fractures of arbitrary size, orientation and shape. Additionally, these models can also address a number of other applications, including Enhanced Geothermal Systems (EGS), CO2 sequestration (Carbon Capture and Storage, CCS), and nuclear waste isolation. Fractured systems are ubiquitous, with a wide range of lengths and scales, making it difficult to develop a general model that can easily handle this complexity. We are developing a flexible continuum approach with an efficient, accurate numerical simulator based on an appropriate 3D geological model representing the structure of the deep geothermal reservoir. Using previous studies, borehole information and seismic data obtained in the framework of the Lusi Lab project (ERC grant n°308126), we present here the first 3D geological model of Lusi. This model is calculated using implicit 3D potential fields or multi-potential fields, depending on the geological context and complexity. This method is based on a geological pile containing the geological history of the area and the relationships between geological bodies, allowing automatic computation of intersections and volume reconstruction. Based on the 3D geological model, we developed a new mesh algorithm to create hexahedral octree meshes that transfer the structural geological information to 3D numerical simulations used to quantify Thermal-Hydraulic-Mechanical-Chemical (THMC) physical processes.

  6. Advanced system for 3D dental anatomy reconstruction and 3D tooth movement simulation during orthodontic treatment

    NASA Astrophysics Data System (ADS)

    Monserrat, Carlos; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Albalat, Salvador E.

    1997-05-01

    This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontics planning system (MAGALLANES). We develop an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatments. Two original techniques are presented: a direct one, in which data are acquired directly from the patient's mouth by means of low-cost 3D digitizers, and a mixed one, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms enable quantification of the effect of the orthodontic appliance on tooth movement. The developed techniques have been integrated in a system named MAGALLANES. This original system presents several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.

  7. Structural description and combined 3D display for superior analysis of cerebral vascularity from MRA

    NASA Astrophysics Data System (ADS)

    Szekely, Gabor; Koller, Thomas; Kikinis, Ron; Gerig, Guido

    1994-09-01

    Medical image analysis has to support the clinician's ability to identify, manipulate and quantify anatomical structures. On scalar 2D image data, a human observer is often superior to computer-assisted analysis, but the interpretation of vector-valued data or data combined from different modalities, especially in 3D, can benefit from computer assistance. The problem of how to convey the complex information to the clinician is often tackled by providing colored multimodality renderings. We propose to go a step further by supplying suitable modelling of anatomical and functional structures, encoding important shape features and physical properties. The multiple attributes regarding geometry, topology and function are carried by the symbolic description and can be interactively queried and edited. Integrated 3D rendering of object surfaces and the symbolic representation acts as a visual interface to allow interactive communication between the observer and the complex data, providing new possibilities for quantification and therapy planning. The discussion is guided by the prototypical example of investigating the cerebral vasculature in MRA volume data. Geometric, topological and flow-related information can be assessed by interactive analysis on a computer workstation, providing otherwise hidden qualitative and quantitative information. Several case studies demonstrate the potential usage for structure identification, definition of landmarks, assessment of topology for catheterization, and local simulation of blood flow.

  8. Description of a 3D display with motion parallax and direct interaction

    NASA Astrophysics Data System (ADS)

    Tu, J.; Flynn, M. F.

    2014-03-01

    We present a description of a time sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time.

  9. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle and a model of the measurement system are presented. Errors caused by the nonlinearity of the CCD cameras and by installation can be eliminated by using a global calibration method for the CCD cameras, which is based on a nonlinear coordinate mapping function and an optimization method. A local foot coordinate system is defined with the Pternion and the Acropodion extracted from the boundaries of the foot projections. The characteristic points can thus be located and the foot parameters extracted automatically using the local foot coordinate system and the related sections. Foot measurements for about 200 participants were conducted and the measurement results for male and female participants are presented. 3-D foot coordinate and parameter measurement makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.
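
    A plausible way to picture the local foot coordinate system is to place its origin at the Pternion and point its x-axis toward the Acropodion, as in the sketch below (Python/NumPy); this construction and the landmark coordinates are illustrative assumptions, not the paper's exact definition.

    ```python
    import numpy as np

    def foot_frame(pternion, acropodion, up=(0.0, 0.0, 1.0)):
        """Local foot frame: origin at the Pternion, x-axis along the
        Pternion->Acropodion direction, z-axis near-vertical, y completing a
        right-handed frame. Illustrative construction only."""
        origin = np.asarray(pternion, dtype=float)
        x_axis = np.asarray(acropodion, dtype=float) - origin
        x_axis /= np.linalg.norm(x_axis)
        z_axis = np.asarray(up, dtype=float)
        z_axis -= np.dot(z_axis, x_axis) * x_axis          # make z orthogonal to x
        z_axis /= np.linalg.norm(z_axis)
        y_axis = np.cross(z_axis, x_axis)
        R = np.vstack([x_axis, y_axis, z_axis])            # rows = local axes
        return origin, R

    def to_foot_coords(points, origin, R):
        """Express measured 3-D points in the local foot coordinate system."""
        return (np.asarray(points, dtype=float) - origin) @ R.T

    # hypothetical landmark positions in mm
    origin, R = foot_frame(pternion=[10.0, 5.0, 0.0], acropodion=[260.0, 20.0, 0.0])
    local = to_foot_coords([[120.0, 40.0, 30.0]], origin, R)
    ```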

  10. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create high-quality vector PostScript output that can produce true vector graphics from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting and animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, even dynamically as a function of object properties, with instant visual feedback to the user.
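
    The automatic 2D views can be understood as simple coordinate reductions of 3D data; the hedged sketch below shows X/Y and R/Z projections of detector points, leaving out all of IGUANA's OpenInventor machinery.

    ```python
    import numpy as np

    def project_views(points_xyz):
        """Reduce 3-D detector points to two standard 2-D views:
        X/Y (transverse plane) and R/Z (longitudinal plane, columns (z, R)
        with R = sqrt(x^2 + y^2))."""
        p = np.asarray(points_xyz, dtype=float)
        xy_view = p[:, :2]
        rz_view = np.column_stack([p[:, 2], np.hypot(p[:, 0], p[:, 1])])
        return xy_view, rz_view

    # toy usage with two hit positions
    hits = [[1.0, 2.0, -3.5], [0.5, -1.2, 4.0]]
    xy, rz = project_views(hits)
    ```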

  11. 3D gel printing for soft-matter systems innovation

    NASA Astrophysics Data System (ADS)

    Furukawa, Hidemitsu; Kawakami, Masaru; Gong, Jin; Makino, Masato; Kabir, M. Hasnat; Saito, Azusa

    2015-04-01

    In the past decade, several high-strength gels have been developed, especially in Japan. These gels are expected to be used as a new kind of engineering material in industrial and medical fields, for example as substitutes for the polyester fibers used in artificial blood vessels. We consider that if various gel materials, including such high-strength gels, become 3D-printable, many new soft and wet systems will be developed, since gels of the most intricate shapes can be printed despite the softness and brittleness of gels. Recently we have tried to develop an optical 3D gel printer to realize the free-form fabrication of gel materials. We named this apparatus the Easy Realizer of Soft and Wet Industrial Materials (SWIM-ER). The SWIM-ER will be applied to print bespoke artificial organs, including artificial blood vessels, which could be used both for surgical training and for actual surgery. The SWIM-ER can print one of the world's strongest gels, called Double-Network (DN) gels, by using UV irradiation through an optical fiber. We are also developing another type of 3D gel printer for foods, named E-Chef. We believe these new 3D gel printers will broaden the applications of soft-matter gels.

  12. Parameters of the human 3D gaze while observing portable autostereoscopic display: a model and measurement results

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hanhela, Marianne; Gotchev, Atanas; Utirainen, Timo; Jumisko-Pyykkö, Satu; Hannuksela, Miska

    2012-02-01

    We present an approach to measure and model the parameters of the human point-of-gaze (PoG) in 3D space. Our model considers the following three parameters: the position of the gaze in 3D space, the volume encompassed by the gaze, and the time for the gaze to arrive on the desired target. Extracting the 3D gaze position from binocular gaze data is hindered by three problems. The first problem is the lack of convergence - due to micro-saccadic movements the optical lines of both eyes rarely intersect at a point in space. The second problem is resolution - the combination of short observation distance and limited comfort disparity zone typical for a mobile 3D display does not allow the depth of the gaze position to be reliably extracted. The third problem is measurement noise - due to the limited display size, the noise range is close to the range of properly measured data. We have developed a methodology which allows us to suppress most of the measurement noise. This allows us to estimate the typical time which is needed for the point-of-gaze to travel in the x, y or z direction. We identify three temporal properties of the binocular PoG. The first is the reaction time, which is the minimum time in which vision reacts to a stimulus position change, measured as the time between the event and the time the PoG leaves the proximity of the old stimulus position. The second is the travel time of the PoG between the old and new stimulus positions. The third is the time-to-arrive, which combines the reaction time, the travel time, and the time required for the PoG to settle in the new position. We present the method for filtering the PoG outliers, for deriving the PoG center from binocular eye-tracking data, and for calculating the gaze volume as a function of the distance between the PoG and the observer. As an outcome of our experiments we present binocular heat maps aggregated over all observers who participated in a viewing test. We also show the mean values for all temporal parameters.
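
    Because the two optical lines rarely intersect, one common surrogate for the 3D PoG is the midpoint of the shortest segment between the two gaze rays; the sketch below implements that idea. It is a standard construction rather than necessarily the authors' exact derivation, and the eye positions and directions are made-up numbers.

    ```python
    import numpy as np

    def binocular_pog(p_left, d_left, p_right, d_right):
        """Midpoint of the shortest segment between two gaze rays, each given by
        an eye position p and a gaze direction d (assumed not parallel). A common
        surrogate for the 3-D point of gaze when the optical lines do not meet."""
        p1, d1 = np.asarray(p_left, float), np.asarray(d_left, float)
        p2, d2 = np.asarray(p_right, float), np.asarray(d_right, float)
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        # solve for ray parameters t1, t2 minimising |(p1 + t1 d1) - (p2 + t2 d2)|
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        w = p1 - p2
        denom = a * c - b * b
        t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
        t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
        closest1 = p1 + t1 * d1
        closest2 = p2 + t2 * d2
        return (closest1 + closest2) / 2.0, np.linalg.norm(closest1 - closest2)

    # hypothetical eye positions (mm) and gaze directions
    pog, gap = binocular_pog([-32, 0, 0], [0.09, 0.0, 1.0], [32, 0, 0], [-0.10, 0.01, 1.0])
    ```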

  13. 3D printed nervous system on a chip.

    PubMed

    Johnson, Blake N; Lancaster, Karen Z; Hogue, Ian B; Meng, Fanben; Kong, Yong Lin; Enquist, Lynn W; McAlpine, Michael C

    2016-04-21

    Bioinspired organ-level in vitro platforms are emerging as effective technologies for fundamental research, drug discovery, and personalized healthcare. In particular, models for nervous system research are especially important, due to the complexity of neurological phenomena and challenges associated with developing targeted treatment of neurological disorders. Here we introduce an additive manufacturing-based approach in the form of a bioinspired, customizable 3D printed nervous system on a chip (3DNSC) for the study of viral infection in the nervous system. Micro-extrusion 3D printing strategies enabled the assembly of biomimetic scaffold components (microchannels and compartmented chambers) for the alignment of axonal networks and spatial organization of cellular components. Physiologically relevant studies of nervous system infection using the multiscale biomimetic device demonstrated the functionality of the in vitro platform. We found that Schwann cells participate in axon-to-cell viral spread but appear refractory to infection, exhibiting a multiplicity of infection (MOI) of 1.4 genomes per cell. These results suggest that 3D printing is a valuable approach for the prototyping of a customized model nervous system on a chip technology. PMID:26669842

  14. Advancements in 3D Structural Analysis of Geothermal Systems

    SciTech Connect

    Siler, Drew L; Faulds, James E; Mayhew, Brett; McNamara, David

    2013-06-23

    Robust geothermal activity in the Great Basin, USA is a product of both anomalously high regional heat flow and active fault-controlled extension. Elevated permeability associated with some fault systems provides pathways for circulation of geothermal fluids. Constraining the local-scale 3D geometry of these structures and their roles as fluid flow conduits is crucial in order to mitigate both the costs and risks of geothermal exploration and to identify blind (no surface expression) geothermal resources. Ongoing studies have indicated that much of the robust geothermal activity in the Great Basin is associated with high density faulting at structurally complex fault intersection/interaction areas, such as accommodation/transfer zones between discrete fault systems, step-overs or relay ramps in fault systems, intersection zones between faults with different strikes or different senses of slip, and horse-tailing fault terminations. These conceptualized models are crucial for locating and characterizing geothermal systems in a regional context. At the local scale, however, pinpointing drilling targets and characterizing resource potential within known or probable geothermal areas requires precise 3D characterization of the system. Employing a variety of surface and subsurface data sets, we have conducted detailed 3D geologic analyses of two Great Basin geothermal systems. Using EarthVision (Dynamic Graphics Inc., Alameda, CA) we constructed 3D geologic models of both the actively producing Brady’s geothermal system and a ‘greenfield’ geothermal prospect at Astor Pass, NV. These 3D models allow spatial comparison of disparate data sets in 3D and are the basis for quantitative structural analyses that can aid geothermal resource assessment and be used to pinpoint discrete drilling targets. The relatively abundant data set at Brady’s, ~80 km NE of Reno, NV, includes 24 wells with lithologies interpreted from careful analysis of cuttings and core, a 1

  15. Full-hemisphere automatic optical 3D measurement system

    NASA Astrophysics Data System (ADS)

    Kuehmstedt, Peter; Notni, Gunther; Schreiber, Wolfgang; Gerber, Joerg

    1997-09-01

    The measurement of 3D object shapes for the purpose of digitization of CAD models and for the complete manufacturing control of components is an important task of modern industrial inspection. The proposed 3D measurement system using structured-light illumination has the ability to avoid illumination-caused difficulties, like shadowing and excessive light intensities by light reflection and diffraction at the surface of the object, while measuring technical surfaces. For this purpose, the object under test is successively illuminated with a periodic grating structure from at least three different directions, using a telecentric projection system. At least three linearly independent phase-measurement values are measured by gray-code techniques to calculate the 3D coordinates of the object points. The experimental setup allows the determination of phase-measurement values with illuminations from up to 16 different directions. This is connected with a simultaneous variation of the intensity of the projected grating structures. Thus, areas of shadows are 'shifted' across the object surface to spots where they have no influence on the result of the measurement, and also specular effects can be suppressed. Furthermore, in order to obtain the entire surface, the object to be digitized must be covered by many overlapping views taken from different directions. To view the entire surface, the object is moved into various measuring positions, using a second rotation axis. These views are merged within an object-centered coordinate system and are automatically rearranged into a uniform grid. For this purpose, a calibration procedure has been developed to measure absolute coordinates within a defined object coordinate system, so that the combination of the particular images is simple, because all measurements are performed within the same system of object coordinates. The power of this concept has been experimentally demonstrated, for example, by measuring the complete 3D shape
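
    One standard way to obtain per-pixel phase-measurement values from a projected periodic grating is N-step phase shifting, sketched below; the paper additionally uses gray-code patterns to resolve fringe order, which this illustration omits.

    ```python
    import numpy as np

    def phase_from_shifts(images):
        """Per-pixel wrapped phase from N >= 3 equally phase-shifted grating
        images I_n = A + B*cos(phi + 2*pi*n/N). Gray-code unwrapping of the
        fringe order is not shown here."""
        stack = np.asarray(images, dtype=float)            # shape (N, H, W)
        n = stack.shape[0]
        delta = 2.0 * np.pi * np.arange(n) / n
        num = -(stack * np.sin(delta)[:, None, None]).sum(axis=0)
        den = (stack * np.cos(delta)[:, None, None]).sum(axis=0)
        return np.arctan2(num, den)                        # wrapped to (-pi, pi]

    # synthetic check with a known phase ramp
    H, W, N = 4, 6, 4
    phi_true = np.linspace(0, np.pi, H * W).reshape(H, W)
    imgs = [100 + 50 * np.cos(phi_true + 2 * np.pi * k / N) for k in range(N)]
    phi = phase_from_shifts(imgs)
    ```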

  16. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface for a series of medical visualization tasks.

  17. Electrowetting-based adaptive vari-focal liquid lens array for 3D display

    NASA Astrophysics Data System (ADS)

    Won, Yong Hyub

    2014-10-01

    Electrowetting is a phenomenon in which the surface tension of a liquid can be controlled by an applied voltage. This paper introduces a fabrication method for a liquid lens array based on the electrowetting phenomenon. The fabricated 23 by 23 lens array has a 1 mm lens diameter with a 1.6 mm interval between adjacent lenses. The dioptric power of each lens was -24 to 27 when operated at 0 V to 50 V. The lens array chamber, fabricated by Deep Reactive-Ion Etching (DRIE), is deposited with IZO, parylene C and tantalum oxide. To prevent water penetration and achieve a high dielectric constant, parylene C and tantalum oxide (ɛ = 23 ~ 25) are used, respectively. A hydrophobic surface, which enables contact angles ranging from 60 to 160 degrees, is coated to maximize the electrowetting effect and thus the range of dioptric power. Liquid is injected into each lens chamber in two different ways: the first is self water-oil dosing, which uses a cosolvent and the diffusion effect, while the second is micro-syringe injection exploiting the hydrophobic surface properties. To complete the lens array fabrication, underwater sealing was performed using a UV adhesive that does not dissolve in water. The transient time for changing from a concave to a convex lens was measured to be <33 ms (at an AC drive frequency of 1 kHz). The liquid lens array was then tested, for the first time, for integral imaging to achieve more advanced depth information in 3D images.

  18. Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling.

    PubMed

    Jia, Jia; Liu, Juan; Jin, Guofan; Wang, Yongtian

    2014-09-20

    Occlusion culling is an important process that produces correct depth cues for observers in holographic displays, whereas current methods suffer from occlusion errors or high computational loads. We propose a fast and effective method for occlusion culling based on multiple light-point sampling planes and an inverse orthographic projection technique. Multiple light-point sampling planes are employed to remove the hidden surfaces for each direction of the view of the three-dimensional (3D) scene by forward orthographic projection, and the inverse orthographic projection technique is used to determine the effective sampling points of the 3D scene. A numerical simulation and an optical experiment are performed. The results show that this approach can realize accurate occlusion effects, smooth motion parallax, and continuous depth using low angular sampling without any extra computation costs. PMID:25322109
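
    A much-simplified picture of hidden-surface removal for a single viewing direction is a depth-buffer test over the sampled light points under orthographic projection, as in the sketch below; the paper's multiple sampling planes and inverse orthographic projection step are not reproduced here, and the function and parameters are illustrative assumptions.

    ```python
    import numpy as np

    def visible_points(points, view_dir, cell=0.5):
        """Keep, for one viewing direction, only the sampled light points that are
        nearest to the viewer in each orthographic projection cell (depth-buffer
        style hidden-point removal). `cell` is the transverse sampling pitch."""
        p = np.asarray(points, dtype=float)
        d = np.asarray(view_dir, dtype=float)
        d /= np.linalg.norm(d)
        # build two axes spanning the projection plane orthogonal to d
        a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(d, a); u /= np.linalg.norm(u)
        v = np.cross(d, u)
        depth = p @ d
        keys = np.stack([np.floor(p @ u / cell), np.floor(p @ v / cell)], axis=1)
        best = {}
        for i, (key, z) in enumerate(zip(map(tuple, keys), depth)):
            if key not in best or z < depth[best[key]]:
                best[key] = i
        return p[sorted(best.values())]

    # toy usage: front-most points of a random cloud seen along +z
    pts = np.random.rand(1000, 3)
    front = visible_points(pts, view_dir=[0.0, 0.0, 1.0], cell=0.05)
    ```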

  19. 3D temperature field reconstruction using ultrasound sensing system

    NASA Astrophysics Data System (ADS)

    Liu, Yuqian; Ma, Tong; Cao, Chengyu; Wang, Xingwei

    2016-04-01

    3D temperature field reconstruction is of practical interest to the power, transportation and aviation industries, and it also opens up opportunities for real-time control or optimization of high-temperature fluid or combustion processes. In our paper, a new distributed optical fiber sensing system consisting of a series of elements will be used to generate and receive acoustic signals. This system is the first active temperature field sensing system that features the advantages of optical fiber sensors (distributed sensing capability) and acoustic sensors (non-contact measurement). Signals along multiple paths will be measured simultaneously, enabled by a code division multiple access (CDMA) technique. A proposed Gaussian Radial Basis Functions (GRBF)-based approach can then approximate the temperature field as a finite summation of space-dependent basis functions and time-dependent coefficients. The travel time of the acoustic signals depends on the temperature of the medium. On this basis, the Gaussian functions are integrated along a number of paths which are determined by the number and distribution of sensors. The inversion problem of estimating the unknown parameters of the Gaussian functions can be solved from the measured times-of-flight (ToF) of the acoustic waves and the lengths of the propagation paths using the recursive least squares (RLS) method. The simulation results show an approximation error of less than 2% in 2D and 5% in 3D, respectively. This demonstrates the feasibility and efficiency of our proposed 3D temperature field reconstruction mechanism.
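
    A hedged sketch of the inversion step follows: if the acoustic slowness (the reciprocal of sound speed, which depends on temperature) is expanded in Gaussian radial basis functions, each measured time of flight is approximately linear in the unknown weights once the basis functions are integrated along the path, and the weights can be estimated by recursive least squares. The centres, Gaussian width and synthetic data below are placeholders, and the exact parameterization in the paper may differ.

    ```python
    import numpy as np

    def path_integrals(path_pts, centers, sigma):
        """Numerically integrate each Gaussian basis function along one acoustic
        path given as a polyline of sample points (shape (M, 3))."""
        pts = np.asarray(path_pts, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)           # segment lengths
        mid = 0.5 * (pts[1:] + pts[:-1])                             # segment midpoints
        d2 = ((mid[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (M-1, J)
        g = np.exp(-d2 / (2.0 * sigma ** 2))
        return (g * seg[:, None]).sum(axis=0)                        # (J,)

    class RecursiveLeastSquares:
        """Standard RLS update for measurements t_i = a_i . w + noise."""
        def __init__(self, n_params, p0=1e4):
            self.w = np.zeros(n_params)
            self.P = np.eye(n_params) * p0

        def update(self, a, t):
            a = np.asarray(a, dtype=float)
            k = self.P @ a / (1.0 + a @ self.P @ a)
            self.w += k * (t - a @ self.w)
            self.P -= np.outer(k, a @ self.P)
            return self.w

    # usage with synthetic data: straight paths between random sensor pairs
    rng = np.random.default_rng(0)
    centers = rng.random((20, 3))                       # hypothetical RBF centres
    w_true = rng.random(20)
    rls = RecursiveLeastSquares(len(centers))
    for _ in range(200):
        ends = rng.random((2, 3))
        path = np.linspace(ends[0], ends[1], 50)        # straight ray, 50 samples
        a = path_integrals(path, centers, sigma=0.2)
        rls.update(a, a @ w_true)                       # noise-free synthetic ToF
    w_est = rls.w
    ```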

  20. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  1. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2013-11-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM) and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  2. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  3. The Use Of Computerized Tomographic (CT) Scans For 3-D Display And Prosthesis Construction

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; Woodruff, Tracey J.; Beumer, John

    1985-06-01

    The construction of preformed cranial prostheses for large cranial bony defects is both error prone and time consuming. We discuss a method used for the creation of cranial prostheses from automatically extracted bone contours taken from Computerized Tomographic (CT) scans. Previous methods of prosthesis construction have relied on the making of a mold directly from the region of cranial defect. The use of image processing, bone contour extraction, and three-dimensional display allowed us to create a better fitting prosthesis while reducing patient surgery time. This procedure involves direct bone margin extraction from the digital CT images followed by head model construction from serial plots of the bone margin. Three-dimensional data display is used to verify the integrity of the skull data set prior to model construction. Once created, the model is used to fabricate a custom fitting prosthesis which is then surgically implanted. This procedure is being used with patients in the Maxillofacial Prosthetic Clinic at UCLA and this paper details the technique.

  4. High-power, red-emitting DBR-TPL for possible 3d holographic or volumetric displays

    NASA Astrophysics Data System (ADS)

    Feise, D.; Fiebig, C.; Blume, G.; Pohl, J.; Eppich, B.; Paschke, K.

    2013-03-01

    To create holographic or volumetric displays, it is highly desirable to move from conventional imaging projection displays, where the light is filtered from a constant source, towards flying-spot projection, where the correct amount of light is generated for every pixel. The only light sources available for such an approach, which requires visible, high output power with a spatial resolution beyond conventional lamps, are lasers. When adding the market demands for high electro-optical conversion efficiency, direct electrical modulation capability, compactness, reliability and mass-production compliance, this leaves only semiconductor diode lasers. We present red-emitting tapered diode lasers (TPL) emitting a powerful, visible, nearly diffraction-limited beam (M² (1/e²) < 1.5) and a single longitudinal mode, which are well suited for 3D holographic and volumetric imaging. The TPLs achieved an optical output power in excess of 500 mW in the wavelength range between 633 nm and 638 nm. The simultaneous inclusion of a distributed Bragg reflector (DBR) surface grating provides wavelength selectivity and hence a spectral purity with a width Δλ < 5 pm. These properties allow dense spectral multiplexing to achieve output powers of several watts, which would be required for 3D volumetric display applications.

  5. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching, high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables the capture of a full HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D display. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
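
    The depth relation such an indirect ToF camera relies on can be summarized as depth = c * phase / (4 * pi * f_mod); the sketch below evaluates it together with a common 4-tap phase estimate. The 20 MHz modulation frequency matches the abstract, but the sample values and tap convention are illustrative assumptions.

    ```python
    import numpy as np

    C = 299_792_458.0          # speed of light, m/s

    def tof_depth(a0, a90, a180, a270, f_mod=20e6):
        """Indirect time-of-flight depth from four correlation samples taken at
        0/90/180/270 degree shutter phases (one common 4-tap convention), using
        depth = c * phase / (4 * pi * f_mod). f_mod = 20 MHz matches the shutter
        rate quoted in the abstract; the sample values are illustrative."""
        phase = np.arctan2(a270 - a90, a0 - a180) % (2.0 * np.pi)   # wrapped phase
        depth = C * phase / (4.0 * np.pi * f_mod)
        ambiguity_range = C / (2.0 * f_mod)                         # ~7.5 m at 20 MHz
        return depth, ambiguity_range

    depth, max_range = tof_depth(a0=1200.0, a90=800.0, a180=400.0, a270=900.0)
    ```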

  6. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three-dimensional (3D) display technology that requires no additional equipment. A new system is proposed in this paper which consists of the elemental images of real images in real mode (RIRM) and those of virtual images in real mode (VIRM). The real images in real mode are the same as conventional integral images. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in the elemental images, which can be reconstructed by the lens array in virtual space. In order to reduce the spot size of the reconstructed images, the diffuser used in conventional integral imaging is omitted in the proposed method. The spot size is then nearly 1/20 of that in the conventional system. An optical integral imaging system is constructed to confirm that the proposed method opens a new way for the application of passive 3D display technology.

  7. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    New software (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules that handle management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open-source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems and can provide useful information for tackling the engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  8. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  9. Television Data Display System (TDDS)

    NASA Technical Reports Server (NTRS)

    Sendler, K.

    1972-01-01

    A television data display system at KSC is described which displays computer processed data derived from space vehicle launch and prelaunch tests. The general system capabilities and technical features are discussed in separate sections under the headings of: (1) operational use, (2) system description, (3) computer programs, (4) computer hardware, and (5) adaptability.

  10. An approach to 3D model fusion in GIS systems and its application in a future ECDIS

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Zhao, Depeng; Pan, Mingyang

    2016-04-01

    Three-dimensional (3D) computer graphics technology is widely used in various areas and causes profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand, and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). It is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information Systems (ECDIS). The fusion approach we propose is a common method and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can be from any source. They may be stored in the local cache or retrieved from the Internet, or may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
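
    Step (1), pose adjustment by PCA, can be read as aligning a model's principal axes with the coordinate axes; the short sketch below does this for a point-sampled model. It is a plausible reading of the step, not the authors' code.

    ```python
    import numpy as np

    def pca_pose_adjust(vertices):
        """Centre a point-sampled 3D model and rotate it so that its principal
        axes (eigenvectors of the vertex covariance, largest variance first)
        coincide with the x, y, z axes."""
        v = np.asarray(vertices, dtype=float)
        centred = v - v.mean(axis=0)
        cov = np.cov(centred, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        axes = eigvecs[:, np.argsort(eigvals)[::-1]]      # columns sorted by variance
        if np.linalg.det(axes) < 0:                       # keep a proper rotation
            axes[:, -1] *= -1
        return centred @ axes                             # vertices in canonical pose

    # toy usage: an elongated random model ends up with its long axis along x
    model = np.random.rand(500, 3) * [4.0, 1.0, 0.2]
    canonical = pca_pose_adjust(model)
    ```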

  11. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process for facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  12. Flatbed-type omnidirectional three-dimensional display system using holographic lens array

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideya; Chikayama, Manabu; Yamada, Kenji

    2008-02-01

    We propose an omnidirectional three-dimensional (3D) display system for multiple users as an improved version of our previous thin natural 3D display based on the ray reconstruction method. This is a tool for communication around a 3D image among a small number of people. It is a flatbed-type autostereoscopic 3D display system. It consists of several flat panel displays and several holographic lens array sheets. Its notable feature is the ability to display natural 3D images which are visible to multiple viewers at the same time. Moreover, 3D real images float over the proposed flatbed-type display. Thus, the proposed display allows two or more people surrounding it to simultaneously observe floating 3D images from their own viewpoints. The prototype display consists of two DMD (digital micromirror device) projectors and two holographic lens array sheets. The number of 3D pixels in one holographic lens array sheet is 48 × 96. Reconstructed 3D images are superimposed over the display. Therefore, the prototype can display a floating 3D image whose size is 108 mm × 80.8 mm × 80.8 mm. This paper describes the flatbed-type omnidirectional 3D display system and the experimental results.

  13. Three-Dimensional Air Quality System (3D-AQS)

    NASA Astrophysics Data System (ADS)

    Engel-Cox, J.; Hoff, R.; Weber, S.; Zhang, H.; Prados, A.

    2007-12-01

    The 3-Dimensional Air Quality System (3DAQS) integrates remote sensing observations from a variety of platforms into air quality decision support systems at the U.S. Environmental Protection Agency (EPA), with a focus on particulate air pollution. The decision support systems are the Air Quality System (AQS) / AirQuest database at EPA, Infusing satellite Data into Environmental Applications (IDEA) system, the U.S. Air Quality weblog (Smog Blog) at UMBC, and the Regional East Atmospheric Lidar Mesonet (REALM). The project includes an end user advisory group with representatives from the air quality community providing ongoing feedback. The 3DAQS data sets are UMBC ground based LIDAR, and NASA and NOAA satellite data from MODIS, OMI, AIRS, CALIPSO, MISR, and GASP. Based on end user input, we are co-locating these measurements to the EPA's ground-based air pollution monitors as well as re-gridding to the Community Multiscale Air Quality (CMAQ) model grid. These data provide forecasters and the scientific community with a tool for assessment, analysis, and forecasting of U.S Air Quality. The third dimension and the ability to analyze the vertical transport of particulate pollution are provided by aerosol extinction profiles from the UMBC LIDAR and CALIPSO. We present examples of a 3D visualization tool we are developing to facilitate use of this data. We also present two specific applications of 3D-AQS data. The first is comparisons between PM2.5 monitor data and remote sensing aerosol optical depth (AOD) data, which show moderate agreement but variation with EPA region. The second is a case study for Baltimore, Maryland, as an example of 3D-analysis for a metropolitan area. In that case, some improvement is found in the PM2.5 /LIDAR correlations when using vertical aerosol information to calculate an AOD below the boundary layer.
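
    A hedged sketch of the co-location step: pair each ground monitor with the nearest AOD retrieval and correlate the pairs with the monitor PM2.5 values. The function, distance threshold and toy data are assumptions for illustration; 3D-AQS also regrids to the CMAQ grid, which is not shown.

    ```python
    import numpy as np

    def colocate_and_correlate(monitor_latlon, pm25, aod_latlon, aod, max_km=25.0):
        """Pair each PM2.5 monitor with the nearest AOD retrieval (great-circle
        distance, no regridding) and return the Pearson correlation of the pairs
        lying within `max_km`."""
        R = 6371.0
        m = np.radians(np.asarray(monitor_latlon, float))
        s = np.radians(np.asarray(aod_latlon, float))
        # haversine distance between every monitor and every AOD pixel
        dlat = m[:, None, 0] - s[None, :, 0]
        dlon = m[:, None, 1] - s[None, :, 1]
        h = (np.sin(dlat / 2) ** 2
             + np.cos(m[:, None, 0]) * np.cos(s[None, :, 0]) * np.sin(dlon / 2) ** 2)
        dist = 2 * R * np.arcsin(np.sqrt(h))
        nearest = dist.argmin(axis=1)
        ok = dist[np.arange(len(m)), nearest] <= max_km
        paired_aod = np.asarray(aod, float)[nearest][ok]
        paired_pm = np.asarray(pm25, float)[ok]
        return np.corrcoef(paired_aod, paired_pm)[0, 1]

    # toy data: 5 monitors and 100 AOD pixels over a 1 x 1 degree box
    rng = np.random.default_rng(1)
    monitors = np.column_stack([39 + rng.random(5), -77 + rng.random(5)])
    pixels = np.column_stack([39 + rng.random(100), -77 + rng.random(100)])
    r = colocate_and_correlate(monitors, rng.random(5) * 30, pixels, rng.random(100))
    ```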

  14. DVE flight test results of a sensor enhanced 3D conformal pilot support system

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick

    2015-06-01

    The paper presents results and findings of flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Grounds. During the flight tests ladar information was fused with a priori DB knowledge in real-time and 3D conformal symbology was generated for display on an HMD. The test flights included low level flights as well as numerous brownout landings.

  15. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), an SVS enhances pilot spatial awareness with 3-dimensional perspective views of the objects in the environment. Therefore, all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacle data, and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness, and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency, and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not yet available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large numbers of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  16. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to observe the dose distribution intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction was developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with the Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion between MRI and ultrasound images. Applying the least-squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system was validated on eight patients with prostate cancer. The navigation has passed precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method is superior in preserving the integrality and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus and avoids injury to healthy tissues. During the navigation, surgeons can observe the coordinates of the instruments in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and
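    The abstract mentions a least-squares coordinate registration between the 3D models and the patient via the tracking system; a standard way to solve that step for matched point pairs is the SVD-based rigid fit (Kabsch) sketched below, given purely as a generic illustration rather than the authors' implementation.

```python
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Least-squares rigid transform mapping model_pts onto patient_pts.

    Both inputs are (N, 3) arrays of corresponding fiducial coordinates,
    e.g. points reported by the electromagnetic tracking system.
    Returns rotation R (3x3) and translation t (3,).
    """
    cm, cp = model_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (model_pts - cm).T @ (patient_pts - cp)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ cm
    return R, t

# Quick check: recover a known rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.default_rng(1).uniform(-50, 50, (6, 3))
dst = src @ R_true.T + np.array([10.0, -5.0, 2.5])
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```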

  17. Tri-color composite volume H-PDLC grating and its application to 3D color autostereoscopic display.

    PubMed

    Wang, Kangni; Zheng, Jihong; Gao, Hui; Lu, Feiyue; Sun, Lijia; Yin, Stuart; Zhuang, Songlin

    2015-11-30

    A tri-color composite volume holographic polymer dispersed liquid crystal (H-PDLC) grating and its application to 3-dimensional (3D) color autostereoscopic display are reported in this paper. The composite volume H-PDLC grating consists of three volume H-PDLC sub-gratings with different periods. The longest period diffracts red light, the medium period green light, and the shortest period blue light. To record three gratings of different periods simultaneously, two photoinitiators are employed. The first initiator consists of methylene blue and p-toluenesulfonic acid, and the second is composed of Rose Bengal and N-phenylglycine. In this case, the holographic recording medium is sensitive across the visible spectrum, including red, green, and blue, so that the tri-color composite grating can be written simultaneously by harnessing three laser beams of different colors. In the experiment, the red beam comes from a He-Ne laser with an output wavelength of 632.8 nm, the green beam comes from a Verdi solid state laser with an output wavelength of 532 nm, and the blue beam comes from a He-Cd laser with an output wavelength of 441.6 nm. The experimental results show that the diffraction efficiencies corresponding to red, green, and blue are 57%, 75% and 33%, respectively. Although these diffraction efficiencies are not perfect, they are high enough to demonstrate the effect of 3D color autostereoscopic display. PMID:26698768

  18. Advanced Three-Dimensional Display System

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2005-01-01

    A desktop-scale, computer-controlled display system, initially developed for NASA and now known as the VolumeViewer(TradeMark), generates three-dimensional (3D) images of 3D objects in a display volume. This system differs fundamentally from stereoscopic and holographic display systems: The images generated by this system are truly 3D in that they can be viewed from almost any angle, without the aid of special eyeglasses. It is possible to walk around the system while gazing at its display volume to see a displayed object from a changing perspective, and multiple observers standing at different positions around the display can view the object simultaneously from their individual perspectives, as though the displayed object were a real 3D object. At the time of writing this article, only partial information on the design and principle of operation of the system was available. It is known that the system includes a high-speed, silicon-backplane, ferroelectric-liquid-crystal spatial light modulator (SLM), multiple high-power lasers for projecting images in multiple colors, a rotating helix that serves as a moving screen for displaying voxels [volume cells or volume elements, in analogy to pixels (picture cells or picture elements) in two-dimensional (2D) images], and a host computer. The rotating helix and its motor drive are the only moving parts. Under control by the host computer, a stream of 2D image patterns is generated on the SLM and projected through optics onto the surface of the rotating helix. The system utilizes a parallel pixel/voxel-addressing scheme: All the pixels of the 2D pattern on the SLM are addressed simultaneously by laser beams. This parallel addressing scheme overcomes the difficulty of achieving both high resolution and a high frame rate in a raster scanning or serial addressing scheme. It has been reported that the structure of the system is simple and easy to build, that the optical design and alignment are not difficult, and that the

  19. FY05 Xradia 3D (mu)XCT System Accomplishments

    SciTech Connect

    Martz, Jr., H E; Brown, W D

    2005-08-26

    The Xradia 3D µXCT system was delivered to LLNL on April 5, 2005. The system became operational the week of April 11, 2005. The Xradia 3D µXCT system has been extensively used to scan several high-energy density physics (see Table 1) and other programmatic (NIF, E&E and DNT) materials, components and full assemblies. In this summary we focus only on the HEDP program. X-ray radiographs and tomograms of materials such as aerogel foams and gradient density reservoirs are being used to better understand material synthesis. Radiographs and tomograms of components include a glass capsule encapsulated within a 50-mg/cm³ SiO₂ aerogel foam and then machined to final outer dimensions, while full-up assemblies include low-temperature Rayleigh-Taylor (LoTRT) [Brown, et al. 2005] and DDP targets. We highlight two full-up assembled targets: DDPs and LoTRTs. Representative X-ray digital radiographs are shown in Figures 1 and 2 for the DDP and LoTRT, respectively. The examples clearly show that the assemblies were performed correctly.

  20. 3D in vitro modeling of the central nervous system

    PubMed Central

    Hopkins, Amy M.; DeSimone, Elise; Chwalek, Karolina; Kaplan, David L.

    2015-01-01

    There are currently more than 600 diseases characterized as affecting the central nervous system (CNS) which inflict neural damage. Unfortunately, few of these conditions have effective treatments available. Although significant efforts have been put into developing new therapeutics, drugs which were promising in the developmental phase have high attrition rates in late stage clinical trials. These failures could be circumvented if current 2D in vitro and in vivo models were improved. 3D, tissue-engineered in vitro systems can address this need and enhance clinical translation through two approaches: (1) bottom-up, and (2) top-down (developmental/regenerative) strategies to reproduce the structure and function of human tissues. Critical challenges remain including biomaterials capable of matching the mechanical properties and extracellular matrix (ECM) composition of neural tissues, compartmentalized scaffolds that support heterogeneous tissue architectures reflective of brain organization and structure, and robust functional assays for in vitro tissue validation. The unique design parameters defined by the complex physiology of the CNS for construction and validation of 3D in vitro neural systems are reviewed here. PMID:25461688

  1. Robotic 3D vision solder joint verification system evaluation

    SciTech Connect

    Trent, M.A.

    1992-02-01

    A comparative performance evaluation was conducted between a proprietary inspection system using intelligent 3D vision and manual visual inspection of solder joints. The purpose was to assess the compatibility and correlation of the automated system with current visual inspection criteria. The results indicated that the automated system was more accurate (> 90%) than visual inspection (60--70%) in locating and/or categorizing solder joint defects. In addition, the automated system can offer significant capabilities to characterize and monitor a soldering process by measuring physical attributes, such as solder joint volumes and wetting angles, which are not available through manual visual inspection. A more in-depth evaluation of this technology is recommended.

  2. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view from visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following current trends.

  3. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    NASA Astrophysics Data System (ADS)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology, and this enhancement can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.

  4. Calibration of an intensity ratio system for 3D imaging

    NASA Astrophysics Data System (ADS)

    Tsui, H. T.; Tang, K. C.

    1989-03-01

    An intensity ratio method for 3D imaging is proposed, with an error analysis given for assessment and future improvements. The method is cheap and reasonably fast, as it requires no mechanical scanning or laborious correspondence computation. One drawback of intensity ratio methods, which hampers their widespread use, is the undesirable change of image intensity. This is usually caused by differences in reflection from different parts of an object surface and by the automatic iris or gain control of the camera. In our method, the gray-level patterns used include a uniform pattern, a staircase pattern and a sawtooth pattern, to make the system more robust against errors in the intensity ratio. 3D information about the surface points of an object can be derived from the intensity ratios of the images by triangulation. A reference back plane is placed behind the object to monitor the change in image intensity. Errors due to camera calibration, projector calibration, variations in intensity, imperfection of the slides, etc. are analyzed. Early experiments with the system, using a Newvicon CCTV camera with back-plane intensity correction, gave a mean-square range error of about 0.5 percent. Extensive analysis of the various errors is expected to yield methods for improving the accuracy.
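    As a simplified illustration of the general intensity-ratio idea (not the specific calibration procedure evaluated here), the sketch below converts the per-pixel ratio between a sawtooth-pattern image and a uniform-pattern image into a projector plane angle and triangulates depth against the calibrated camera ray; the planar geometry and all variable names are assumptions.

```python
import numpy as np

def depth_from_intensity_ratio(img_sawtooth, img_uniform, theta_min, theta_max,
                               baseline, cam_angles):
    """Triangulate per-pixel depth from an intensity-ratio image pair.

    img_sawtooth / img_uniform : images under the sawtooth and uniform patterns
    theta_min, theta_max       : projector sweep angles (rad) encoded by the ratio
    baseline                   : camera-projector separation (same units as depth)
    cam_angles                 : per-pixel camera ray angles (rad) from calibration
    """
    ratio = np.clip(img_sawtooth / np.maximum(img_uniform, 1e-6), 0.0, 1.0)
    theta_p = theta_min + ratio * (theta_max - theta_min)   # projector plane angle
    # Planar triangulation: camera and projector rays meet at the surface point.
    z = baseline * np.tan(theta_p) * np.tan(cam_angles) / (
        np.tan(theta_p) + np.tan(cam_angles))
    return z

# Tiny synthetic example (2x2 "images").
saw = np.array([[0.25, 0.5], [0.75, 1.0]])
uni = np.ones((2, 2))
cam = np.full((2, 2), np.deg2rad(80.0))
print(depth_from_intensity_ratio(saw, uni, np.deg2rad(60), np.deg2rad(85),
                                 baseline=100.0, cam_angles=cam))
```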

  5. Inertial Pocket Navigation System: Unaided 3D Positioning

    PubMed Central

    Munoz Diaz, Estefania

    2015-01-01

    Inertial navigation systems use dead-reckoning to estimate the pedestrian's position. There are two types of pedestrian dead-reckoning, the strapdown algorithm and the step-and-heading approach. Unlike the strapdown algorithm, which consists of the double integration of the three orthogonal accelerometer readings, the step-and-heading approach lacks a vertical displacement estimate. We propose the first step-and-heading approach based on unaided inertial data that solves 3D positioning. We present a step detector for steps up and down and a novel vertical displacement estimator. Our navigation system uses a sensor placed in the front pocket of the trousers, a likely location for a smartphone. The proposed algorithms are based on the opening angle of the leg, or pitch angle. We analyzed our step detector and compared it with the state of the art, as well as our previously proposed step length estimator. Lastly, we assessed our vertical displacement estimator in a real-world scenario. We found that our algorithms outperform published step-and-heading algorithms and solve 3D positioning using unaided inertial data. Additionally, we found that with the pitch angle, five activities are distinguishable: standing, sitting, walking, walking up stairs and walking down stairs. This information complements the pedestrian location and is of interest for applications such as elderly care. PMID:25897501
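    The published step detector is not given in the record; the following is a minimal sketch of a pitch-angle-based detector of the same flavour, using simple peak-picking with assumed thresholds (purely illustrative, not the authors' algorithm).

```python
import numpy as np

def detect_steps(pitch_deg, min_swing_deg=15.0, min_gap_samples=50):
    """Count steps as prominent local maxima of the thigh pitch angle.

    pitch_deg : 1-D array of pitch angles (degrees) from the pocket IMU
    min_swing_deg : minimum forward swing above the mean to accept a peak
    min_gap_samples : refractory period so one swing is not counted twice
    """
    baseline = float(np.mean(pitch_deg))
    steps, last = [], -min_gap_samples
    for i in range(1, len(pitch_deg) - 1):
        is_peak = pitch_deg[i] >= pitch_deg[i - 1] and pitch_deg[i] > pitch_deg[i + 1]
        strong = pitch_deg[i] - baseline > min_swing_deg
        if is_peak and strong and i - last >= min_gap_samples:
            steps.append(i)
            last = i
    return steps

# Synthetic walking signal: ~1 Hz leg swing sampled at 100 Hz for 5 s.
t = np.arange(0, 5, 0.01)
pitch = 25.0 * np.sin(2 * np.pi * 1.0 * t) \
        + 2.0 * np.random.default_rng(2).standard_normal(t.size)
print("steps detected:", len(detect_steps(pitch)))   # expect 5
```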

  6. Developmental neurotoxic effects of Malathion on 3D neurosphere system

    PubMed Central

    Salama, Mohamed; Lotfy, Ahmed; Fathy, Khaled; Makar, Maria; El-emam, Mona; El-gamal, Aya; El-gamal, Mohamed; Badawy, Ahmad; Mohamed, Wael M.Y.; Sobh, Mohamed

    2015-01-01

    Developmental neurotoxicity (DNT) refers to the toxic effects induced by various chemicals on the brain during the early childhood period. As the human brain is vulnerable during this period, various chemicals can have significant effects on it. Some toxicants have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty, because available animal models do not cover the whole spectrum of CNS developmental periods. A novel alternative method that can overcome most of the limitations of the conventional techniques is the use of the 3D neurosphere system. This in-vitro system can recapitulate many of the changes during the period of brain development, making it an ideal model for predicting developmental neurotoxic effects. In the present study we verified the possible DNT of Malathion, an organophosphate pesticide with suggested neurotoxic effects on nursing children. Three doses of Malathion (0.25 μM, 1 μM and 10 μM) were applied to cultured neurospheres for a period of 14 days. Malathion was found to affect the proliferation, differentiation and viability of neurospheres, and these effects were positively correlated with dose and exposure time. This study confirms the DNT effects of Malathion in the 3D neurosphere model. Further epidemiological studies will be needed to link these results to human exposure and effects data. PMID:27054080

  7. Inertial Pocket Navigation System: Unaided 3D Positioning.

    PubMed

    Diaz, Estefania Munoz

    2015-01-01

    Inertial navigation systems use dead-reckoning to estimate the pedestrian's position. There are two types of pedestrian dead-reckoning, the strapdown algorithm and the step-and-heading approach. Unlike the strapdown algorithm, which consists of the double integration of the three orthogonal accelerometer readings, the step-and-heading approach lacks a vertical displacement estimate. We propose the first step-and-heading approach based on unaided inertial data that solves 3D positioning. We present a step detector for steps up and down and a novel vertical displacement estimator. Our navigation system uses a sensor placed in the front pocket of the trousers, a likely location for a smartphone. The proposed algorithms are based on the opening angle of the leg, or pitch angle. We analyzed our step detector and compared it with the state of the art, as well as our previously proposed step length estimator. Lastly, we assessed our vertical displacement estimator in a real-world scenario. We found that our algorithms outperform published step-and-heading algorithms and solve 3D positioning using unaided inertial data. Additionally, we found that with the pitch angle, five activities are distinguishable: standing, sitting, walking, walking up stairs and walking down stairs. This information complements the pedestrian location and is of interest for applications such as elderly care. PMID:25897501

  8. Dynamical Systems Analysis of Fully 3D Ocean Features

    NASA Astrophysics Data System (ADS)

    Pratt, L. J.

    2011-12-01

    Dynamical systems analysis of transport and stirring processes has been developed most thoroughly for 2D flow fields. The calculation of manifolds, turnstile lobes, transport barriers, etc. based on observations of the ocean is most often conducted near the sea surface, whereas analyses at depth, usually carried out with model output, are normally confined to constant-z surfaces. At the mesoscale and larger, ocean flows are quasi-2D, but smaller scale (submesoscale) motions, including mixed layer phenomena with significant vertical velocity, may be predominantly 3D. The zoology of hyperbolic trajectories becomes richer in such cases and their attendant manifolds are much more difficult to calculate. I will describe some of the basic geometrical features and corresponding Lagrangian coherent features expected to arise in upper ocean fronts, eddies, and Langmuir circulations. Traditional GFD models such as the rotating can flow may capture the important generic features. The dynamical systems approach is most helpful when these features are coherent and persistent; the implications and difficulties of this requirement in fully 3D flows will also be discussed.

  9. Modeling moving systems with RELAP5-3D

    DOE PAGESBeta

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; Kyle, Matt R.

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.

  10. Modeling moving systems with RELAP5-3D

    SciTech Connect

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; Kyle, Matt R.

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
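    The frame kinematics referred to in both records combine translational and rotational accelerations of the moving craft; the sketch below simply evaluates the familiar fictitious-acceleration terms (frame translation, Euler, Coriolis, centrifugal) for a fluid parcel, as an illustration of that bookkeeping rather than RELAP5-3D input or source code.

```python
import numpy as np

def apparent_acceleration(a_frame, omega, omega_dot, r, v_rel):
    """Extra body-force acceleration seen in a rotating, translating frame.

    a_frame   : translational acceleration of the frame origin (m/s^2)
    omega     : angular velocity of the frame (rad/s)
    omega_dot : angular acceleration of the frame (rad/s^2)
    r         : parcel position in the moving frame (m)
    v_rel     : parcel velocity relative to the moving frame (m/s)
    Returns the acceleration added to gravity in the frame-attached equations:
    -(a_frame + omega_dot x r + 2 omega x v_rel + omega x (omega x r))
    """
    euler = np.cross(omega_dot, r)
    coriolis = 2.0 * np.cross(omega, v_rel)
    centrifugal = np.cross(omega, np.cross(omega, r))
    return -(a_frame + euler + coriolis + centrifugal)

# Example: a craft rolling at 0.5 rad/s while accelerating forward at 2 m/s^2.
print(apparent_acceleration(a_frame=np.array([2.0, 0.0, 0.0]),
                            omega=np.array([0.5, 0.0, 0.0]),
                            omega_dot=np.zeros(3),
                            r=np.array([0.0, 1.0, 0.0]),
                            v_rel=np.array([0.0, 0.0, 1.0])))
```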

  11. 3D Additive Construction with Regolith for Surface Systems

    NASA Technical Reports Server (NTRS)

    Mueller, Robert P.

    2014-01-01

    Planetary surface exploration on asteroids, the Moon, Mars and the Martian moons will require the stabilization of loose, fine, dusty regolith to avoid the effects of vertical lander rocket plume impingement, to keep abrasive and harmful dust from being lofted, and to allow dust-free operations. In addition, the same regolith stabilization process can be used for 3-dimensional (3D) printing, additive construction techniques, by repeating the 2D stabilization in many vertical layers. This will allow in-situ construction with regolith so that materials will not have to be transported from Earth. Recent work in the NASA Kennedy Space Center (KSC) Surface Systems Office (NE-S) Swamp Works and at the University of Southern California (USC) under two NASA Innovative Advanced Concept (NIAC) awards has shown promising results with regolith (crushed basalt rock) materials for in-situ heat shields, bricks, landing/launch pads, berms, roads, and other structures that could be fabricated using regolith that is sintered or mixed with a polymer binder. The technical goals and objectives of this project are to prove the feasibility of 3D printing additive construction using planetary regolith simulants and to show that the products have structural integrity and practical applications in space exploration.

  12. Code System to Simulate 3D Tracer Dispersion in Atmosphere.

    2002-01-25

    Version 00 SHREDI is a shielding code system which executes removal-diffusion computations for bi-dimensional shields in r-z or x-y geometries. It may also deal with monodimensional problems (infinitely high cylinders or slabs). MESYST can simulate 3D tracer dispersion in the atmosphere. Three programs are part of this system: CRE_TOPO prepares the terrain data for MESYST; NOABL calculates three-dimensional free-divergence windfields over complex terrain; PAS computes tracer concentrations and depositions on a given domain. The purpose of this work is to develop a reliable simulation tool for pollutant atmospheric dispersion, which gives a realistic approach and allows one to compute pollutant concentrations over complex terrain with good accuracy. The fractional Brownian model, which furnishes more accurate concentration values, is introduced to calculate pollutant atmospheric dispersion. The model was validated against the SIESTA international experiments.

  13. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and other tasks. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract the geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected onto a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to the surface shape provides sufficient 3D coordinate information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns to be projected onto the object of interest. Akin to the Shannon-Nyquist sampling theorem, we derive the minimum number of circular patterns which sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g., the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e., color information is given a priori only along the curves). An interpolation is carried out to complete the photometric reconstruction. The object can only be approximately reconstructed, because the minimum number of patterns may not exactly reconstruct the original object, but the result does not show considerable information loss, and the performance of the approximate reconstruction is evaluated by performing recognition or classification. In object recognition, we use facial curves, which are the deformed circular curves (patterns) on a target object. We simply carry out comparison between the
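    The dissertation's actual bound is not reproduced in the record; the toy calculation below only illustrates the Nyquist-style reasoning, treating the reciprocal of the highest surface curvature as the smallest feature scale and requiring two pattern samples per such scale (the formula and numbers are assumptions, not the derived criterion).

```python
import math

def min_pattern_count(radial_extent_mm, max_curvature_per_mm, samples_per_feature=2):
    """Nyquist-style lower bound on the number of concentric circular patterns.

    Treats 1 / max_curvature as the smallest surface feature scale and asks for
    at least `samples_per_feature` pattern crossings per such scale across the
    object's radial extent. Illustrative only, not the dissertation's formula.
    """
    smallest_feature_mm = 1.0 / max_curvature_per_mm
    return math.ceil(samples_per_feature * radial_extent_mm / smallest_feature_mm)

# A face-sized target (radial extent ~100 mm) with sharpest curvature ~0.2 1/mm.
print(min_pattern_count(100.0, 0.2))   # -> 40 patterns under these assumptions
```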

  14. Large TV display system

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1986-01-01

    A relatively small and low cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.

  15. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and of the context information of these objects. In addition, the 3D model is the navigation environment preferred by users according to the survey. Therefore, a 3D model based indoor navigation system was developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, which are built to support the localization, navigation and visualization functions of the system. There are three main strengths of this system: it stores all the data needed in one database and processes most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, performs well when visualizing the 3D model on a mobile display.

  16. ARCHAEO-SCAN: Portable 3D shape measurement system for archaeological field work

    NASA Astrophysics Data System (ADS)

    Knopf, George K.; Nelson, Andrew J.

    2004-10-01

    Accurate measurement and thorough documentation of excavated artifacts are the essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and geometry created in a virtual environment, the scientist can manipulate the pieces in a virtual reality environment to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small to medium sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration and surface reconstruction algorithms are then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.

  17. Virtual surgical operation system using volume scanning display

    NASA Astrophysics Data System (ADS)

    Kameyama, Ken-ichi; Ohtomi, Koichi; Ohhashi, Akinami; Iseki, Hiroshi; Kobayashi, Naotoshi; Takakura, Kintomo

    1994-05-01

    This paper describes an interactive 3-D display system for supporting image-guided surgery. Different from conventional CRT-based medical display systems, this one can provide true 3- D images of the patient's anatomical structures in a physical 3-D space. Furthermore, various tools for view control, target definition, and simple treatment simulation, have been developed and can be used for directly manipulating these images. This feature is very useful for a surgeon to intuitively recognize the precise position of a lesion and other structures and to plan a more accurate treatment. The hardware system is composed of a volume scanning 3-D display for 3-D real image presentation, a 3-D wireless mouse for direct manipulation in a 3-D space, and a workstation for the data control of these devices. The software is for analyzing X-CT, MRI, or SPECT images and for organizing the tools for treatment planning. The system is currently aimed at being used for stereotactic neurosurgical operations.

  18. Study on portable optical 3D coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital close-range photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three high-stability infrared LEDs are mounted on a hand-held target to provide measurement features and to establish the target coordinate system. Field calibration based on ray intersection is performed for the convergent binocular measurement system, composed of two cameras, using a reference ruler. The hand-held target, controlled by Bluetooth wireless communication, is moved freely to carry out contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained by the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball and residual error correction, the object point can be resolved by transfer of axes, using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and high degree of automation. Tests show that the measuring precision is close to ±0.1 mm/m.
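    The binocular stereo step described here is, generically, a two-view triangulation; the sketch below shows the standard linear (DLT) triangulation of one LED image point given two calibrated camera projection matrices, as a textbook illustration rather than the authors' exact solver.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2   : 3x4 camera projection matrices from calibration
    uv1, uv2 : pixel coordinates (u, v) of the same LED in each image
    Returns the 3D point in the common (world) coordinate system.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: two cameras 500 mm apart, both looking down +z.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-500.0], [0.0], [0.0]])])
X_true = np.array([120.0, -40.0, 2000.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(np.round(triangulate(P1, P2, uv1, uv2), 3))   # ~[120, -40, 2000]
```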

  19. 3D active stabilization system with sub-micrometer resolution.

    PubMed

    Kursu, Olli; Tuukkanen, Tuomas; Rahkonen, Timo; Vähäsöyrinki, Mikko

    2012-01-01

    Stable positioning between a measurement probe and its target from sub- to few micrometer scales has become a prerequisite in precision metrology and in cellular level measurements from biological tissues. Here we present a 3D stabilization system based on an optoelectronic displacement sensor and custom piezo-actuators driven by a feedback control loop that constantly aims to zero the relative movement between the sensor and the target. We used simulations and prototyping to characterize the developed system. Our results show that 95% attenuation of movement artifacts is achieved at 1 Hz with stabilization performance declining to ca. 70% attenuation at 10 Hz. Stabilization bandwidth is limited by mechanical resonances within the displacement sensor that occur at relatively low frequencies, and are attributable to the sensor's high force sensitivity. We successfully used brain derived micromotion trajectories as a demonstration of complex movement stabilization. The micromotion was reduced to a level of ∼1 µm with nearly 100 fold attenuation at the lower frequencies that are typically associated with physiological processes. These results, and possible improvements of the system, are discussed with a focus on possible ways to increase the sensor's force sensitivity without compromising overall system bandwidth. PMID:22900045
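    The control loop described aims to null the measured relative displacement; a minimal digital feedback sketch of that flavour is shown below, with the loop gain, loop rate, and toy drift model chosen purely for illustration (not the published controller or its piezo driver interface).

```python
import math

def stabilize(read_displacement_um, move_piezo_um, gain=0.5, n_steps=3000):
    """Feedback loop that constantly nudges the piezo to zero the residual motion."""
    for _ in range(n_steps):
        residual = read_displacement_um()      # relative sensor-target motion
        move_piezo_um(gain * residual)         # integral-style correction

# Toy plant: the target drifts sinusoidally (3 um at 1 Hz, 1 kHz loop rate);
# moving the piezo subtracts from the measured displacement.
state = {"t": 0.0, "piezo": 0.0}
def read():
    state["t"] += 0.001
    return 3.0 * math.sin(2 * math.pi * state["t"]) - state["piezo"]
def move(delta):
    state["piezo"] += delta

stabilize(read, move)
print(f"residual after settling: {abs(read()):.3f} um")   # well below 1 um
```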

  20. 3D Active Stabilization System with Sub-Micrometer Resolution

    PubMed Central

    Rahkonen, Timo; Vähäsöyrinki, Mikko

    2012-01-01

    Stable positioning between a measurement probe and its target from sub- to few micrometer scales has become a prerequisite in precision metrology and in cellular level measurements from biological tissues. Here we present a 3D stabilization system based on an optoelectronic displacement sensor and custom piezo-actuators driven by a feedback control loop that constantly aims to zero the relative movement between the sensor and the target. We used simulations and prototyping to characterize the developed system. Our results show that 95 % attenuation of movement artifacts is achieved at 1 Hz with stabilization performance declining to ca. 70 % attenuation at 10 Hz. Stabilization bandwidth is limited by mechanical resonances within the displacement sensor that occur at relatively low frequencies, and are attributable to the sensor's high force sensitivity. We successfully used brain derived micromotion trajectories as a demonstration of complex movement stabilization. The micromotion was reduced to a level of ∼1 µm with nearly 100 fold attenuation at the lower frequencies that are typically associated with physiological processes. These results, and possible improvements of the system, are discussed with a focus on possible ways to increase the sensor's force sensitivity without compromising overall system bandwidth. PMID:22900045

  1. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing

    NASA Astrophysics Data System (ADS)

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient.

  2. GeoCube: A 3D mineral resources quantitative prediction and assessment system

    NASA Astrophysics Data System (ADS)

    Li, Ruixi; Wang, Gongwen; Carranza, Emmanuel John Muico

    2016-04-01

    This paper introduces a software system (GeoCube) for three-dimensional (3D) extraction and integration of exploration criteria from spatial data. The software system contains four key modules: (1) import and export, supporting many formats from commercial 3D geological modeling software and offering various export options; (2) pre-processing, containing basic statistics and fractal/multifractal methods (the concentration-volume (C-V) fractal method) for the extraction of exploration criteria from spatial data (i.e., the separation of geological, geochemical and geophysical anomalies from background values in 3D space); (3) assessment, supporting five data-driven integration methods (viz., information entropy, logistic regression, ordinary weights of evidence, weighted weights of evidence, and boost weights of evidence) for the integration of exploration criteria; and (4) post-processing, for classifying integration outcomes into several levels based on mineralization potential. The Nanihu Mo (W) camp (5.0 km × 4.0 km × 2.7 km) of the Luanchuan region was used as a case study. The results show that GeoCube can enhance the use of 3D geological modeling to store, retrieve, process, display, analyze and integrate exploration criteria. Furthermore, the ordinary weights of evidence, boost weights of evidence and logistic regression methods showed superior performance as integration tools for exploration targeting in this case study.
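    Ordinary weights of evidence, one of the five listed integration methods, has a standard closed form; the sketch below computes W+ and W- for a single binary evidence layer against known mineralized voxels, as a generic textbook computation with hypothetical counts rather than GeoCube code.

```python
import math

def weights_of_evidence(n_total, n_deposit, n_evidence, n_both):
    """Ordinary weights of evidence for one binary evidence layer.

    n_total    : total number of voxels (cells) in the study volume
    n_deposit  : voxels containing known mineralization
    n_evidence : voxels where the evidence layer is present
    n_both     : voxels with both evidence and mineralization
    Returns (W_plus, W_minus, contrast C = W_plus - W_minus).
    """
    # P(evidence | deposit) vs P(evidence | no deposit)
    p_e_d = n_both / n_deposit
    p_e_nd = (n_evidence - n_both) / (n_total - n_deposit)
    w_plus = math.log(p_e_d / p_e_nd)
    # P(no evidence | deposit) vs P(no evidence | no deposit)
    w_minus = math.log((1.0 - p_e_d) / (1.0 - p_e_nd))
    return w_plus, w_minus, w_plus - w_minus

# Hypothetical counts for one geochemical-anomaly layer in a voxel model.
print(weights_of_evidence(n_total=100000, n_deposit=400, n_evidence=12000, n_both=180))
```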

  3. Repositioning accuracy of two different mask systems-3D revisited: Comparison using true 3D/3D matching with cone-beam CT

    SciTech Connect

    Boda-Heggemann, Judit . E-mail: judit.boda-heggemann@radonk.ma.uni-heidelberg.de; Walter, Cornelia; Rahn, Angelika; Wertz, Hansjoerg; Loeb, Iris; Lohr, Frank; Wenz, Frederik

    2006-12-01

    Purpose: The repositioning accuracy of mask-based fixation systems has been assessed with two-dimensional/two-dimensional or two-dimensional/three-dimensional (3D) matching. We analyzed the accuracy of commercially available head mask systems, using true 3D/3D matching, with X-ray volume imaging and cone-beam CT. Methods and Materials: Twenty-one patients receiving radiotherapy (intracranial/head-and-neck tumors) were evaluated (14 patients with rigid and 7 with thermoplastic masks). X-ray volume imaging was analyzed online and offline separately for the skull and neck regions. Translation/rotation errors of the target isocenter were analyzed. Four patients were treated to neck sites. For these patients, repositioning was aided by additional body tattoos. A separate analysis of the setup error on the basis of the registration of the cervical vertebra was performed. The residual error after correction and intrafractional motility were calculated. Results: The mean length of the displacement vector for rigid masks was 0.312 ± 0.152 cm (intracranial) and 0.586 ± 0.294 cm (neck). For the thermoplastic masks, the value was 0.472 ± 0.174 cm (intracranial) and 0.726 ± 0.445 cm (neck). Rigid masks with body tattoos had a displacement vector length in the neck region of 0.35 ± 0.197 cm. The intracranial residual error and intrafractional motility after X-ray volume imaging correction for rigid masks was 0.188 ± 0.074 cm, and was 0.134 ± 0.14 cm for thermoplastic masks. Conclusions: The results of our study have demonstrated that rigid masks have a high intracranial repositioning accuracy per se. Given the small residual error and intrafractional movement, thermoplastic masks may also be used for high-precision treatments when combined with cone-beam CT. The neck region repositioning accuracy was worse than the intracranial accuracy in both cases. However, body tattoos and image guidance improved the accuracy. Finally, the combination of both mask

  4. Multifunction display system, volume 1

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design and construction of a multifunction display man/machine interface for use with a 4 pi IBM-360 System are described. The system is capable of displaying superimposed volatile alphanumeric and graphical data on a 512 x 512 element plasma panel, and holographically stored multicolor archival information. The volatile data may be entered from a keyboard or by means of an I/O interface to the 360 system. A 2-page memory local to the display is provided for storing the entered data. The archival data is stored as a phase hologram on a vinyl tape strip. This data is accessible by means of a rapid transport system which responds to inputs provided by the I/O channel on the keyboard. As many as 500 frames may be stored on a tape strip for access in under 6 seconds.

  5. 3D cutting tool inspection system and its key technologies

    NASA Astrophysics Data System (ADS)

    Du, X. M.; Chen, T.; Zou, X. J.; Harding, K. G.

    2009-08-01

    Cutting tools are an essential component used in manufacturing parts for different products. Many cutting tools are manufactured with complex geometric shapes and sharp and/or curved edges. As such, maintaining quality control of cutting tools during their fabrication may be essential to controlling the quality of components manufactured using the cutting tools. In this paper, a 3D cutting tool inspection system is presented. The architecture of the system, the cutter inspection workflow and some key technologies are discussed. The key technologies involve two aspects. The first aspect is the system's extrinsic self-calibration method for ensuring system accuracy. This paper will elaborate on how to calibrate the orientation and location of the rotary stage in the coordinate system, including the relative relationship between the axis of the chuck used to hold the tool and the rotary axis used to position the tool, along with the relative relationship between the Z stage and the rotary axis. Further, this paper will analyze self-calibration solutions for separately correcting the squareness error of the optical measuring beam and the alignment error between a side scan and a tip scan. The second aspect this paper will address is a method of scan planning for automatic and effective data collection. Tool measurement planning plays a big role in saving tool measurement time and improving data accuracy, as well as ensuring data completeness. This paper will present a round-part oriented measurement method that includes coarse/fine section scans that aim at capturing 2D section geometry in a progressive manner, covering the key sharp/curved edge areas, and a side helical scan combined with a tip round scan for shape-simulated full geometry capture. Finally, this paper will present experimental results and some field test data.

  6. 3D transrectal ultrasound prostate biopsy using a mechanical imaging and needle-guidance system

    NASA Astrophysics Data System (ADS)

    Bax, Jeffrey; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Gil, Elena; Bluvol, Jeremy; Knight, Kerry; Smith, David; Romagnoli, Cesare; Fenster, Aaron

    2008-03-01

    Prostate biopsy procedures are generally limited to 2D transrectal ultrasound (TRUS) imaging for biopsy needle guidance. This limitation results in needle position ambiguity and an insufficient record of biopsy core locations in cases of prostate re-biopsy. We have developed a multi-jointed mechanical device that supports a commercially available TRUS probe with an integrated needle guide for precision prostate biopsy. The device is fixed at the base, allowing the joints to be manually manipulated while fully supporting its weight throughout its full range of motion. Means are provided to track the needle trajectory and display this trajectory on a corresponding TRUS image. This allows the physician to aim the needle-guide at predefined targets within the prostate, providing true 3D navigation. The tracker has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe to generate 3D images. The tracker reduces the variability associated with conventional hand-held probes, while preserving user familiarity and procedural workflow. In a prostate phantom, biopsy needles were guided to within 2 mm of their targets, and the 3D location of the biopsy core was accurate to within 3 mm. The 3D navigation system is validated in the presence of prostate motion in a preliminary patient study.

  7. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable, fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position due to sensor placement restrictions over the body, and optical sensing requires a restricted indoor environment to track body movements. Here we present a 52-sensor laden garment interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution for interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt; the signals were then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive, and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  8. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of the anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at more than 75 frames per second.

  9. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's confidence before the flight mission and accordingly improve flight safety. The system is also useful for validating visual flight procedure designs.

  10. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences for vertical overestimation between display media are influenced more by the perceived distal object size rather than by the dimensionality of the display.

  11. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    NASA Astrophysics Data System (ADS)

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We successfully solve the problem in DFD displays that the maximum depth difference between the front and rear planes is limited, because beyond it the front and rear images can no longer be fused into one 3-D image. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes was increased. When the distance was large enough, the perceived depth was near the front plane at 0-40% of rear luminance and near the rear plane at 60-100% of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The change in the perceived-depth dependence was evaluated when the high-frequency components of the front and rear images were cut off using a Fourier transform, at front-rear plane separations of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components are not cut off sufficiently at a separation of 5 cm, the perceived depth splits between near the front plane and near the rear plane. However, when the images are blurred enough by cutting the high-frequency components, the perceived depth has a linear dependence on the luminance ratio. When the images are not blurred at a separation of 10 cm, the perceived depth is near the front plane at 0-30% of rear luminance, near the rear plane at 80-100%, and near the midpoint at 40-70%. However, when the images are blurred enough, the perceived depth again has a linear dependence on the luminance ratio.
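    To first order, the luminance-ratio dependence reported here is a linear interpolation between the front- and rear-plane depths; the sketch below encodes that relation, together with an FFT low-pass standing in for the paper's high-frequency cut-off (the linear model and the cutoff handling are simplifying assumptions).

```python
import numpy as np

def perceived_depth(front_luminance, rear_luminance, plane_separation_cm):
    """Linear DFD relation: depth moves from the front toward the rear plane as
    the rear-image share of the total luminance grows."""
    ratio = rear_luminance / (front_luminance + rear_luminance)
    return ratio * plane_separation_cm      # 0 = front plane, max = rear plane

def lowpass(image, cutoff_cycles):
    """Cut off high spatial-frequency components with an FFT mask, standing in
    for the blurring used to keep the two planes fusible at larger separations."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    radius = np.hypot(y - ny // 2, x - nx // 2)
    F[radius > cutoff_cycles] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# 60 % of the luminance on the rear plane, planes 10 cm apart -> depth ~6 cm.
print(perceived_depth(front_luminance=0.4, rear_luminance=0.6, plane_separation_cm=10.0))
```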

  12. 3D characterization of the Astor Pass geothermal system, Nevada

    SciTech Connect

    Mayhew, Brett; Faulds, James E

    2013-10-19

    The Astor Pass geothermal system resides in the northwestern part of the Pyramid Lake Paiute Reservation, on the margins of the Basin and Range and Walker Lane tectonic provinces in northwestern Nevada. Seismic reflection interpretation, detailed analysis of well cuttings, stress field analysis, and construction of a 3D geologic model have been used in the characterization of the stratigraphic and structural framework of the geothermal area. The area is primarily comprised of middle Miocene Pyramid sequence volcanic and sedimentary rocks, nonconformably overlying Mesozoic metamorphic and granitic rocks. Wells drilled at Astor Pass show a ~1 km thick section of highly transmissive Miocene volcanic reservoir with temperatures of ~95°C. Seismic reflection interpretation confirms a high fault density in the geothermal area, with many possible fluid pathways penetrating into the relatively impermeable Mesozoic basement. Stress field analysis using borehole breakout data reveals a complex transtensional faulting regime with a regionally consistent west-northwest-trending least principal stress direction. Considering possible strike-slip and normal stress regimes, the stress data were utilized in a slip and dilation tendency analysis of the fault model, which suggests two promising fault areas controlling upwelling geothermal fluids. Both of these fault intersection areas show positive attributes for controlling geothermal fluids, but hydrologic tests show the ~1 km thick volcanic section is highly transmissive. Thus, focused upwellings along discrete fault conduits may be confined to the Mesozoic basement before fluids diffuse into the Miocene volcanic reservoir above. This large diffuse reservoir in the faulted Miocene volcanic rocks is capable of sustaining high pump rates. Understanding this type of system may be helpful in examining large, permeable reservoirs in deep sedimentary basins of the eastern Basin and Range and the highly fractured volcanic geothermal

  13. 3D real holographic image movies are projected into a volumetric display using dynamic digital micromirror device (DMD) holograms.

    NASA Astrophysics Data System (ADS)

    Huebschman, Michael L.; Hunt, Jeremy; Garner, Harold R.

    2006-04-01

    The Texas Instruments Digital Micromirror Device (DMD) is being used as the recording medium for display of pre-calculated digital holograms. The high intensity throughput of the reflected laser light from DMD holograms enables volumetric display of projected real images as well as virtual images. A single DMD and single laser projector system has been designed to reconstruct projected images in a 6''x 6''x 4.5'' volumetric display. The volumetric display is composed of twenty-four, 6''-square, PSCT liquid crystal plates which are each cycled on and off to reduce unnecessary scatter in the volume. The DMD is an XGA format array, 1024x768, with 13.6 micron pitch mirrors. This holographic projection system has been used in the assessment of hologram image resolution, maximum image size, optical focusing of the real image, image look-around, and physiological depth cues. Dynamic movement images are projected by transferring the appropriately sequenced holograms to the DMD at movie frame rates.
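
    For context, the sketch below shows a generic point-source method for computing a binary amplitude hologram matched to the DMD geometry quoted above (1024x768 mirrors, 13.6 micron pitch). It is a hedged illustration of the general technique, not the authors' pre-calculation algorithm; the wavelength, the thresholding against an on-axis plane-wave reference, and all names are assumptions.

```python
# Hedged sketch of a binary point-source hologram for a DMD (generic method,
# not necessarily the authors' algorithm). All parameters are illustrative.
import numpy as np

def binary_point_hologram(points, wavelength=532e-9, pitch=13.6e-6,
                          nx=1024, ny=768):
    """points : (N, 3) array of (x, y, z) object points in metres,
    z measured from the DMD plane. Returns a 0/1 mirror pattern (ny, nx)."""
    k = 2 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)                    # DMD plane coordinates
    field = np.zeros_like(X, dtype=complex)
    for px, py, pz in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r           # spherical wave per point
    # Interfere with an on-axis plane-wave reference and binarize to 0/1 mirrors.
    return (np.real(field) > 0).astype(np.uint8)
```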

  14. Information retrieval and display system

    NASA Technical Reports Server (NTRS)

    Groover, J. L.; King, W. L.

    1977-01-01

    Versatile command-driven data management system offers users, through simplified command language, a means of storing and searching data files, sorting data files into specified orders, performing simple or complex computations, effecting file updates, and printing or displaying output data. Commands are simple to use and flexible enough to meet most data management requirements.

  15. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instructions for using the code. Several examples are described in detail.

  16. Real-time three-dimensional pickup and display system based on integral photography

    NASA Astrophysics Data System (ADS)

    Okano, Fumio; Arai, Jun; Hoshino, Haruo; Yuyama, Ichiro

    1998-12-01

    A real-time three-dimensional (3-D) pickup and display setup called a real-time IP system is proposed. In this system, erect real images of an object are formed by a GRIN lens array as elemental images and are directly captured by a television camera. The video signal of the group of elemental images is transmitted to a display device that combines a liquid crystal panel and a convex micro-lens array, producing a color 3-D image in real time. Full-color, autostereoscopic 3-D images with full parallax can be observed. We confirmed the feasibility of this 3-D television system.

  17. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display to float virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in a circumferential direction without the use of high speed or a moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives as if the animated figures were present. PMID:27410336

  18. 3D reconstruction of tropospheric cirrus clouds by stereovision system

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid

    2016-07-01

    A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. These clouds are located in the high troposphere and sometimes in the lower stratosphere, between 6 and 10 km high. Two simultaneous images of the same scene are taken with Canon (400D) cameras at two sites 37 km apart. Each image is processed to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC, Zero-mean Normalized Cross-Correlation) or a difference measure (ZSSD, Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besançon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besançon. 3D maps of natural cirrus clouds and artificial ones such as aircraft trails are retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter was located at 8.5 ± 1 km on June 11.
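
    A minimal sketch of the ZNCC matching score mentioned above, assuming two already-extracted image windows of equal size; window selection, the epipolar search, and the ZSSD alternative are left out, and the function name is illustrative.

```python
# Hedged sketch of zero-mean normalized cross-correlation (ZNCC) between two
# image windows; 1.0 indicates a perfect match.
import numpy as np

def zncc(window_a, window_b):
    a = window_a.astype(float) - window_a.mean()
    b = window_b.astype(float) - window_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0
```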

  19. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
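
    The gain calculation is not specified in detail in the abstract; the sketch below shows standard two-speaker amplitude panning (vector-base style) with constant-power normalization as a hedged stand-in. Speaker angles, the function name, and the clipping of negative gains are illustrative assumptions.

```python
# Hedged sketch of amplitude panning between an adjacent speaker pair,
# a standard technique consistent with (but not necessarily identical to)
# the gain calculation described above. Angles are in radians.
import numpy as np

def pair_gains(source_angle, spk_left_angle, spk_right_angle):
    """Solve g_l * L + g_r * R = source direction (2D vector-base panning),
    then normalize so that g_l**2 + g_r**2 = 1 (constant perceived power)."""
    L = np.array([np.cos(spk_left_angle),  np.sin(spk_left_angle)])
    R = np.array([np.cos(spk_right_angle), np.sin(spk_right_angle)])
    s = np.array([np.cos(source_angle),    np.sin(source_angle)])
    gains = np.linalg.solve(np.column_stack([L, R]), s)
    gains = np.clip(gains, 0.0, None)        # no negative (out-of-pair) gains
    return gains / np.linalg.norm(gains)

g_left, g_right = pair_gains(np.radians(10), np.radians(30), np.radians(-30))
```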

  20. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint datasets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
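
    As a hedged illustration of the fringe-analysis step, the sketch below recovers the wrapped phase from N equally shifted sinusoidal fringe images, a standard ingredient of fringe projection profilometry. The paper's optimum three-fringe-number selection, color-channel handling, and calibration are not reproduced, and all names are assumptions.

```python
# Hedged sketch of N-step phase-shifting analysis: recover the wrapped phase
# per pixel from N frames with phase shifts 2*pi*n/N.
import numpy as np

def wrapped_phase(images):
    """images : array-like of N frames (N, H, W) with shifts 2*pi*n/N.
    Returns the wrapped phase in (-pi, pi] per pixel."""
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = (images * np.sin(shifts)[:, None, None]).sum(axis=0)
    den = (images * np.cos(shifts)[:, None, None]).sum(axis=0)
    return -np.arctan2(num, den)
```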

  1. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system for capturing stereo images with two CMOS camera modules. We use WinCE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm that adjusts and synthesizes the disparity values of a region of interest (ROI) in real time. We discuss the aperture pattern for deblurring the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking part of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to verify the validity of the ROI emphasis effect.

  2. System and method for 3D printing of aerogels

    DOEpatents

    Worsley, Marcus A.; Duoss, Eric; Kuntz, Joshua; Spadaccini, Christopher; Zhu, Cheng

    2016-03-08

    A method of forming an aerogel. The method may involve providing a graphene oxide powder and mixing the graphene oxide powder with a solution to form an ink. A 3D printing technique may be used to write the ink into a catalytic solution that is contained in a fluid containment member to form a wet part. The wet part may then be cured in a sealed container for a predetermined period of time at a predetermined temperature. The cured wet part may then be dried to form a finished aerogel part.

  3. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    PubMed

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to view preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," created by mixing 3-D virtual models of anatomy, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the real patient grabbed by cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted at the positions of the user's eyes and grab live images of the patient from the user's point of view. The system does not use any external tracker to detect movements of the user or the patient. Tracking of the user's head movements and alignment of the virtual patient with the real one are performed using machine vision methods applied to pairs of live images. Experimental results concerning frame rate and alignment precision between the virtual and real patient demonstrate that the machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice. PMID:19651551

  4. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  5. Virtual 3D interactive system with embedded multiwavelength optical sensor array and sequential devices

    NASA Astrophysics Data System (ADS)

    Wang, Guo-Zhen; Huang, Yi-Pai; Hu, Kuo-Jui

    2012-06-01

    We propose a virtual 3D-touch system operated by a bare finger, which can detect the 3-axis (x, y, z) position of the finger. The system has a multi-wavelength optical sensor array embedded on the backplane of the TFT panel and sequential devices on the border of the TFT panel. We developed a reflecting mode that enables bare-finger 3D interaction. A 4-inch mobile 3D-LCD incorporating the proposed system has already been successfully demonstrated.

  6. F-22 cockpit display system

    NASA Astrophysics Data System (ADS)

    Bailey, David C.

    1994-06-01

    The F-22 is the first exclusively glass cockpit where all instrumentation has been replaced by displays. The F-22 Engineering and Manufacturing Development Program is implementing the display technology proven during the Advanced Tactical Fighter Demonstration and Validation program. This paper will describe how the F-22 goals have been met and some of the tradeoffs that resulted in the current display design.

  7. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time. PMID:26954841
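
    The paper's estimator is a GPU dynamic-programming algorithm; as a much simplified, hedged stand-in, the sketch below estimates the axial displacement of each window by normalized cross-correlation between pre- and post-compression RF lines and takes its gradient as strain. Window size, search range, and function names are illustrative assumptions.

```python
# Hedged, highly simplified sketch of strain estimation from one pre- and one
# post-compression RF line (not the paper's dynamic-programming estimator).
import numpy as np

def axial_strain(pre_line, post_line, win=64, step=32, max_lag=16):
    """pre_line, post_line : 1-D RF signals along one scan line.
    Returns an approximate strain value per analysis window."""
    displacements = []
    for start in range(0, len(pre_line) - win - max_lag, step):
        ref = pre_line[start:start + win]
        best_lag, best_score = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            s = start + lag
            if s < 0 or s + win > len(post_line):
                continue
            cand = post_line[s:s + win]
            num = np.dot(ref - ref.mean(), cand - cand.mean())
            norm = np.linalg.norm(ref - ref.mean()) * np.linalg.norm(cand - cand.mean())
            if norm > 0 and num / norm > best_score:
                best_score, best_lag = num / norm, lag
        displacements.append(best_lag)
    # Strain is the axial gradient of displacement (samples per sample).
    return np.gradient(np.asarray(displacements, dtype=float)) / step
```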

  8. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  9. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the Engelbourg ruined castle in Thann, Alsace, France, has for some years been the object of close attention from the city, which owns it, and from partners such as historians and archaeologists who are in charge of its study. The valorization of the site is one of the main objectives, along with its conservation and study. The aim of this project is to use the environment of a virtual tour viewer as the new basis for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through various scripts that convert the viewer into a real 3D interface. Starting from a first virtual tour containing about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making the visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were acquired. For the documentation, other sets of images were acquired in various seasons and weather conditions, which allows the site to be documented in different environments and states of vegetation; the final virtual tour was derived from them. The initial 3D model of the castle, which is also virtual, was likewise incorporated in the form of panoramic images to complete the understanding of the site. A variety of hotspot types were used to connect the whole digital documentation to the site, including videos (reports during the acquisition phases, the restoration works, the excavations, etc.) and digital georeferenced documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The completely personalized interface of the system allows the user either to switch from one panoramic image to another, as in classic virtual tours, or to go from a panoramic photographic image

  10. Novel low-cost 2D/3D switchable autostereoscopic system for notebook computers and other portable devices

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Jesse B.

    1995-03-01

    Mounting a lenticular lens in front of a flat panel display is a well known, inexpensive, and easy way to create an autostereoscopic system. Such a lens produces half resolution 3D images because half the pixels on the LCD are seen by the left eye and half by the right eye. This may be acceptable for graphics, but it makes full resolution text, as displayed by common software, nearly unreadable. Very fine alignment tolerances normally preclude the possibility of removing and replacing the lens in order to switch between 2D and 3D applications. Lenticular lens based displays are therefore limited to use as dedicated 3D devices. DTI has devised a technique which removes this limitation, allowing switching between full resolution 2D and half resolution 3D imaging modes. A second element, in the form of a concave lenticular lens array whose shape is exactly the negative of the first lens, is mounted on a hinge so that it can be swung down over the first lens array. When so positioned the two lenses cancel optically, allowing the user to see full resolution 2D for text or numerical applications. The two lenses, having complementary shapes, naturally tend to nestle together and snap into perfect alignment when pressed together--thus obviating any need for user operated alignment mechanisms. This system represents an ideal solution for laptop and notebook computer applications. It was devised to meet the stringent requirements of a laptop computer manufacturer including very compact size, very low cost, little impact on existing manufacturing or assembly procedures, and compatibility with existing full resolution 2D text-oriented software as well as 3D graphics. Similar requirements apply to high-end electronic calculators, several models of which now use LCDs for the display of graphics.

  11. Simplified Night Sky Display System

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10^6 cm^3 in a disassembled state, and it weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  12. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  13. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice. PMID:22004854

  14. 3D scanning and 3D printing as innovative technologies for fabricating personalized topical drug delivery systems.

    PubMed

    Goyanes, Alvaro; Det-Amornrat, Usanee; Wang, Jie; Basit, Abdul W; Gaisford, Simon

    2016-07-28

    Acne is a multifactorial inflammatory skin disease with high prevalence. In this work, the potential of 3D printing to produce flexible personalised-shape anti-acne drug (salicylic acid) loaded devices was demonstrated by two different 3D printing (3DP) technologies: Fused Deposition Modelling (FDM) and stereolithography (SLA). 3D scanning technology was used to obtain a 3D model of a nose adapted to the morphology of an individual. In FDM 3DP, commercially produced Flex EcoPLA™ (FPLA) and polycaprolactone (PCL) filaments were loaded with salicylic acid by hot melt extrusion (HME) (theoretical drug loading - 2% w/w) and used as feedstock material for 3D printing. Drug loading in the FPLA-salicylic acid and PCL-salicylic acid 3D printed patches was 0.4% w/w and 1.2% w/w respectively, indicating significant thermal degradation of drug during HME and 3D printing. Diffusion testing in Franz cells using a synthetic membrane revealed that the drug loaded printed samples released <187 μg/cm^2 within 3 h. FPLA-salicylic acid filament was successfully printed as a nose-shape mask by FDM 3DP, but the PCL-salicylic acid filament was not. In the SLA printing process, the drug was dissolved in different mixtures of poly(ethylene glycol) diacrylate (PEGDA) and poly(ethylene glycol) (PEG) that were solidified by the action of a laser beam. SLA printing led to 3D printed devices (nose-shape) with higher resolution and higher drug loading (1.9% w/w) than FDM, with no drug degradation. The results of drug diffusion tests revealed that drug diffusion was faster than with the FDM devices, 229 and 291 μg/cm^2 within 3 h for the two formulations evaluated. In this study, SLA printing was the more appropriate 3D printing technology to manufacture anti-acne devices with salicylic acid. The combination of 3D scanning and 3D printing has the potential to offer solutions to produce personalised drug loaded devices, adapted in shape and size to individual patients. PMID:27189134

  15. Wide angle holographic display system with spatiotemporal multiplexing.

    PubMed

    Kozacki, Tomasz; Finke, Grzegorz; Garbat, Piotr; Zaperty, Weronika; Kujawińska, Małgorzata

    2012-12-01

    This paper presents a wide-angle holographic display system with an extended viewing angle in both the horizontal and vertical directions. The display is constructed from six spatial light modulators (SLMs) arranged on a circle and an additional SLM used for spatiotemporal multiplexing and viewing-angle extension in two perpendicular directions. The additional SLM, which is synchronized with the SLMs on the circle, is placed in the image space. This method increases the effective space-bandwidth product of the display system data from 12.4 to 50 megapixels. A software solution based on three Nvidia graphics cards was developed and implemented in order to achieve fast and synchronized displaying. Experiments with both synthetic and real 3D data demonstrate that good-quality images reconstructed across the full field of view of the display can be viewed binocularly. PMID:23262697

  16. Development of Land Analysis System display modules

    NASA Technical Reports Server (NTRS)

    Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie

    1986-01-01

    The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.

  17. Six-Message Electromechanical Display System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.

    2007-01-01

    A proposed electromechanical display system would be capable of presenting as many as six distinct messages. In the proposed system, each display element would include a cylinder having a regular hexagonal cross section.

  18. UNIQUIMER 3D, a software system for structural DNA nanotechnology design, analysis and evaluation

    PubMed Central

    Zhu, Jinhao; Wei, Bryan; Yuan, Yuan; Mi, Yongli

    2009-01-01

    A user-friendly software system, UNIQUIMER 3D, was developed to design DNA structures for nanotechnology applications. It consists of 3D visualization, internal energy minimization, sequence generation and construction of motif array simulations (2D tiles and 3D lattices) functionalities. The system can be used to check structural deformation and design errors under scaled-up conditions. UNIQUIMER 3D has been tested on the design of both existing motifs (Holliday junction, 4 × 4 tile, double crossover, DNA tetrahedron, DNA cube, etc.) and nonexisting motifs (soccer ball). The results demonstrated UNIQUIMER 3D's capability in designing large complex structures. We also designed a de novo sequence generation algorithm. UNIQUIMER 3D was developed for the Windows environment and is provided free of charge to nonprofit research institutions. PMID:19228709

  19. A navigation system for flexible endoscopes using abdominal 3D ultrasound.

    PubMed

    Hoffmann, R; Kaar, M; Bathia, Amon; Bathia, Amar; Lampret, A; Birkfellner, W; Hummel, J; Figl, M

    2014-09-21

    A navigation system for flexible endoscopes equipped with ultrasound (US) scan heads is presented. In contrast to similar systems, abdominal 3D-US is used for image fusion of the pre-interventional computed tomography (CT) to the endoscopic US. A 3D-US scan, tracked with an optical tracking system (OTS), is taken pre-operatively together with the CT scan. The CT is calibrated using the OTS, providing the transformation from CT to 3D-US. Immediately before intervention a 3D-US tracked with an electromagnetic tracking system (EMTS) is acquired and registered intra-modal to the preoperative 3D-US. The endoscopic US is calibrated using the EMTS and registered to the pre-operative CT by an intra-modal 3D-US/3D-US registration. Phantom studies showed a registration error for the US to CT registration of 5.1 mm±2.8 mm. 3D-US/3D-US registration of patient data gave an error of 4.1 mm compared to 2.8 mm with the phantom. From this we estimate an error on patient experiments of 5.6 mm. PMID:25170913

  20. The modeling of portable 3D vision coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Liu, Shugui; Huang, Fengshan; Peng, Kai

    2005-02-01

    The portable three-dimensional vision coordinate measuring system, which consists of a light pen, a CCD camera and a laptop computer, can be widely applied in most coordinate measuring fields, especially on industrial sites. On the light pen there are at least three point-shaped light sources (LEDs) acting as the measured control characteristic points, and a touch-trigger probe with a spherical stylus that is used to contact the point to be measured. The most important characteristic of this system is that the three light sources and the probe stylus are aligned in one line with known positions. In building and studying this measuring system, the key problem is constructing the system's mathematical model, the perspective of three collinear points, which is a particular case of the perspective-three-point (P3P) problem. On the basis of P3P and spatial analytical geometry theory, the system's mathematical model is established in this paper. Moreover, it is verified that the perspective of three collinear points has a unique solution. The analytical equations of the measured point's coordinates are derived using the system's mathematical model and the constraint that the three light sources and the probe stylus are aligned in one line. Finally, the effectiveness of the mathematical model is confirmed by experiments.

  1. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  2. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (XY) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x,y,λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. Axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629

  3. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  4. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  5. A 3-D fluorescence imaging system incorporating structured illumination technology

    NASA Astrophysics Data System (ADS)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies, and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the Optigrid™. Post-processing extracted depth information through depth-modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program; the disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
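
    For reference, a minimal sketch of the classic three-phase structured-illumination sectioning formula (Neil, Juskaitis and Wilson, 1997), the standard demodulation for grid-projection systems of this kind; the paper's own depth-modulation analysis may differ, and the one-third-period phase steps are an assumption.

```python
# Hedged sketch of three-phase structured-illumination optical sectioning.
import numpy as np

def sectioned_image(i0, i1, i2):
    """i0, i1, i2 : images taken with the grid shifted by 0, 1/3 and 2/3 of
    its period. Only in-focus structures retain the grid modulation, so the
    result suppresses out-of-focus light."""
    i0, i1, i2 = (np.asarray(i, dtype=float) for i in (i0, i1, i2))
    return np.sqrt((i0 - i1) ** 2 + (i1 - i2) ** 2 + (i2 - i0) ** 2)
```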

  6. 3D two-photon lithographic microfabrication system

    DOEpatents

    Kim, Daekeun; So, Peter T. C.

    2011-03-08

    An imaging system is provided that includes a optical pulse generator for providing an optical pulse having a spectral bandwidth and includes monochromatic waves having different wavelengths. A dispersive element receives a second optical pulse associated with the optical pulse and disperses the second optical pulse at different angles on the surface of the dispersive element depending on wavelength. One or more focal elements receives the dispersed second optical pulse produced on the dispersive element. The one or more focal element recombine the dispersed second optical pulse at a focal plane on a specimen where the width of the optical pulse is restored at the focal plane.

  7. A hybrid-3D hillslope hydrological model for use in Earth system models

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Fang, Y.; Broxton, P.; Gochis, D.; Niu, G.-Y.; Pelletier, J. D.; Troch, P. A.; Zeng, X.

    2015-10-01

    Hillslope-scale rainfall-runoff processes leading to a fast catchment response are not explicitly included in land surface models (LSMs) for use in earth system models (ESMs) due to computational constraints. This study presents a hybrid-3D hillslope hydrological model (h3D) that couples a 1-D vertical soil column model with a lateral pseudo-2D saturated zone and overland flow model for use in ESMs. By representing vertical and lateral responses separately at different spatial resolutions, h3D is computationally efficient. The h3D model was first tested for three different hillslope planforms (uniform, convergent and divergent). We then compared h3D (with single and multiple soil columns) with a complex physically based 3-D model and a simple 1-D soil moisture model coupled with an unconfined aquifer (as typically used in LSMs). It is found that simulations obtained by the simple 1-D model vary considerably from the complex 3-D model and are not able to represent hillslope-scale variations in the lateral flow response. In contrast, the single soil column h3D model shows a much better performance and saves computational time by 2-3 orders of magnitude compared with the complex 3-D model. When multiple vertical soil columns are implemented, the resulting hydrological responses (soil moisture, water table depth, and base flow along the hillslope) from h3D are nearly identical to those predicted by the complex 3-D model, but still saves computational time. As such, the computational efficiency of the h3D model provides a valuable and promising approach to incorporating hillslope-scale hydrological processes into continental and global-scale ESMs.

  8. Characterization of 3D printing output using an optical sensing system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents the experimental design and initial testing of a system to characterize the progress and performance of a 3D printer. The system is based on five Raspberry Pi single-board computers. It collects images of the 3D printed object, which are compared to an ideal model. The system, while suitable for printers of all sizes, can potentially be produced at a sufficiently low cost to allow its incorporation into consumer-grade printers. The efficacy and accuracy of this system is presented and discussed. The paper concludes with a discussion of the benefits of being able to characterize 3D printer performance.

  9. Volumetric three-dimensional display system with rasterization hardware

    NASA Astrophysics Data System (ADS)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line- drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3- D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
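
    A hedged back-of-envelope check of the throughput figures quoted above, using only numbers taken from the abstract; treating the >90 million utilizable voxels as a subset of the ~118 million slice pixels per volume (since the rotating screen presumably uses only part of each square slice) is an interpretation, not a stated fact.

```python
# Back-of-envelope arithmetic from the figures quoted in the abstract.
slices_per_volume = 200        # radially disposed slices
volume_rate_hz = 20            # volume updates per second
slice_resolution = 768 * 768   # pixels per slice

slice_rate_hz = slices_per_volume * volume_rate_hz        # 4000, i.e. ~4 kHz projector
pixels_per_volume = slice_resolution * slices_per_volume  # ~1.18e8 slice pixels
print(slice_rate_hz, pixels_per_volume)
```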

  10. Developing tiled projection display systems

    SciTech Connect

    Hereld, M.; Judson, I. R.; Paris, J.; Stevens, R. L.

    2000-06-08

    Tiled displays are an emerging technology for constructing high-resolution semi-immersive visualization environments capable of presenting high-resolution images from scientific simulation [EVL, PowerWall]. In this way, they complement other technologies such as the CAVE [Cruz-Niera92] or ImmersaDesk, [Czernuszenko97], which by design give up pure resolution in favor of width of view and stereo. However, the largest impact may well be in using large-format tiled displays as one of possibly multiple displays in building ''information'' or ''active'' spaces that surround the user with diverse ways of interacting with data and multimedia information flows [IPSI, Childers00, Raskar98, ROME, Stanford, UNC]. These environments may prove to be the ultimate successor of the desktop metaphor for information technology work.

  11. A Desktop Computer Based Workstation for Display and Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Erickson, Bradley J.; Robb, Richard A.

    1987-01-01

    While great advances have been made in developing new and better ways to produce medical images, the technology to efficiently display and analyze them has lagged. This paper describes design considerations and development of a workstation based on an IBM PC/AT for the analysis of three and four dimensional medical image data.

  12. Quantitative wound healing measurement and monitoring system based on an innovative 3D imaging system

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Yang, Arthur; Yin, Gongjie; Wen, James

    2011-03-01

    In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under development at Technest Inc. The system design aims to perform accurate 3D measurement and modeling of a wound and track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess, document, improve, and individualize the treatment plan given to each wound patient. In current wound care practices, physicians often visually inspect or roughly measure the wound to evaluate the healing status. This is not an optimal practice since human vision lacks precision and consistency. In addition, quantifying slow or subtle changes through perception is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations would be particularly useful in helping clinicians to assess healing status and judge the effect of hyperemia, hematoma, local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have several unique advantages over traditional methods for monitoring wound care: (a) non-contact measurement; (b) fast and easy to use; (c) up to 50 micron measurement accuracy; (d) 2D/3D quantitative measurements; (e) a handheld device; and (f) reasonable cost (<$1,000).

  13. Kinematics of a growth fault/raft system on the West African margin using 3-D restoration

    NASA Astrophysics Data System (ADS)

    Rouby, Delphine; Raillard, Stéphane; Guillocheau, François; Bouroullec, Renaud; Nalpas, Thierry

    2002-04-01

    The ability to quantify the movement history associated with growth structures is crucial in the understanding of fundamental processes such as the growth of folds or faults in 3-D. In this paper, we present an application of an original approach to restore in 3-D a listric growth fault system resulting from gravity-induced extension located on the West African margin. Our goal is to establish the 3-D structural framework and kinematics of the study area. We construct a 3-D geometrical model of the fault system (from 3-D seismic data), then restore six stratigraphic surfaces and reconstruct the 3-D geometry of the system at six incremental steps of its history. The evolution of the growth fault/raft system corresponds to the progressive separation of two rafts by regional extension, resulting in the development of an intervening basin located between them that evolved in three main stages: (1) the rise of an evaporite wall, (2) the development of a symmetric basin as the elevation of the diapir is reduced and buried, and (3) the development of asymmetric basins related to two systems of listric faults (the main fault F1 and the graben located between the rollovers and the lower raft). Important features of the growth fault/raft system could only be observed in 3-D and with increments of deformation restored. The rollover anticline (associated with the listric fault F1) is composed of two sub-units separated by an E-W oriented transverse graben indicating that the displacement field was divergent in map view. The rollover units are located within the overlap area of two fault systems and display a 'mock-turtle' anticline structure. The seaward translation of the lower raft is associated with two successive vertical axis rotations in the opposite sense (clockwise then counter-clockwise by about 10°). This results from the fact that the two main fault systems developed successively. Fault system F1 formed during the Upper Albian, and the graben during the Cenomanian

  14. Generation of Multi-Scale Vascular Network System within 3D Hydrogel using 3D Bio-Printing Technology.

    PubMed

    Lee, Vivian K; Lanzi, Alison M; Haygan, Ngo; Yoo, Seung-Schik; Vincent, Peter A; Dai, Guohao

    2014-09-01

    Although 3D bio-printing technology has great potential in creating complex tissues with multiple cell types and matrices, maintaining the viability of thick tissue construct for tissue growth and maturation after the printing is challenging due to lack of vascular perfusion. Perfused capillary network can be a solution for this issue; however, construction of a complete capillary network at single cell level using the existing technology is nearly impossible due to limitations in time and spatial resolution of the dispensing technology. To address the vascularization issue, we developed a 3D printing method to construct larger (lumen size of ~1mm) fluidic vascular channels and to create adjacent capillary network through a natural maturation process, thus providing a feasible solution to connect the capillary network to the large perfused vascular channels. In our model, microvascular bed was formed in between two large fluidic vessels, and then connected to the vessels by angiogenic sprouting from the large channel edge. Our bio-printing technology has a great potential in engineering vascularized thick tissues and vascular niches, as the vascular channels are simultaneously created while cells and matrices are printed around the channels in desired 3D patterns. PMID:25484989

  15. Generation of Multi-Scale Vascular Network System within 3D Hydrogel using 3D Bio-Printing Technology

    PubMed Central

    Lee, Vivian K.; Lanzi, Alison M.; Haygan, Ngo; Yoo, Seung-Schik; Vincent, Peter A.; Dai, Guohao

    2014-01-01

    Although 3D bio-printing technology has great potential in creating complex tissues with multiple cell types and matrices, maintaining the viability of thick tissue construct for tissue growth and maturation after the printing is challenging due to lack of vascular perfusion. Perfused capillary network can be a solution for this issue; however, construction of a complete capillary network at single cell level using the existing technology is nearly impossible due to limitations in time and spatial resolution of the dispensing technology. To address the vascularization issue, we developed a 3D printing method to construct larger (lumen size of ~1mm) fluidic vascular channels and to create adjacent capillary network through a natural maturation process, thus providing a feasible solution to connect the capillary network to the large perfused vascular channels. In our model, microvascular bed was formed in between two large fluidic vessels, and then connected to the vessels by angiogenic sprouting from the large channel edge. Our bio-printing technology has a great potential in engineering vascularized thick tissues and vascular niches, as the vascular channels are simultaneously created while cells and matrices are printed around the channels in desired 3D patterns. PMID:25484989

  16. An accurate 3D inspection system using heterodyne multiple frequency phase-shifting algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Zhenzhong; Chee, Oichoo; Asundi, Anand

    This paper presents an accurate 3D inspection system for industrial applications based on digital fringe projection technology. The system consists of two CCD cameras and a DLP projector. A mathematical model of the 3D inspection system with 10 distortion parameters for each camera is proposed. A heterodyne multiple-frequency phase-shifting algorithm is employed to overcome the phase unwrapping problem and provide a reliable unwrapping procedure. The redundant phase information is used to increase the accuracy of the 3D reconstruction. To demonstrate the effectiveness of the system, a standard sphere was used for testing. The verification tests for the 3D inspection system are based on the VDI 2634 standard. The results show that the proposed system can be used for industrial quality inspection with high measurement precision.
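
    A minimal sketch of heterodyne (two-frequency) phase unwrapping, the general idea behind the algorithm named above; it assumes the two projected fringe counts differ by one so that the beat phase is unambiguous over the field. The fringe counts, function names, and the single-step combination are illustrative assumptions, not the paper's exact multi-frequency scheme.

```python
# Hedged sketch of two-frequency heterodyne phase unwrapping.
import numpy as np

def heterodyne_unwrap(phi_fine, phi_coarse, n_fine, n_coarse):
    """phi_fine, phi_coarse : wrapped phases in (-pi, pi] for fringe counts
    n_fine > n_coarse, assumed to differ by one so the beat phase spans a
    single fringe over the whole field. Returns the unwrapped fine phase."""
    beat = np.mod(phi_fine - phi_coarse, 2 * np.pi)        # unambiguous beat phase
    abs_fine = beat * n_fine / (n_fine - n_coarse)         # estimated absolute phase
    order = np.round((abs_fine - phi_fine) / (2 * np.pi))  # integer fringe order
    return phi_fine + 2 * np.pi * order

# e.g. unwrapped = heterodyne_unwrap(phi_64, phi_63, 64, 63)
```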

  17. Impact of the 3-D model strategy on science learning of the solar system

    NASA Astrophysics Data System (ADS)

    Alharbi, Mohammed

    The purpose of this mixed-method study, quantitative and descriptive, was to determine whether first-middle-grade (seventh grade) students at Saudi schools are able to learn and use the Autodesk Maya software to create and interact with their own 3-D models and animations, and whether their use of the software influences their study habits and their understanding of the school subject matter. The study revealed that there is value for science students in using 3-D software to create 3-D models for science assignments. The study also addressed middle-school students' ability to learn 3-D software in art class and then use it in their science class. Its success may open the way to considering the impact of 3-D modeling on other school subjects, such as mathematics, art, and geography. When students start using graphic design, including 3-D software, at a young age, they tend to develop personal creativity and skills, and applying this approach in schools can provide the community with skillful young designers and increase awareness of graphic design and new 3-D technology. An experimental method was used to answer the quantitative research question: are there significant differences in students' science achievement scores among the learning methods using 3-D models (no 3-D, premade 3-D, and create 3-D) in a science class being taught about the solar system? A descriptive method was used to answer the qualitative research questions, which concern the difficulty of learning and using the Autodesk Maya software, the time students take to use the basic Polygon and Animation features of the software, and the quality of the students' work.

  18. Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2016-06-01

    In recent years, the necessity of accurate 3D surface reconstruction has become more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, emergence of new mapping platforms, and development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure which evaluates the quality of reconstructed 3D surfaces independent of the utilized reconstruction technique. Hence, this paper aims to introduce a new quality assessment platform for the evaluation of the 3D surface reconstruction using photogrammetric data. This quality control procedure is performed while considering the quality of input data, processing procedures, and photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of the 3D surface reconstruction using images from different photogrammetric systems.

  19. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings of the computer simulations and demonstrate the merits of this novel approach. PMID:19380272

  20. A fast 3D reconstruction system with a low-cost camera accessory

    PubMed Central

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-01-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407
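
    Photometric stereo as described above reduces, in its simplest Lambertian form, to a per-pixel least-squares solve for the albedo-scaled surface normal. The sketch below is not the authors' implementation; the four light directions and the flat synthetic patch are assumed for illustration only.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo (minimal sketch).

    images     : (K, H, W) grayscale intensities, one image per light.
    light_dirs : (K, 3) unit illumination direction vectors.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                                 # (K, H*W)
    # Least-squares solve L @ g = I per pixel, with g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)        # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

# Tiny synthetic check: a flat patch facing the camera (normal = +z, albedo 1).
lights = np.array([[0.5, 0.0, 0.87], [-0.5, 0.0, 0.87],
                   [0.0, 0.5, 0.87], [0.0, -0.5, 0.87]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((4, 4), l @ true_n) for l in lights])
normals, albedo = photometric_stereo(imgs, lights)
print(np.allclose(normals[0, 0], true_n))                     # True
```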

  1. A fast 3D reconstruction system with a low-cost camera accessory

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-06-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.

  2. Development of portable 3D optical measuring system using structured light projection method

    NASA Astrophysics Data System (ADS)

    Aoki, Hiroshi

    2014-05-01

    Three-dimensional (3D) scanners are becoming increasingly common in many industries. However, most of these scanning technologies have drawbacks for practical use related to size, weight, accessibility, and ease of use. Depending on the application, speed, flexibility and portability can often be deemed more important than accuracy. We have developed a solution to address this market requirement and overcome the aforementioned limitations. To counteract shortcomings such as heavy weight and large size, an optical sensor is used that consists of a laser projector, a camera system, and a multi-touch screen. Structured laser light is projected onto the measured object with a newly designed laser projector employing a single Micro Electro Mechanical Systems (MEMS) mirror. The optical system is optimized for the combination of a Laser Diode (LD), the MEMS mirror and the size of the measurement area to secure the ideal contrast of the structured light. We also developed a new calibration algorithm for this sensor with the MEMS laser projector that uses an optical camera model for point cloud calculation. These technical advancements make the sensor compact, reduce power consumption and heat generation, and still allow for rapid calculation. Due to the measurement principle, structured-light triangulation utilizing phase-shifting technology, resolution is improved. To meet requirements for practical applications, the optics, electronics, image processing, display and data management capabilities have been integrated into a single compact unit.

  3. Display system for imaging scientific telemetric information

    NASA Technical Reports Server (NTRS)

    Zabiyakin, G. I.; Rykovanov, S. N.

    1979-01-01

    A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. Two-dimensional graphic display of telemetric information, and interaction with the computer in the analysis and processing of telemetric parameters displayed on the screen, are provided. The running parameter information output method is presented. User capabilities in the analysis and processing of telemetric information imaged on the display screen, and the user language, are discussed and illustrated.

  4. Recovery of liver motion and deformation due to respiration using laparoscopic freehand 3D ultrasound system.

    PubMed

    Nakamoto, Masahiko; Hirayama, Hiroaki; Sato, Yoshinobu; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto; Tamura, Shinichi

    2006-01-01

    This paper describes a rapid method for intraoperative recovery of liver motion and deformation due to respiration by using a laparoscopic freehand 3D ultrasound (US) system. Using the proposed method, 3D US images of the liver can be extended to 4D US images by acquiring several additional sequences of 2D US images during a couple of respiration cycles. Time-varying 2D US images are acquired on several sagittal image planes and their 3D positions and orientations are measured using a laparoscopic ultrasound probe to which a miniature magnetic 3D position sensor is attached. During the acquisition, the probe is assumed to move together with the liver surface. In-plane 2D deformation fields and respiratory phase are estimated from the time-varying 2D US images, and the time-varying 3D deformation fields on the sagittal image planes are then obtained by combining them with the 3D positions and orientations of the image planes. The time-varying 3D deformation field of the volume is obtained by interpolating the 3D deformation fields estimated on several planes. The proposed method was evaluated by in vivo experiments using a pig liver. PMID:17354794

  5. Development of 3D Woven Ablative Thermal Protection Systems (TPS) for NASA Spacecraft

    NASA Technical Reports Server (NTRS)

    Feldman, Jay D.; Ellerby, Don; Stackpoole, Mairead; Peterson, Keith; Venkatapathy, Ethiraj

    2015-01-01

    The development of a new class of thermal protection system (TPS) materials known as 3D Woven TPS led by the Entry Systems and Technology Division of NASA Ames Research Center (ARC) will be discussed. This effort utilizes 3D weaving and resin infusion technologies to produce heat shield materials that are engineered and optimized for specific missions and requirements. A wide range of architectures and compositions have been produced and preliminarily tested to prove the viability and tailorability of the 3D weaving approach to TPS.

  6. Three-Dimensional Integrated Characterization and Archiving System (3D-ICAS). Phase 1

    SciTech Connect

    1994-07-01

    3D-ICAS is being developed to support Decontamination and Decommissioning operations for DOE addressing Research Area 6 (characterization) of the Program Research and Development Announcement. 3D-ICAS provides in-situ 3-dimensional characterization of contaminated DOE facilities. Its multisensor probe contains a GC/MS (gas chromatography/mass spectrometry using noncontact infrared heating) sensor for organics, a molecular vibrational sensor for base material identification, and a radionuclide sensor for radioactive contaminants. It will provide real-time quantitative measurements of volatile organics and radionuclides on bare materials (concrete, asbestos, transite); it will provide 3-D display of the fusion of all measurements; and it will archive the measurements for regulatory documentation. It consists of two robotic mobile platforms that operate in hazardous environments linked to an integrated workstation in a safe environment.

  7. Simplified night sky display system

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P. (Inventor)

    2008-01-01

    A portable structure, simply constructed with inexpensive and generally lightweight materials, for displaying a selected portion of the night sky and selected planets, satellites, comets and other astronomically observable objects that are visually perceptible within that portion of the night sky. The structure includes a computer having stored signals representing the observable objects, an image projector that converts and projects the stored signals as visually perceptible images, a first curvilinear light-reflecting surface to receive and reflect the visually perceptible images, and a second curvilinear surface to receive and display the visually perceptible images reflected from the first surface. The images may be motionless or may move with passage of time. In one embodiment, the structure includes an inflatable screen surface that receives gas in an enclosed volume, supports itself without further mechanical support, and optionally self-regulates pressure of the received gas within the enclosed volume.

  8. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System.

    PubMed

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem was highest with a 28-mm lens at the first and third camera positions, and these configurations also reconstructed the largest number of fine-scale surface details of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  9. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem was highest with a 28-mm lens at the first and third camera positions, and these configurations also reconstructed the largest number of fine-scale surface details of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  10. The virtual environment display system

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1991-01-01

    Virtual environment technology is a display and control technology that can surround a person in an interactive computer generated or computer mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has been recognized recently by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.

  11. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscopes use a single camera to capture images of the intestinal surface. They can detect an abnormal point but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so a two-camera system cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images, 'a 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase in the viewing range of up to 2.99 times with respect to a two-camera system. Combined with a 3D monitor, the system provides detailed information about symptomatic points, helping doctors diagnose disease.

  12. Microscale screening systems for 3D cellular microenvironments: platforms, advances, and challenges

    PubMed Central

    Montanez-Sauri, Sara I.; Beebe, David J.; Sung, Kyung Eun

    2015-01-01

    The increasing interest in studying cells using more in vivo-like three-dimensional (3D) microenvironments has created a need for advanced 3D screening platforms with enhanced functionalities and increased throughput. 3D screening platforms that better mimic in vivo microenvironments with enhanced throughput would provide more in-depth understanding of the complexity and heterogeneity of microenvironments. The platforms would also better predict the toxicity and efficacy of potential drugs in physiologically relevant conditions. Traditional 3D culture models (e.g. spinner flasks, gyratory rotation devices, non-adhesive surfaces, polymers) were developed to create 3D multicellular structures. However, these traditional systems require large volumes of reagents and cells, and are not compatible with high throughput screening (HTS) systems. Microscale technology offers the miniaturization of 3D cultures and allows efficient screening of various conditions. This review will discuss the development, most influential works, and current advantages and challenges of microscale culture systems for screening cells in 3D microenvironments. PMID:25274061

  13. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  14. A Software System for Filling Complex Holes in 3D Meshes by Flexible Interacting Particles

    NASA Astrophysics Data System (ADS)

    Yamazaki, Daisuke; Savchenko, Vladimir

    3D meshes generated by acquisition devices such as laser range scanners often contain holes due to occlusion, etc. In practice, these holes are extremely geometrically and topologically complex. We propose a heuristic hole filling technique using particle systems to fill complex holes with arbitrary topology in 3D meshes. Our approach includes the following steps: hole identification, base surface creation, particle distribution, triangulation, and mesh refinement. We demonstrate the functionality of the proposed surface retouching system on synthetic and real data.
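
    Of the pipeline steps listed above, only the first (hole identification) is sketched here: a hole rim can be found as the set of mesh edges referenced by exactly one triangle. The tetrahedron test case is an assumption for illustration, not data from the paper.

```python
from collections import defaultdict

def find_hole_boundary_edges(faces):
    """Identify boundary (hole-rim) edges of a triangle mesh.

    faces : iterable of (i, j, k) vertex-index triangles.
    An edge referenced by exactly one triangle lies on a boundary, i.e.
    on the rim of a hole (or on the outer border of an open mesh).
    """
    edge_count = defaultdict(int)
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[tuple(sorted((a, b)))] += 1
    return [edge for edge, n in edge_count.items() if n == 1]

# Tetrahedron with one face removed: the rim of the missing face is reported.
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]        # face (1, 2, 3) deleted -> hole
print(sorted(find_hole_boundary_edges(faces)))   # [(1, 2), (1, 3), (2, 3)]
```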

  15. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

    SciTech Connect

    James Menart

    2013-06-07

    This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

  16. 2D and 3D Mass Transfer Simulations in β Lyrae System

    NASA Astrophysics Data System (ADS)

    Nazarenko, V. V.; Glazunova, L. V.; Karetnikov, V. G.

    2001-12-01

    2D and 3D simulations of the mass transfer in the β Lyrae binary system have been performed. We find that 40 per cent of the mass transferred from the L1 point is lost from the system through the L3 point. The structure of the gas envelope around the system is calculated. The 3-D simulations show the presence of a spiral shock in the disk around the primary star and of jet-like structures (a mass flow in the vertical direction) above the stream.

  17. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface: as combinations of individual vertical cuts, possibly linked to a cubical portion of the data volume, or as a stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, for example, in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible; • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  18. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. The system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever arm, and object space feature parameters was developed, and the effectiveness of this calibration approach was explored.

  19. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Rassau, Alex; Lachowicz, Stefan; Lee, Mike Myung-Ok; Eshraghian, Kamran

    2006-12-01

    This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  20. Global positioning system supported pilot's display

    NASA Technical Reports Server (NTRS)

    Scott, Marshall M., Jr.; Erdogan, Temel; Schwalb, Andrew P.; Curley, Charles H.

    1991-01-01

    The hardware, software, and operation of the Microwave Scanning Beam Landing System (MSBLS) Flight Inspection System Pilot's Display is discussed. The Pilot's Display is used in conjunction with flight inspection tests that certify the Microwave Scanning Beam Landing System used at Space Shuttle landing facilities throughout the world. The Pilot's Display was developed for the pilot of test aircraft to set up and fly a given test flight path determined by the flight inspection test engineers. This display also aids the aircraft pilot when hazy or cloud cover conditions exist that limit the pilot's visibility of the Shuttle runway during the flight inspection. The aircraft position is calculated using the Global Positioning System and displayed in the cockpit on a graphical display.

  1. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    SciTech Connect

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  2. Dual use display systems for telerobotics

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J.; Meschler, Michael F.; Rodriguez, Alberto A.

    1994-01-01

    This paper describes a telerobotics display system, the Multi-mode Manipulator Display System (MMDS), that has applications for a variety of remotely controlled tasks. Designed primarily to assist astronauts with the control of space robotics systems, the MMDS has applications for ground control of space robotics as well as for toxic waste cleanup, undersea, remotely operated vehicles, and other environments which require remote operations. The MMDS has three modes: (1) Manipulator Position Display (MPD) mode, (2) Joint Angle Display (JAD) mode, and (3) Sensory Substitution (SS) mode. These three modes are discussed in the paper.

  3. Cascaded systems analysis of the 3D NEQ for cone-beam CT and tomosynthesis

    NASA Astrophysics Data System (ADS)

    Tward, D. J.; Siewerdsen, J. H.; Fahrig, R. A.; Pineda, A. R.

    2008-03-01

    Crucial to understanding the factors that govern imaging performance is a rigorous analysis of signal and noise transfer characteristics (e.g., MTF, NPS, and NEQ) applied to a task-based performance metric (e.g., detectability index). This paper advances a theoretical framework for calculation of the NPS, NEQ, and DQE of cone-beam CT (CBCT) and tomosynthesis based on cascaded systems analysis. The model considers the 2D projection NPS propagated through a series of reconstruction stages to yield the 3D NPS, revealing a continuum (from 2D projection radiography to limited-angle tomosynthesis and fully 3D CBCT) for which NEQ and detectability index may be investigated as a function of any system parameter. Factors considered in the cascade include: system geometry; angular extent of source-detector orbit; finite number of views; log-scaling; application of ramp, apodization, and interpolation filters; back-projection; and 3D noise aliasing - all of which have a direct impact on the 3D NEQ and DQE. Calculations of the 3D NPS were found to agree with experimental measurements across a broad range of imaging conditions. The model presents a theoretical framework that unifies 3D Fourier-based performance metrology in tomosynthesis and CBCT, providing a guide to optimization that rigorously considers the system configuration, reconstruction parameters, and imaging task.
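
    As a concrete illustration of the task-based metric mentioned above, the sketch below evaluates a prewhitening ideal-observer detectability index from a sampled 3D NEQ and task function. The flat NEQ, Gaussian task spectrum and grid spacing are assumed values, and the paper may use a different observer model.

```python
import numpy as np

def detectability_index(neq, task_ft, df):
    """Prewhitening ideal-observer detectability index from a sampled 3D NEQ.

    neq     : NEQ(fx, fy, fz) on a 3D frequency grid.
    task_ft : Fourier transform of the imaging task (signal difference)
              on the same grid.
    df      : (dfx, dfy, dfz) frequency-bin widths.
    d'^2 = sum |W_task|^2 * NEQ * dfx * dfy * dfz  (standard Fourier form).
    """
    d2 = np.sum(np.abs(task_ft) ** 2 * neq) * np.prod(df)
    return np.sqrt(d2)

# Toy example: Gaussian task spectrum and a flat NEQ on a coarse grid.
f = np.fft.fftfreq(64, d=0.5)                               # cycles/mm
fx, fy, fz = np.meshgrid(f, f, f, indexing="ij")
task = np.exp(-(fx**2 + fy**2 + fz**2) / (2 * 0.2**2))      # assumed task
neq = np.full_like(task, 1.0e4)                             # assumed flat NEQ
print(detectability_index(neq, task, (f[1] - f[0],) * 3))
```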

  4. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities would be more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features, including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D Gray Level Co-occurrence Matrix, which were extracted from 3D ROIs, based on our previous 2D medical image retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could greatly improve retrieval performance. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
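
    The geometric features named above have standard closed forms; the sketch below uses the common Koenderink definitions of Shape Index and Curvedness from the principal curvatures. The paper's exact convention is assumed, and the spherical-cap example is illustrative only.

```python
import numpy as np

def shape_index_curvedness(k1, k2):
    """Shape Index (SI) and Curvedness (CV) from principal curvatures.

    Common (Koenderink) definitions, with k1 >= k2 enforced:
        SI = (2/pi) * arctan((k1 + k2) / (k1 - k2))
        CV = sqrt((k1**2 + k2**2) / 2)
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    si = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)   # arctan2 handles k1 == k2
    cv = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    return si, cv

# A spherical cap of radius 5 mm (k1 = k2 = 0.2 mm^-1), the kind of local
# signature a polyp candidate is expected to show: SI = 1.0, CV = 0.2.
print(shape_index_curvedness(0.2, 0.2))
```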

  5. High immersive three-dimensional tabletop display system with high dense light field reconstruction

    NASA Astrophysics Data System (ADS)

    Zheng, Mengqing; Yu, Xunbo; Xie, Songlin; Sang, Xinzhu; Yu, Chongxiu

    2014-11-01

    Three-dimensional (3D) tabletop display is a kind of display with a wide range of potential applications. An auto-stereoscopic 3D tabletop display system is designed to provide observers with a high level of immersive perception. To improve the freedom of viewing position, an eye tracking system and a set of active partially pixelated masks are utilized. To improve the display quality, a large number of images is prepared to generate the stereo pair. The light intensity distribution and crosstalk of the parallax images are measured to evaluate the rationality of the auto-stereoscopic system. In the experiment, the highly immersive auto-stereoscopic tabletop display system is demonstrated, together with the system architecture, including hardware and software. Experimental results illustrate the effectiveness of the highly immersive auto-stereoscopic tabletop display system.

  6. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  7. Low-cost 3D systems: suitable tools for plant phenotyping.

    PubMed

    Paulus, Stefan; Behmann, Jan; Mahlein, Anne-Katrin; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas in this context, laser scanning needs huge investment costs. The aim of the present study is a comparison between two current 3D imaging low-cost systems and a high precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect Device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor is able to replace an expensive laser scanner in many plant phenotyping scenarios. PMID:24534920

  8. Low-Cost 3D Systems: Suitable Tools for Plant Phenotyping

    PubMed Central

    Paulus, Stefan; Behmann, Jan; Mahlein, Anne-Katrin; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas in this context, laser scanning needs huge investment costs. The aim of the present study is a comparison between two current 3D imaging low-cost systems and a high precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect Device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor is able to replace an expensive laser scanner in many plant phenotyping scenarios. PMID:24534920

  9. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    NASA Astrophysics Data System (ADS)

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

    In recent years, there has been a growing need for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an internet or virtual museum via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under 7 different illumination angles. The 5-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL with the 3D shape and gonio-photometric properties.
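
    The analysis described above fits a dichromatic reflection model with a Phong specular lobe to the 5-band sequences; the sketch below shows only the forward model that such a fit assumes. The direction vectors, 5-band reflectances and shininess value are hypothetical, not taken from the paper.

```python
import numpy as np

def dichromatic_phong(normal, light_dir, view_dir,
                      body_color, specular_color, shininess):
    """Dichromatic reflection with a Phong specular lobe (forward model only).

    Reflected spectrum = m_b * body_color + m_s * specular_color, where m_b
    is the Lambertian (body) factor and m_s the Phong (interface) factor.
    All direction vectors are assumed to be unit length; colors are per-band
    reflectances (e.g. 5 bands for a multi-spectral camera).
    """
    normal, light_dir, view_dir = map(np.asarray, (normal, light_dir, view_dir))
    m_b = max(0.0, float(normal @ light_dir))                # Lambert term
    reflect = 2.0 * (normal @ light_dir) * normal - light_dir
    m_s = max(0.0, float(reflect @ view_dir)) ** shininess   # Phong lobe
    return m_b * np.asarray(body_color) + m_s * np.asarray(specular_color)

# Hypothetical 5-band body color viewed near the mirror direction.
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.6, 0.8])                                # unit length
v = np.array([0.0, -0.5, 0.866])
v /= np.linalg.norm(v)
body = np.array([0.10, 0.15, 0.35, 0.40, 0.30])
spec = np.full(5, 1.0)                                       # neutral highlight
print(dichromatic_phong(n, l, v, body, spec, shininess=20))
```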

  10. Next generation 3-D OFDM based optical access networks using FEC under various system impairments

    NASA Astrophysics Data System (ADS)

    Kumar, Pravindra; Srivastava, Anand

    2013-12-01

    Passive optical networks based on orthogonal frequency division multiplexing (OFDM-PON) exhibit excellent performance in optical access networks due to their greater resistance to fiber dispersion, high spectral efficiency and flexibility in both multiple services and dynamic bandwidth allocation. The major elements of a conventional OFDM communication system are a two-dimensional (2-D) signal mapper and a one-dimensional (1-D) inverse fast Fourier transform (IFFT). Three-dimensional (3-D) OFDM uses the concept of a 3-D signal mapper and a 2-D IFFT. With 3-D OFDM, the minimum Euclidean distance (MED), on which the bit error rate (BER) depends, is increased by 15.46% compared to 2-D OFDM, which improves BER performance. Forward error correction (FEC) coding is a technique where redundancy is added to the original bit sequence to increase the reliability of the communication system. In this paper, we propose and analytically analyze a new PON architecture based on 3-D OFDM with convolutional coding and Viterbi decoding, and compare it with conventional 2-D OFDM under various system impairments for coherent optical orthogonal frequency division multiplexing (CO-OFDM) without using any optical dispersion compensation. Analytical results show that at a BER of 10^-9, there are signal-to-noise ratio (SNR) gains of 2.7 dB, 3.8 dB and 9.3 dB with 3-D OFDM, 3-D OFDM combined with convolutional coding and Viterbi hard-decision decoding (CC-HDD), and 3-D OFDM combined with convolutional coding and Viterbi soft-decision decoding (CC-SDD), respectively, as compared to 2-D OFDM-PON. At a BER of 10^-9, 3-D OFDM-PON with CC-HDD gives a 2.8 dB improvement in optical budget for both upstream and downstream paths, and 3-D OFDM-PON combined with CC-SDD gives a 5.7 dB improvement in optical budget, as compared to the conventional OFDM-PON system.

  11. 2D and 3D Mechanobiology in Human and Nonhuman Systems.

    PubMed

    Warren, Kristin M; Islam, Md Mydul; LeDuc, Philip R; Steward, Robert

    2016-08-31

    Mechanobiology involves the investigation of mechanical forces and their effect on the development, physiology, and pathology of biological systems. The human body has garnered much attention from many groups in the field, as mechanical forces have been shown to influence almost all aspects of human life ranging from breathing to cancer metastasis. Beyond being influential in human systems, mechanical forces have also been shown to impact nonhuman systems such as algae and zebrafish. Studies of nonhuman and human systems at the cellular level have primarily been done in two-dimensional (2D) environments, but most of these systems reside in three-dimensional (3D) environments. Furthermore, outcomes obtained from 3D studies are often quite different than those from 2D studies. We present here an overview of a select group of human and nonhuman systems in 2D and 3D environments. We also highlight mechanobiological approaches and their respective implications for human and nonhuman physiology. PMID:27214883

  12. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
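
    The alignment stage mentioned above relies on the well-known Iterative Closest Point algorithm; a minimal point-to-point ICP (brute-force correspondences plus an SVD-based rigid fit) is sketched below. It omits the paper's geometric-primitive pre-processing, and the synthetic misalignment is an assumed test case.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP with brute-force nearest neighbours."""
    src = source.copy()
    for _ in range(iterations):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]       # closest target points
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
    return best_fit_transform(source, src)           # composite transform

# Synthetic check: recover a known small rotation + translation.
rng = np.random.default_rng(0)
target = rng.random((200, 3))
a = np.deg2rad(5)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
source = (target - [0.01, 0.02, 0.0]) @ R_true       # mis-aligned copy
R_est, t_est = icp(source, target)
err = np.degrees(np.arccos(np.clip((np.trace(R_est.T @ R_true) - 1) / 2, -1, 1)))
print(f"rotation error: {err:.4f} deg")              # expected close to zero
```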

  13. Error analysis of a 3D imaging system based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Dai, Jie

    2013-12-01

    In the past few years, optical metrology has found numerous applications in scientific and commercial fields owing to its non-contact nature. One of the most popular methods is the measurement of 3D surfaces based on fringe projection techniques, because of the advantages of non-contact operation, full-field and fast acquisition, and automatic data processing. In surface profilometry using a digital light processing (DLP) projector, many factors affect the accuracy of the 3D measurement. However, no previous research has given a complete error analysis of such a 3D imaging system. This paper analyzes some possible error sources of a 3D imaging system, for example, the nonlinear response of the CCD camera and DLP projector, sampling error of the sinusoidal fringe pattern, variation of ambient light, and marker extraction during calibration. These error sources are simulated in a software environment to demonstrate their effects on measurement. Possible compensation methods are proposed to give highly accurate shape data. Experiments were conducted to evaluate the effects of these error sources on 3D shape measurement. Experimental results and performance evaluation show that these errors have a great effect on the measured 3D shape and that it is necessary to compensate for them for accurate measurement.
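
    One of the error sources listed above, the nonlinear (gamma) response of the projector/camera chain, can be simulated in a few lines: generate ideal phase-shifted fringes, distort them with a gamma curve, and compare the recovered wrapped phase with the truth. The 4-step algorithm and the gamma value of 2.2 are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def phase_from_steps(intensities):
    """Wrapped phase from N equally spaced phase-shifted fringe images."""
    N = len(intensities)
    delta = 2 * np.pi * np.arange(N) / N
    num = np.sum(intensities * np.sin(delta)[:, None], axis=0)
    den = np.sum(intensities * np.cos(delta)[:, None], axis=0)
    return -np.arctan2(num, den)

# Effect of a nonlinear (gamma) projector/camera response on the recovered
# phase: generate ideal 4-step fringes, distort them, compare with the truth.
true_phase = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
N, gamma = 4, 2.2                                     # assumed values
shifts = 2 * np.pi * np.arange(N) / N
ideal = 0.5 + 0.5 * np.cos(true_phase[None, :] + shifts[:, None])
distorted = ideal ** gamma                            # nonlinear response
err = np.angle(np.exp(1j * (phase_from_steps(distorted) - true_phase)))
print(f"peak-to-valley phase error: {err.max() - err.min():.4f} rad")
```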

  14. Advanced resin systems and 3D textile preforms for low cost composite structures

    NASA Technical Reports Server (NTRS)

    Shukla, J. G.; Bayha, T. D.

    1993-01-01

    Advanced resin systems and 3D textile preforms are being evaluated at Lockheed Aeronautical Systems Company (LASC) under NASA's Advanced Composites Technology (ACT) Program. This work is aimed towards the development of low-cost, damage-tolerant composite fuselage structures. Resin systems for resin transfer molding and powder epoxy towpreg materials are being evaluated for processability, performance and cost. Three developmental epoxy resin systems for resin transfer molding (RTM) and three resin systems for powder towpregging are being investigated. Various 3D textile preform architectures using advanced weaving and braiding processes are also being evaluated. Trials are being conducted with powdered towpreg, in 2D weaving and 3D braiding processes for their textile processability and their potential for fabrication in 'net shape' fuselage structures. The progress in advanced resin screening and textile preform development is reviewed here.

  15. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D-to-3D and 3D-to-orthogonal-2D-slice registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D-to-slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549

  16. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  17. A Three-Dimensional Display System Of Ct Images For Surgical Planning

    NASA Astrophysics Data System (ADS)

    Yasuda, Takami; Toriwaki, Jun-ichiro; Yokoi, Shigeki; Katada, Kazuhiro

    1984-08-01

    This paper presents a system for displaying three-dimensional images of human brain structure by reconstructing them from CT image sequences. Functions of the system include: reslicing (producing cross-sectional images along an arbitrary plane); generation of 3D shaded surfaces of the skull, ventricle and disease lesions (hematoma and tumor etc.); windowing; and translucent display of the skull. All these functions can be combined in an arbitrary way to generate complicated 3D images. The system is expected to be useful especially for surgical planning and education in medicine.

  18. Moving from Batch to Field Using the RT3D Reactive Transport Modeling System

    NASA Astrophysics Data System (ADS)

    Clement, T. P.; Gautam, T. R.

    2002-12-01

    The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs the operator-split strategy, which allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D reaction package. Further, a utility code known as BATCHRXN allows users to independently test and debug their reaction package. To analyze a new reaction system at a batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of Tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of Tetrachloroethane (PCA) and its daughter products. References: 1) Clement, T.P., RT3D - A modular computer code for simulating reactive multi-species transport in 3-Dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report, PNNL-SA-28967, September, 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
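
    The first example problem above, sequential first-order degradation of PCE and its daughter products, corresponds at batch scale to a small linear ODE chain. RT3D expects this as a Fortran-90 reaction package; the sketch below expresses the same kinetics in Python, with illustrative rate constants that are not taken from the reference.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pce_chain(t, c, k):
    """Sequential first-order decay PCE -> TCE -> DCE -> VC (batch scale).

    Mirrors, in Python, the kinetics a user-defined RT3D reaction package
    would encode in Fortran-90; rate constants are illustrative only.
    """
    pce, tce, dce, vc = c
    k1, k2, k3, k4 = k
    return [-k1 * pce,
            k1 * pce - k2 * tce,
            k2 * tce - k3 * dce,
            k3 * dce - k4 * vc]

k = (0.05, 0.03, 0.02, 0.01)            # 1/day, assumed values
c0 = [1.0, 0.0, 0.0, 0.0]               # mg/L, PCE only at t = 0
sol = solve_ivp(pce_chain, (0.0, 365.0), c0, args=(k,),
                t_eval=np.linspace(0.0, 365.0, 8))
print(np.round(sol.y, 3))               # concentration histories of 4 species
```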

  19. A molecular image-directed, 3D ultrasound-guided biopsy system for the prostate

    NASA Astrophysics Data System (ADS)

    Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

    2012-02-01

    Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsies in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% +/- 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT directed, 3D ultrasound-guided, targeted biopsy in human patients.
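
    The DICE overlap ratio quoted above is a standard set-overlap measure between a segmentation and its reference; a minimal implementation is sketched below with toy masks. The 92.4% figure itself comes from the patient data, not from this example.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """DICE overlap ratio between two binary segmentation masks:
    DICE = 2 |A ∩ B| / (|A| + |B|)."""
    seg_a, seg_b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())

# Toy example with two overlapping cubic masks.
a = np.zeros((10, 10, 10), bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), bool); b[3:9, 3:9, 3:9] = True
print(f"DICE = {dice_coefficient(a, b):.3f}")
```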

  20. Three Dimensional Rover/Lander/Orbiter Mission-Planning (3D-ROMPS) System: A Modern Approach to Mission Planning

    NASA Technical Reports Server (NTRS)

    Scharfe, Nathan D.

    2005-01-01

    NASA's current mission planning system is based on point design, two-dimensional display, spread sheets, and report technology. This technology does not enable engineers to analyze the results of parametric studies of missions plans. This technology will not support the increased observational complexity and data volume of missions like Cassini, Mars Reconnaissance Orbiter (MRO), Mars Science Laboratory (MSL), and Mars Sample Return (MSR). The goal of the 3D-ROMPS task has been to establish a set of operational mission planning and analysis tools in the Image Processing Laboratory (IPL) Mission Support Area (MSA) that will respond to engineering requirements for planning future Solar System Exploration (SSE) missions using a three-dimensional display.

  1. Force/Torque Display For Telerobotic Systems

    NASA Technical Reports Server (NTRS)

    Wise, Marion A.

    1989-01-01

    Pictorial cathode-ray-tube (CRT) display of force and/or torque (F/T) data for telerobotic systems, used as an output monitor from a multiaxis sensor or as a command display. Relative positions of two circles represent forces and torques acting on an object, derived from signals from an F/T sensor composed of strain gauges. The graphical presentation is generated on two different graphics systems, one in color and one in black and white. High-level programming facilitates use of additional convenient features in software, extending the usefulness of the sensor data and display. Useful in laboratory experiments, for monitoring performance of an automated system, and for presenting data on the status of the system to an operator at a control station.

  2. A web-based 3D medical image collaborative processing system with videoconference

    NASA Astrophysics Data System (ADS)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images have been playing an irreplaceable role in the realms of medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet is still one of the biggest challenges in supporting these activities. Consequently, we present a new application approach for web-based synchronized collaborative processing and visualization of 3D medical images. Meanwhile, a web-based videoconference function is provided to enhance the performance of the whole system. All the functions of the system are conveniently available in common Web browsers, without any client-side installation. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating the good performance of our system.

  3. Real-time pickup and display integral imaging system without pseudoscopic problem

    NASA Astrophysics Data System (ADS)

    Kim, Jonghyun; Jung, Jae-Hyun; Lee, Byoungho

    2013-03-01

    We propose a novel real-time pickup and display integral imaging system using only a lens array and a high-speed charge-coupled device (CCD). A simple lens array and a high-speed CCD capture the 3D information of the object, and a commercial liquid crystal (LC) display panel shows the elemental images in real time. The reconstructed image is real and orthographic, so the observer can touch the 3D image. Furthermore, our system is free from the pseudoscopic problem by adopting a recent pixel mapping algorithm. This algorithm, based on an image interweaving process, can also change the depth plane of the displayed 3D images in real time. C++ programming is used for real-time capturing, image processing, and display. For real-time, high-quality 3D video generation, a high-resolution, high-frame-rate CCD (AVT Prosilica GX2300C) and LC display panel (IBM 22-inch 3840×2400) are used in the proposed system. Simulations and experiments are presented to verify the proposed system. We expect that our research can serve as a basic technology for real-time 3D broadcasting and interactive 3D technology.

  4. Morphological and Volumetric Assessment of Cerebral Ventricular System with 3D Slicer Software.

    PubMed

    Gonzalo Domínguez, Miguel; Hernández, Cristina; Ruisoto, Pablo; Juanes, Juan A; Prats, Alberto; Hernández, Tomás

    2016-06-01

    We present a technological process based on the 3D Slicer software for the three-dimensional study of the brain's ventricular system for teaching purposes. It allows the morphology of this complex brain structure to be assessed, as a whole and in any spatial position, and compared with pathological studies, in which its anatomy visibly changes. 3D Slicer was also used to obtain volumetric measurements in order to provide a more comprehensive and detailed representation of the ventricular system. We assess the potential of this software for processing high-resolution Magnetic Resonance images and generating three-dimensional reconstructions of the ventricular system. PMID:27147517

  5. Holographic and weak-phase projection system for 3D shape reconstruction using temporal phase unwrapping

    NASA Astrophysics Data System (ADS)

    González, C. A.; Dávila, A.; Garnica, G.

    2007-09-01

    Two projection systems that use an LCoS phase modulator are proposed for 3D shape reconstruction. The LCoS is used either as a holographic system or as a weak-phase projector; both configurations project a set of fringe patterns that are processed by the technique known as temporal phase unwrapping. To minimize the influence of camera sampling and of speckle noise in the projected fringes, a speckle noise reduction technique is applied to the speckle patterns generated by the holographic optical system. Experiments with 3D shape reconstruction of an ophthalmic mold and other test specimens show the viability of the proposed techniques.
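
    For readers unfamiliar with temporal phase unwrapping, a minimal sketch of one common variant is given below: wrapped phase maps are acquired with an increasing number of projected fringes, and each map is unwrapped using the scaled result of the previous, coarser map as a reference. The function name, inputs, and unwrapping schedule are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def temporal_phase_unwrap(wrapped_phases, fringe_counts):
        """Temporal phase unwrapping over a sequence of wrapped phase maps.

        wrapped_phases : list of 2D arrays, wrapped to (-pi, pi], acquired with
                         fringe_counts[0] < fringe_counts[1] < ... projected fringes.
        Returns the unwrapped phase map of the densest fringe pattern.
        """
        unwrapped = wrapped_phases[0]              # coarsest pattern assumed free of 2*pi jumps
        for k in range(1, len(wrapped_phases)):
            scale = fringe_counts[k] / fringe_counts[k - 1]
            predicted = unwrapped * scale          # predict phase from the previous (coarser) result
            wrapped = wrapped_phases[k]
            # add the integer number of 2*pi cycles that brings the wrapped phase
            # closest to the prediction
            cycles = np.round((predicted - wrapped) / (2 * np.pi))
            unwrapped = wrapped + 2 * np.pi * cycles
        return unwrapped
    ```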

  6. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: one is each partner's individual task, and the other is communication with each other. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is a viewpoint from behind the user's own avatar, for smooth communication; the other is the avatar's-eye viewpoint, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for both 3D shape evaluation and communication. The system basically consists of a PC, an HMD, and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users restrict nonverbal communication. We have therefore tried to compensate for the loss of the partner's avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. A sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual space and in real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  7. Multimodal 3D PET/CT system for bronchoscopic procedure planning

    NASA Astrophysics Data System (ADS)

    Cheirsilp, Ronnarit; Higgins, William E.

    2013-02-01

    Integrated positron emission tomography (PET) / computed-tomography (CT) scanners give 3D multimodal data sets of the chest. Such data sets offer the potential for more complete and specific identification of suspect lesions and lymph nodes for lung-cancer assessment. This in turn enables better planning of staging bronchoscopies. The richness of the data, however, makes the visualization and planning process difficult. We present an integrated multimodal 3D PET/CT system that enables efficient region identification and bronchoscopic procedure planning. The system first invokes a series of automated 3D image-processing methods that construct a 3D chest model. Next, the user interacts with a set of interactive multimodal graphical tools that facilitate procedure planning for specific regions of interest (ROIs): 1) an interactive region candidate list that enables efficient ROI viewing in all tools; 2) a virtual PET-CT bronchoscopy rendering with SUV quantitative visualization to give a "fly through" endoluminal view of prospective ROIs; 3) transverse, sagittal, coronal multi-planar reformatted (MPR) views of the raw CT, PET, and fused CT-PET data; and 4) interactive multimodal volume/surface rendering to give a 3D perspective of the anatomy and candidate ROIs. In addition the ROI selection process is driven by a semi-automatic multimodal method for region identification. In this way, the system provides both global and local information to facilitate more specific ROI identification and procedure planning. We present results to illustrate the system's function and performance.
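
    The SUV values used in such quantitative PET visualization are typically body-weight-normalized standardized uptake values. A minimal computation is sketched below; the function and variable names are hypothetical, and the usual assumption that 1 g of tissue occupies about 1 mL is noted in the comments.

    ```python
    def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
        """Body-weight standardized uptake value (SUVbw).

        activity_bq_per_ml : decay-corrected activity concentration in a voxel (Bq/mL)
        injected_dose_bq   : injected tracer activity at scan time (Bq)
        body_weight_kg     : patient body weight (kg)
        """
        body_weight_g = body_weight_kg * 1000.0   # 1 g of tissue is assumed to occupy ~1 mL
        return activity_bq_per_ml / (injected_dose_bq / body_weight_g)
    ```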

  8. Blood pressure measurement and display system

    NASA Technical Reports Server (NTRS)

    Farkas, A. J.

    1972-01-01

    System is described that employs solid state circuitry to transmit visual display of patient's blood pressure. Response of sphygmomanometer cuff and microphone provide input signals. Signals and their amplitudes, from turn-on time to turn-off time, are continuously fed to data transmitter which transmits to display device.

  9. 3-D System-on-System (SoS) Biomedical-Imaging Architecture for Health-Care Applications.

    PubMed

    Sang-Jin Lee; Kavehei, O; Yoon-Ki Hong; Tae Won Cho; Younggap You; Kyoungrok Cho; Eshraghian, K

    2010-12-01

    This paper presents the implementation of a 3-D architecture for a biomedical-imaging system based on a multilayered system-on-system structure. The architecture consists of a complementary metal-oxide semiconductor image sensor layer, memory, 3-D discrete wavelet transform (3D-DWT), 3-D Advanced Encryption Standard (3D-AES), and an RF transmitter as an add-on layer. Multilayer silicon (Si) stacking permits fabrication and optimization of individual layers by different processing technologies to achieve optimal performance. Utilization of a through-silicon-via scheme addresses the required low-power operation as well as high-speed performance. Potential benefits of 3-D vertical integration include an improved form factor as well as a reduction in total wiring length, multifunctionality, power efficiency, and flexible heterogeneous integration. The proposed imaging architecture was simulated using Cadence Spectre and Synopsys HSPICE, while implementation was carried out with Cadence Virtuoso and Mentor Graphics Calibre. PMID:23853380

  10. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system is developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
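
    As a rough illustration of TDOA-based position estimation (not the JSC prototype's algorithm), the sketch below solves for a 3D transmitter position from time-difference measurements at known receiver locations with a Gauss-Newton iteration; all names and parameters are hypothetical.

    ```python
    import numpy as np

    C = 299_792_458.0  # propagation speed (m/s)

    def tdoa_solve(receivers, tdoas, x0, iterations=20):
        """Estimate transmitter position from TDOA measurements.

        receivers : (N, 3) array of receiver coordinates; receivers[0] is the reference.
        tdoas     : (N-1,) measured arrival-time differences t_i - t_0 in seconds.
        x0        : (3,) initial position guess.
        """
        x = np.asarray(x0, dtype=float)
        d_meas = np.asarray(tdoas) * C                   # range differences
        for _ in range(iterations):
            r = np.linalg.norm(receivers - x, axis=1)    # distances to all receivers
            residual = (r[1:] - r[0]) - d_meas
            # Jacobian of the range differences with respect to position
            J = (x - receivers[1:]) / r[1:, None] - (x - receivers[0]) / r[0]
            dx, *_ = np.linalg.lstsq(J, -residual, rcond=None)
            x = x + dx
        return x
    ```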

  11. Hydra 1 data display system

    NASA Technical Reports Server (NTRS)

    Hodgkins, R. L.; Osgood, D. R.

    1968-01-01

    System, named Hydra, generates charts, graphs, and printed matter on slides or conventional negatives and positives, and combines these media with magnetic-tape storage for future updating, so that engineering changes or contract modifications can be readily added to the basic data.

  12. Flexible 3D reconstruction method based on phase-matching in multi-sensor system.

    PubMed

    Wu, Qingyang; Zhang, Baichun; Huang, Jinhui; Wu, Zejun; Zeng, Zeng

    2016-04-01

    Considering the measuring-range limitation of a single-sensor system, multi-sensor systems have become essential for obtaining complete image information of an object in the field of 3D image reconstruction. However, in traditional multi-sensor systems the sensors work independently, each sensor system has to be calibrated separately, and the calibration between all the single-sensor systems is complicated and time-consuming. In this paper, we present a flexible 3D reconstruction method based on phase-matching in a multi-sensor system. While each sensor is being calibrated, the method simultaneously registers the multi-sensor data in a unified coordinate system. After all sensors are calibrated, the whole 3D image data set exists directly in the unified coordinate system, and there is no need to calibrate the positions between sensors any more. Experimental results prove that the method is simple in operation, accurate in measurement, and fast in 3D image reconstruction. PMID:27137020

  13. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble

    PubMed Central

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size—a few microns in diameter—and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera was developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation at much lower cost than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both the lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  14. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.

  15. The in-situ 3D measurement system combined with CNC machine tools

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Jiang, Hongzhi; Li, Xudong; Sui, Shaochun; Tang, Limin; Liang, Xiaoyue; Diao, Xiaochun; Dai, Jiliang

    2013-06-01

    With the development of the manufacturing industry, in-situ 3D measurement of machined workpieces on CNC machine tools is regarded as a new trend in efficient measurement. We introduce a 3D measurement system based on stereovision and the phase-shifting method combined with CNC machine tools, which can measure the 3D profile of machined workpieces between key machining processes. The measurement system utilizes a high-dynamic-range fringe acquisition method to solve the problem of saturation induced by specular light reflected from shiny surfaces such as aluminum alloy or titanium alloy workpieces. We measured two aluminum alloy workpieces on CNC machine tools to demonstrate the effectiveness of the developed measurement system.
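
    A widely used building block of such phase-shifting measurement is the four-step formula for recovering the wrapped fringe phase; the generic sketch below assumes 90° phase shifts and is not tied to the authors' high-dynamic-range acquisition scheme.

    ```python
    import numpy as np

    def four_step_wrapped_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images with 90-degree phase shifts.

        i1..i4 : 2D intensity images captured with projected phase shifts of
                 0, pi/2, pi, and 3*pi/2, respectively.
        Returns the phase wrapped to (-pi, pi].
        """
        return np.arctan2(i4 - i2, i1 - i3)
    ```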

  16. Practical Considerations For A Design Of A High Precision 3-D Laser Scanner System

    NASA Astrophysics Data System (ADS)

    Blais, Francois; Rioux, Marc; Beraldin, J.-Angelo

    1988-11-01

    The Laboratory for Intelligent Systems of the Division of Electrical Engineering of the National Research Council of Canada is intensively involved in the development of laser-based three-dimensional vision systems and their applications. Two basic systems have been invented. One, based on a double aperture mask in front of a CCD camera, has been developed for robotic applications and control. The other technique is based on an auto-synchronized scanning principle to provide accurate, fast, and reliable 3-D coordinates. Using the latter method, several prototypes have been developed for the acquisition of 3-D data of objects and for inspection. This paper will describe some practical considerations for the design and implementation of triangulation-based 3-D range sensors with emphasis on the latter triangulation technique. Some applications and results will be presented.
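
    As background for triangulation-based range sensing of this kind, the sketch below computes the depth of a laser spot from the projection and observation angles across a known baseline; the geometry (both angles measured from the baseline normal) is an assumed textbook configuration, not the auto-synchronized scanner itself.

    ```python
    import math

    def scan_triangulation_depth(baseline_m, alpha_rad, beta_rad):
        """Depth of a laser spot in a scanner-camera triangulation geometry.

        baseline_m : distance between the scanning mirror and the camera center (m)
        alpha_rad  : deflection angle of the projected laser ray, measured from the
                     normal to the baseline at the mirror
        beta_rad   : angle at which the camera observes the spot, measured from the
                     normal to the baseline at the camera

        With both angles measured from the baseline normal, the two rays intersect
        at depth z = b / (tan(alpha) + tan(beta)).
        """
        return baseline_m / (math.tan(alpha_rad) + math.tan(beta_rad))
    ```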

  17. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    NASA Astrophysics Data System (ADS)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is currently widely used for three-dimensional (3D) modeling of physical reality with its developing visualization tools. The modeling of large and complicated phenomena is a challenging problem for the computer graphics currently in use. However, it is possible to visualize such phenomena in 3D by using computer systems. 3D models are used in developing computer games, military training, urban planning, tourism, etc. The use of 3D models for planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While a high-level visualization, where photorealistic visualization techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of the physical reality is generally sufficient for the communication of thematic information. The visual variables, which are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of technologies used for 3D modeling is now available through the use of CityGML. CityGML implements several novel concepts to support interoperability, consistency, and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented at different LoD simultaneously, enabling the analysis and visualization of the same object with regard to different degrees of resolution. Furthermore, two CityGML data sets

  18. Dose Verification of Stereotactic Radiosurgery Treatment for Trigeminal Neuralgia with Presage 3D Dosimetry System

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Thomas, A.; Newton, J.; Ibbott, G.; Deasy, J.; Oldham, M.

    2010-11-01

    Achieving adequate verification and quality assurance (QA) for radiosurgery treatment of trigeminal neuralgia (TGN) is particularly challenging because of the combination of very small fields, very high doses, and complex irradiation geometries (multiple gantry and couch combinations). TGN treatments place extreme requirements on dosimetry tools and QA techniques to ensure adequate verification. In this work we evaluate the potential of the Presage/optical-CT dosimetry system as a tool for the verification of TGN dose distributions at high resolution and in 3D. A TGN treatment was planned and delivered to a Presage 3D dosimeter positioned inside the Radiological Physics Center (RPC) head and neck IMRT credentialing phantom. A 6-arc treatment plan was created using the iPlan system, and a maximum dose of 80 Gy was delivered with a Varian Trilogy machine. The delivered dose to the Presage dosimeter was determined by optical-CT scanning using the Duke Large field-of-view Optical-CT Scanner (DLOS) in 3D, with isotropic resolution of 0.7 mm³. DLOS scanning and reconstruction took about 20 minutes. 3D dose comparisons were made with the planning system. Good agreement was observed between the planned and measured 3D dose distributions, and this work provides strong support for the viability of Presage/optical-CT as a highly useful new approach for verification of this complex technique.

  19. Second order superintegrable systems in conformally flat spaces. IV. The classical 3D Staeckel transform and 3D classification theory

    SciTech Connect

    Kalnins, E.G.; Kress, J.M.; Miller, W. Jr.

    2006-04-15

    This article is one of a series that lays the groundwork for a structure and classification theory of second order superintegrable systems, both classical and quantum, in conformally flat spaces. In the first part of the article we study the Staeckel transform (or coupling constant metamorphosis) as an invertible mapping between classical superintegrable systems on different three-dimensional spaces. We show first that all superintegrable systems with nondegenerate potentials are multiseparable and then that each such system on any conformally flat space is Staeckel equivalent to a system on a constant curvature space. In the second part of the article we classify all the superintegrable systems that admit separation in generic coordinates. We find that there are eight families of these systems.

  20. Low-cost structured-light based 3D capture system design

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.

    2014-03-01

    Most of the 3D capture products currently in the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill in this gap by only using low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.
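
    The differential (small-signal) style of analysis mentioned above can be illustrated for a generic triangulation geometry: differentiating the depth relation z = f·b/d gives the depth error caused by a small correspondence error. The sketch below is a generic illustration under a parallel-axis assumption, not the authors' specific model of the camera-projector system.

    ```python
    def depth_resolution(z_m, baseline_m, focal_length_px, disparity_error_px):
        """Small-signal depth resolution of a triangulation-based capture system.

        For z = f * b / d, differentiation gives |dz| = z**2 / (f * b) * |dd|,
        i.e. depth uncertainty grows quadratically with working distance.

        z_m                : working distance (m)
        baseline_m         : camera-projector baseline (m)
        focal_length_px    : focal length expressed in pixels
        disparity_error_px : smallest resolvable correspondence shift (pixels)
        """
        return (z_m ** 2) / (focal_length_px * baseline_m) * disparity_error_px
    ```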

  1. Three-dimensional display system for medical imaging with computer-generated integral photography

    NASA Astrophysics Data System (ADS)

    Nakajima, Susumu; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi

    2000-05-01

    A 3D display system for medical images based on computer-generated integral photography (IP) has been developed. A new, fast 3D-rendering algorithm has been devised to overcome the difficulties that have prevented practical application of computer-generated IP, namely the cost of computation and the pseudoscopic image problem. The display system as developed requires only a personal computer, a liquid crystal display (LCD), and a fly's eye lens (FEL). Each point in 3D space is reconstructed by the convergence of rays from many pixels on the LCD through the FEL. As the number of such points is limited by the low resolution of the LCD, the algorithm computes a coordinate of the best point for each pixel of the LCD. This reduces computation, performs hidden surface removal, and solves the pseudoscopic image problem. In tests of the system, the locations of images projected 10-40 mm distant from the display were found to be less than 2.5 mm in error. Both stationary and moving IP images of a colored skull, generated from 3D computerized tomography, were projected and could be observed with motion parallax within 10 degrees, both horizontally and vertically, from the front of the display. It can be concluded that the simplicity of design and the geometrical accuracy of projection give this system significant advantages over other 3D display methods.

  2. Investigation of Presage 3D Dosimetry as a Method of Clinically Intuitive Quality Assurance and Comparison to a Semi-3D Delta4 System

    NASA Astrophysics Data System (ADS)

    Crockett, Ethan Van

    The need for clinically intuitive metrics for patient-specific quality assurance in radiation therapy has been well-documented (Zhen, Nelms et al. 2011). A novel transform method has been shown to be effective at converting full-density 3D dose measurements made in a phantom to dose values in the patient geometry, enabling comparisons using clinically intuitive metrics such as dose-volume histograms (Oldham et al. 2011). This work investigates the transform method and compares its calculated dose-volume histograms (DVHs) to DVH values calculated by a Delta4 QA device (Scandidos), marking the first comparison of a true 3D system to a semi-3D device using clinical metrics. Measurements were made using Presage 3D dosimeters, which were read out by an in-house optical-CT scanner. Three patient cases were chosen for the study: one head-and-neck VMAT treatment and two spine IMRT treatments. The transform method showed good agreement with the planned dose values for all three cases. Furthermore, the transformed DVHs adhered to the planned dose with more accuracy than the Delta4 DVHs. The similarity between the Delta4 DVHs and the transformed DVHs, however, was greater for one of the spine cases than it was for the head-and-neck case, implying that the accuracy of the Delta4 Anatomy software may vary from one treatment site to another. Overall, the transform method, which incorporates data from full-density 3D dose measurements, provides clinically intuitive results that are more accurate and consistent than the corresponding results from a semi-3D Delta4 system.

  3. Unified framework for generation of 3D web visualization for mechatronic systems

    NASA Astrophysics Data System (ADS)

    Severa, O.; Goubej, M.; Konigsmarkova, J.

    2015-11-01

    The paper deals with development of a unified framework for generation of 3D visualizations of complex mechatronic systems. It provides a high-fidelity representation of executed motion by allowing direct employment of a machine geometry model acquired from a CAD system. Open-architecture multi-platform solution based on latest web standards is achieved by utilizing a web browser as a final 3D renderer. The results are applicable both for simulations and development of real-time human machine interfaces. Case study of autonomous underwater vehicle control is provided to demonstrate the applicability of the proposed approach.

  4. [Review of visual display system in flight simulator].

    PubMed

    Xie, Guang-hui; Wei, Shao-ning

    2003-06-01

    The visual display system is a key part of, and plays a very important role in, flight simulators and flight training devices. The development history of visual display systems is reviewed, and the principles and characteristics of several visual display systems, including collimated display systems and back-projected collimated display systems, are described. Future directions of visual display systems are analyzed. PMID:12934618

  5. A Comparison of the Perceptual Benefits of Linear Perspective and Physically-Based Illumination for Display of Dense 3D Streamtubes

    SciTech Connect

    Banks, David C

    2008-01-01

    Large datasets typically contain coarse features comprised of finer sub-features. Even if the shapes of the small structures are evident in a 3D display, the aggregate shapes they suggest may not be easily inferred. From previous studies in shape perception, the evidence has not been clear whether physically-based illumination confers any advantage over local illumination for understanding scenes that arise in visualization of large data sets that contain features at two distinct scales. In this paper we show that physically-based illumination can improve the perception for some static scenes of complex 3D geometry from flow fields. We perform human-subjects experiments to quantify the effect of physically-based illumination on participant performance for two tasks: selecting the closer of two streamtubes from a field of tubes, and identifying the shape of the domain of a flow field over different densities of tubes. We find that physically-based illumination influences participant performance as strongly as perspective projection, suggesting that physically-based illumination is indeed a strong cue to the layout of complex scenes. We also find that increasing the density of tubes for the shape identification task improved participant performance under physically-based illumination but not under the traditional hardware-accelerated illumination model.

  6. Mobile 3d Mapping with a Low-Cost Uav System

    NASA Astrophysics Data System (ADS)

    Neitzel, F.; Klonowski, J.

    2011-09-01

    In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment, and the control software are presented. Furthermore, an implemented programme for photogrammetric flight planning and its execution is described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this, web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities of georeferencing are described and the achieved accuracy is determined. The presented workflow is finally used for the acquisition of 3D geodata. Using the example of a landfill survey, it is shown that marketable products can be derived using a low-cost UAV.

  7. Using a 3D Culture System to Differentiate Visceral Adipocytes In Vitro.

    PubMed

    Emont, Margo P; Yu, Hui; Jun, Heejin; Hong, Xiaowei; Maganti, Nenita; Stegemann, Jan P; Wu, Jun

    2015-12-01

    It has long been recognized that body fat distribution and regional adiposity play a major role in the control of metabolic homeostasis. However, the ability to study and compare the cell-autonomous regulation and response of adipocytes from different fat depots has been hampered by the difficulty of inducing preadipocytes isolated from the visceral depot to differentiate into mature adipocytes in culture. Here, we present an easily created 3-dimensional (3D) culture system that can be used to differentiate preadipocytes from the visceral depot as robustly as those from the subcutaneous (sc) depot. The cells differentiated in these 3D collagen gels are mature adipocytes that retain depot-specific characteristics, as determined by imaging, gene expression, and functional assays. This 3D culture system therefore allows for study of the development and function of adipocytes from both depots in vitro and may ultimately lead to a greater understanding of the site-specific functional differences of adipose tissues in metabolic dysregulation. PMID:26425808

  8. [Odor sensing system and olfactory display].

    PubMed

    Nakamoto, Takamichi

    2014-01-01

    In this review, an odor sensing system and an olfactory display are introduced to people in pharmacy. An odor sensing system consists of an array of sensors with partially overlapping specificities and a pattern recognition technique. One example of an odor sensing system is a halitosis sensor, which quantifies the mixture composition of three volatile sulfide compounds. A halitosis sensor was realized using a preconcentrator to raise sensitivity and an electrochemical sensor array to suppress the influence of humidity. The partial least squares (PLS) method was used to quantify the mixture composition, and the experiment shows that sufficient accuracy was obtained. Moreover, the olfactory display, which presents scents to human noses, is explained. A multi-component olfactory display enables the presentation of a variety of smells. Two types of multi-component olfactory display are described. The first uses many solenoid valves with high-speed switching; the valve ON frequency determines the concentration of the corresponding odor component. The second consists of miniaturized liquid pumps and a surface acoustic wave (SAW) atomizer, enabling a wearable olfactory display without smell persistence. Finally, an application of the olfactory display is demonstrated: a virtual ice cream shop with scents was created as a piece of interactive art, in which people can enjoy harmony among vision, audition, and olfaction. In conclusion, both odor sensing systems and olfactory displays can contribute to the field of human health care. PMID:24584010
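
    Partial least squares regression of the kind used to quantify the three-sulfide mixture is available in common libraries; the sketch below uses scikit-learn's PLSRegression on hypothetical sensor-array data (responses X, known compositions Y) purely as an illustration of the fitting and prediction steps.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical training data: each row of X is one sensor-array response pattern,
    # each row of Y the corresponding known concentrations of the three sulfides.
    rng = np.random.default_rng(0)
    X_train = rng.random((40, 8))          # 40 calibration samples, 8 sensor channels
    Y_train = rng.random((40, 3))          # 3 target gas concentrations

    pls = PLSRegression(n_components=4)    # number of latent variables chosen by validation
    pls.fit(X_train, Y_train)

    X_new = rng.random((1, 8))             # a new, unknown sensor response
    print(pls.predict(X_new))              # estimated mixture composition
    ```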

  9. Development of a 3D ultrasound-guided prostate biopsy system

    NASA Astrophysics Data System (ADS)

    Cool, Derek; Sherebrin, Shi; Izawa, Jonathan; Fenster, Aaron

    2007-03-01

    Biopsy of the prostate using ultrasound guidance is the clinical gold standard for diagnosis of prostate adenocarcinoma. However, because early-stage tumors are rarely visible under US, the procedure carries high false-negative rates and patients often require multiple biopsies before cancer is detected. To improve cancer detection, it is imperative that throughout the biopsy procedure, physicians know where they are within the prostate and where they have sampled during prior biopsies. The current biopsy procedure is limited to using only 2D ultrasound images to find and record target biopsy core sample sites. This information leaves ambiguity as the physician tries to interpret the 2D information and apply it to their 3D workspace. We have developed a 3D ultrasound-guided prostate biopsy system that provides 3D intra-biopsy information to physicians for needle guidance and biopsy location recording. The system is designed to conform to the workflow of the current prostate biopsy procedure, making it easier for clinical integration. In this paper, we describe the system design and validate its accuracy by performing an in vitro biopsy procedure on US/CT multi-modal patient-specific prostate phantoms. A clinical sextant biopsy was performed by a urologist on the phantoms, and the 3D models of the prostates were generated with volume errors less than 4% and mean boundary errors of less than 1 mm. Using the 3D biopsy system, needles were guided to within 1.36 +/- 0.83 mm of 3D targets, and the positions of the biopsy sites were accurately localized to 1.06 +/- 0.89 mm for the two prostates.

  10. INFORMATION DISPLAY: CONSIDERATIONS FOR DESIGNING COMPUTER-BASED DISPLAY SYSTEMS.

    SciTech Connect

    O'HARA,J.M.; PIRUS,D.; BELTRATCCHI,L.

    2004-09-19

    This paper discusses the presentation of information in computer-based control rooms. Issues associated with the typical displays currently in use are discussed. It is concluded that these displays should be augmented with new displays designed to better meet the information needs of plant personnel and to minimize the need for interface management tasks (the activities personnel have to do to access and organize the information they need). Several approaches to information design are discussed, specifically addressing: (1) monitoring, detection, and situation assessment; (2) routine task performance; and (3) teamwork, crew coordination, and collaborative work.

  11. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between human binocular vision and a dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision, and obtain the positional relation of prism, camera, and object that gives the best stereo display effect. Finally, using NVIDIA active shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various ways the eyes observe a scene. The stereo imaging system designed by the proposed method can faithfully recover the 3-D shape of the photographed object.
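
    Once the prism single-camera system has been related to an equivalent two-camera rig, depth recovery reduces to the standard binocular relation between disparity and distance. A minimal sketch for an assumed rectified, parallel-axis equivalent stereo pair follows; the names are illustrative.

    ```python
    def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
        """Depth of a scene point from its disparity in a rectified stereo pair.

        focal_length_px : focal length of the (equivalent) cameras in pixels
        baseline_m      : separation of the two equivalent viewpoints (m)
        disparity_px    : horizontal pixel offset of the point between the two views
        """
        return focal_length_px * baseline_m / disparity_px
    ```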

  12. Biomek Cell Workstation: A Flexible System for Automated 3D Cell Cultivation.

    PubMed

    Lehmann, R; Gallert, C; Roddelkopf, T; Junginger, S; Thurow, K

    2016-08-01

    The shift from 2D cultures to 3D cultures enables improvement in cell culture research due to better mimicking of in vivo cell behavior and environmental conditions. Different cell lines and applications require altered 3D constructs. The automation of the manufacturing and screening processes can advance the charge stability, quality, repeatability, and precision. In this study we integrated the automated production of three 3D cell constructs (alginate beads, spheroid cultures, pellet cultures) using the Biomek Cell Workstation and compared them with the traditional manual methods and their consequent bioscreening processes (proliferation, toxicity; days 14 and 35) using a high-throughput screening system. Moreover, the possible influence of antibiotics (penicillin/streptomycin) on the production and screening processes was investigated. The cytotoxicity of automatically produced 3D cell cultures (with and without antibiotics) was mainly decreased. The proliferation showed mainly similar or increased results for the automatically produced 3D constructs. We concluded that the traditional manual methods can be replaced by the automated processes. Furthermore, the formation, cultivation, and screenings can be performed without antibiotics to prevent possible effects. PMID:26203054

  13. The Backlight Control System Aimed at Reducing Crosstalk in Autostereoscopic Displays

    NASA Astrophysics Data System (ADS)

    Xue, Yalan; Wang, Yuanqing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-06-01

    In order to realize an autostereoscopic display, a new directional optic structure is proposed in this article. By providing only a pair of parallax images time-sequentially, aided by a human eye tracking system, a multi-user stereo-parallax 3D display with full resolution was built. This article mainly focuses on the backlight control system of this display, which is based on a simple microcontroller. In view of the crosstalk existing in 3D displays, three effective methods to reduce crosstalk are put forward: a reduced lit-time ratio of the directional backlight, a faster-refresh LCD, and the application of Kalman prediction interpolation. As the experimental results show, these methods perform well in reducing crosstalk, and a strong 3D visual effect without loss of resolution is finally achieved.

  14. Design of virtual display and testing system for moving mass electromechanical actuator

    NASA Astrophysics Data System (ADS)

    Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng

    2015-12-01

    Aiming at the problems of control, measurement, and virtual display of the movement of a moving mass electromechanical actuator (MMEA), a virtual testing system for the MMEA was developed based on a PC-DAQ architecture and the LabVIEW software platform. It can accomplish comprehensive test tasks such as drive control of the MMEA, kinematic parameter tests, centroid position measurement, and virtual display of movement. The system solves the alignment of acquisition times between multiple measurement channels on different DAQ cards. On this basis, the research focused on dynamic 3D virtual display in LabVIEW, and the virtual display of the MMEA was realized both by calling a DLL and by using 3D graph drawing controls. Considering the collaboration with the virtual testing system, including the hardware drivers and the data-acquisition measurement software, the 3D graph drawing controls method was selected, which provides synchronized measurement, control, and display. The system can measure the dynamic centroid position and kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface is realistic and smooth in motion, solving the problem of display and playback for an MMEA enclosed in its shell.

  15. Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems.

    PubMed

    Jung, Jaehoon; Kim, Jeonghyun; Yoon, Sanghyun; Kim, Sangmin; Cho, Hyoungsig; Kim, Changjae; Heo, Joon

    2015-01-01

    The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system's trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach. PMID:25946627

  16. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    ERIC Educational Resources Information Center

    Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

    The advancement of game-based learning has encouraged many related studies, such that students could better learn curriculum by 3-dimension virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning…

  17. Development of an accurate 3D blood vessel searching system using NIR light

    NASA Astrophysics Data System (ADS)

    Mizuno, Yoshifumi; Katayama, Tsutao; Nakamachi, Eiji

    2010-02-01

    Health monitoring systems (HMS) and drug delivery systems (DDS) require accurate needle puncture for automatic blood sampling. In this study, we develop a miniature, highly accurate, automatic 3D blood vessel searching system. The size of the detecting system is 40×25×10 mm. Our searching system uses near-infrared (NIR) LEDs, CMOS camera modules, and image processing units, and employs the stereo method to determine the 3D blood vessel location. The blood vessel visualization exploits hemoglobin's absorption of NIR light. An NIR LED is set behind the finger and illuminates it with near-infrared light, while CMOS camera modules set in front of the finger capture clear blood vessel images. The two-dimensional location of the blood vessel is detected from the luminance distribution of the image, and its depth is calculated by the stereo method, so the 3D blood vessel location is detected automatically by our image processing system. To examine the accuracy of the detecting system, we carried out experiments using finger phantoms with blood vessel diameters of 0.5, 0.75, and 1.0 mm at depths of 0.5-2.0 mm under the artificial tissue surface. The depths obtained by our detecting system showed good agreement with the given depths, confirming the feasibility of the system.

  18. A framework for human spine imaging using a freehand 3D ultrasound system.

    PubMed

    Purnama, Ketut E; Wilkinson, Michael H F; Veldhuizen, Albert G; van Ooijen, Peter M A; Lubbers, Jaap; Burgerhof, Johannes G M; Sardjono, Tri A; Verkerke, Gijbertus J

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular those based on X-rays, its lack of detrimental effects allows it to be used frequently to follow the progression of scoliosis, which sometimes may develop rapidly. Furthermore, 3D ultrasound imaging provides information directly in 3D, in contrast to projection methods. This paper describes a feasibility study of an ultrasound system to provide a 3D image of the human spine, and presents a framework of procedures to perform this task. The framework consists of an ultrasound image acquisition procedure to image a large part of the human spine by means of a freehand 3D ultrasound system, and a volume reconstruction procedure performed in four stages: bin-filling, hole-filling, volume segment alignment, and volume segment compounding. The overall results of the procedures in this framework show that imaging of the human spine using ultrasound is feasible. Vertebral parts such as the transverse processes, laminae, superior articular processes, and spinous processes of the vertebrae appear as clouds of voxels with intensities higher than the surrounding voxels. In sagittal slices, a string of transverse processes appears, representing the curvature of the spine. In the bin-filling stage, the estimated mean absolute noise level of a single measurement of a single voxel was determined. Our comparative study of the hole-filling methods, based on rank-sum statistics, showed that the pixel nearest neighbour (PNN) method with variable radius and with the proposed olympic operation is the best method. Its mean absolute grey-value error was smaller in magnitude than the noise level of a single measurement. PMID:20231799
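
    The pixel-nearest-neighbour hole-filling step with an "olympic" average can be read as: fill each empty voxel from the measured voxels in a small neighbourhood, discarding the extreme values before averaging. The sketch below encodes that reading with hypothetical names; the paper's exact radius handling and trimming rule may differ.

    ```python
    import numpy as np

    def olympic_fill(volume, filled_mask, radius=1):
        """Fill empty voxels from neighbours, discarding the min and max ("olympic" mean).

        volume      : 3D array of reconstructed grey values
        filled_mask : boolean 3D array, True where a voxel already holds measured data
        radius      : half-size of the cubic neighbourhood searched around each hole
        """
        out = volume.copy()
        holes = np.argwhere(~filled_mask)
        for z, y, x in holes:
            zs = slice(max(z - radius, 0), z + radius + 1)
            ys = slice(max(y - radius, 0), y + radius + 1)
            xs = slice(max(x - radius, 0), x + radius + 1)
            neighbours = volume[zs, ys, xs][filled_mask[zs, ys, xs]]
            if neighbours.size > 2:
                trimmed = np.sort(neighbours)[1:-1]   # drop one minimum and one maximum
                out[z, y, x] = trimmed.mean()
            elif neighbours.size > 0:
                out[z, y, x] = neighbours.mean()
        return out
    ```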

  19. High-quality 3-D coronary artery imaging on an interventional C-arm x-ray system

    SciTech Connect

    Hansis, Eberhard; Carroll, John D.; Schaefer, Dirk; Doessel, Olaf; Grass, Michael

    2010-04-15

    Purpose: Three-dimensional (3-D) reconstruction of the coronary arteries during a cardiac catheter-based intervention can be performed from a C-arm based rotational x-ray angiography sequence. It can support the diagnosis of coronary artery disease, treatment planning, and intervention guidance. 3-D reconstruction also enables quantitative vessel analysis, including vessel dynamics from a time-series of reconstructions. Methods: The strong angular undersampling and motion effects present in gated cardiac reconstruction necessitate the development of special reconstruction methods. This contribution presents a fully automatic method for creating high-quality coronary artery reconstructions. It employs a sparseness-prior based iterative reconstruction technique in combination with projection-based motion compensation. Results: The method is tested on a dynamic software phantom, assessing reconstruction accuracy with respect to vessel radii and attenuation coefficients. Reconstructions from clinical cases are presented, displaying high contrast, sharpness, and level of detail. Conclusions: The presented method enables high-quality 3-D coronary artery imaging on an interventional C-arm system.

  20. a 3d Information System for the Documentation of Archaeological Excavations

    NASA Astrophysics Data System (ADS)

    Ardissone, P.; Bornaz, L.; Degattis, G.; Domaine, R.

    2013-07-01

    These methodologies and procedures are presented and described in the article. For the documentation of the archaeological excavations and for the management of the conservation activities (condition assessment, planning, and conservation work), Ad Hoc 3D Solutions has customized two special plug-ins of its own software platform Ad Hoc: Ad Hoc Archaeology and Ad Hoc Conservation. The software platform integrates a 3D database management system. All information (measurements, plotting, areas of interest…) is organized according to its correct 3D position and can be queried using attributes, geometric characteristics, or spatial position. The Ad Hoc Archaeology plug-in allows archaeologists to fill out UUSS sheets in an internal database, place them in the correct location within the 3D model of the site, define the mutual relations between the UUSS, and divide the different archaeological phases. A simple interface facilitates the construction of the stratigraphic chart (matrix), in a 3D environment as well (matrix 3D). The Ad Hoc Conservation plug-in permits conservators and restorers to create relationships between the different approaches to and descriptions of the same parts of the monument, i.e. between stratigraphic units or historical phases and architectural components and/or decay pathologies. The 3D DBMS conservation module uses a codified terminology based on the "ICOMOS illustrated glossary of stone deterioration" and other glossaries. Specific tools permit restorers to correctly compute surfaces and volumes. In this way the extent and intensity of decay can be measured with high precision and a high level of detail, for a correct time and cost estimation of each conservation step.

  1. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821
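
    The combination of simulated annealing (global search) with gradient descent (local refinement) for range-based localization can be sketched as below; the range model, cost function, and all parameters are illustrative assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def cost(p, readers, ranges):
        """Sum of squared differences between modelled and measured reader-to-tag ranges."""
        return np.sum((np.linalg.norm(readers - p, axis=1) - ranges) ** 2)

    def grad(p, readers, ranges):
        """Gradient of the cost with respect to the tag position p."""
        d = np.linalg.norm(readers - p, axis=1)
        return np.sum(2 * (d - ranges)[:, None] * (p - readers) / d[:, None], axis=0)

    def sa_then_gradient(readers, ranges, p0, temp=5.0, cooling=0.95, steps=200, lr=0.01):
        """Simulated annealing to escape local minima, then gradient descent to polish."""
        rng = np.random.default_rng(0)
        p = np.asarray(p0, dtype=float)
        best = p.copy()
        for _ in range(steps):                      # --- annealing phase ---
            cand = p + rng.normal(scale=temp, size=3)
            delta = cost(cand, readers, ranges) - cost(p, readers, ranges)
            if delta < 0 or rng.random() < np.exp(-delta / max(temp, 1e-9)):
                p = cand
                if cost(p, readers, ranges) < cost(best, readers, ranges):
                    best = p
            temp *= cooling
        for _ in range(100):                        # --- gradient-descent refinement ---
            best = best - lr * grad(best, readers, ranges)
        return best
    ```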

  2. 3D homogeneity study in PMMA layers using a Fourier domain OCT system

    NASA Astrophysics Data System (ADS)

    Briones-R., Manuel de J.; Torre-Ibarra, Manuel H. De La; Tavera, Cesar G.; Luna H., Juan M.; Mendoza-Santoyo, Fernando

    2016-11-01

    Micro-metallic particles embedded in polymers are now widely used in several industrial applications in order to modify the mechanical properties of the bulk. A uniform distribution of these particles inside the polymer is highly desirable, for instance, when biological backscattering is simulated or a bio-framework is designed. A 3D Fourier-domain optical coherence tomography system to detect the polymer's internal homogeneity is proposed. This optical system has a 2D camera sensor array that records a fringe pattern used to reconstruct the tomographic image of the sample in a single shot. The system gathers the full 3D tomographic and optical phase information during a controlled deformation applied by means of a motion linear stage. This stage avoids the use of expensive tilting stages, which are commonly controlled by piezo drivers. As a proof of principle, a series of different deformations was applied to detect the uniform or non-uniform internal deposition of copper micro particles. The results are presented as images from the 3D tomographic micro-reconstruction of the samples, together with the 3D optical phase information that identifies the inhomogeneity regions within the poly(methyl methacrylate) (PMMA) volume.

  4. Information Display System for Atypical Flight Phase

    NASA Technical Reports Server (NTRS)

    Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris J. (Inventor); Rosenthal, Loren J. (Inventor); Lynch, Robert E. (Inventor); Chidester, Thomas R. (Inventor); Prothero, Gary L. (Inventor); Andrei, Adi (Inventor); Romanowski, Timothy P. (Inventor); Robin, Daniel E. (Inventor); Prothero, Jason W. (Inventor)

    2007-01-01

    Method and system for displaying information on one or more aircraft flights, where at least one flight is determined to have at least one atypical flight phase according to specified criteria. A flight parameter trace for an atypical phase is displayed and compared graphically with a group of traces, for the corresponding flight phase and corresponding flight parameter, for flights that do not manifest atypicality in that phase.

  5. Beacon data acquisition and display system

    DOEpatents

    Skogmo, D.G.; Black, B.D.

    1991-12-17

    A system for transmitting aircraft beacon information received by a secondary surveillance radar through telephone lines to a remote display includes a digitizer connected to the radar for preparing a serial file of data records containing position and identification information of the beacons detected by each sweep of the radar. This information is transmitted through the telephone lines to a remote computer where it is displayed. 6 figures.

  6. Beacon data acquisition and display system

    DOEpatents

    Skogmo, David G.; Black, Billy D.

    1991-01-01

    A system for transmitting aircraft beacon information received by a secondary surveillance radar through telephone lines to a remote display includes a digitizer connected to the radar for preparing a serial file of data records containing position and identification information of the beacons detected by each sweep of the radar. This information is transmitted through the telephone lines to a remote computer where it is displayed.

  7. Electronic data generation and display system

    NASA Technical Reports Server (NTRS)

    Wetekamm, Jules

    1988-01-01

    The Electronic Data Generation and Display System (EDGADS) is a field-tested paperless technical manual system. The authoring system provides subject matter experts with the option of developing procedureware from digital or hardcopy inputs of technical information from text, graphics, pictures, and recorded media (video, audio, etc.). The display system provides multi-window presentations of graphics, pictures, animations, and action sequences with text and audio overlays on high-resolution color CRT and monochrome portable displays. The database management system allows direct access via hierarchical menus, keyword name, ID number, voice command, or touch of a screen pictorial of the item (ICON). It contains operations and maintenance technical information at three levels of intelligence for a total system.

  8. A Watercolor NPR System with Web-Mining 3D Color Charts

    NASA Astrophysics Data System (ADS)

    Chen, Lieu-Hen; Ho, Yi-Hsin; Liu, Ting-Yu; Hsieh, Wen-Chieh

    In this paper, we propose a watercolor image synthesizing system which integrates user-personalized color charts, built with web-mining technologies, into a 3D watercolor NPR system. Through our system, users can personalize their own color palette by using keywords such as the name of an artist or by choosing color sets on an emotional map. Related images are retrieved from the web by web-mining technology, and appropriate colors are extracted from these images to construct the color chart. The color chart is then rendered in a 3D visualization system which allows users to view and manage the distribution of colors interactively. Users can then apply these colors in our watercolor NPR system through a sketch-based GUI which allows them to manipulate the watercolor attributes of objects intuitively and directly.

  9. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, a small beam divergence angle, and a small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived rigorously. With the help of the ZEMAX software, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, and the spot size is reduced by a factor of 4.545.

  10. Error analysis of 3D laser scanning system for gangue monitoring

    NASA Astrophysics Data System (ADS)

    Hu, Shaoxing; Xia, Yuyang; Zhang, Aiwu

    2012-01-01

    This paper puts forward a system-error evaluation method for a 3D laser scanning system for gangue monitoring. The system errors are analyzed, including integration errors, which can be avoided, and measurement errors, which require a full analysis. The system equation is first established from the relationships among the structural components. The laws of error independence and propagation are then used to build the complete error analysis, and the trend of the error along the X, Y, and Z directions is simulated. The analysis shows that the laser rangefinder accounts for a significant share of the system error, and that the horizontal and vertical scanning angles also influence the system error for given vertical and horizontal scanning parameters.
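
    The sketch below illustrates the kind of first-order error propagation such an analysis relies on, assuming the common range/angle-to-Cartesian model x = r cos v cos h, y = r cos v sin h, z = r sin v; the sensor uncertainties used here are placeholders rather than the paper's instrument specification.

        import numpy as np

        # First-order propagation of rangefinder and scan-angle errors into X, Y, Z.
        # Geometry assumed here (range r, horizontal angle h, vertical angle v):
        #   x = r*cos(v)*cos(h), y = r*cos(v)*sin(h), z = r*sin(v)

        def xyz_sigma(r, h, v, sig_r, sig_h, sig_v):
            """Return (sigma_x, sigma_y, sigma_z) by linear error propagation."""
            J = np.array([
                [np.cos(v) * np.cos(h), -r * np.cos(v) * np.sin(h), -r * np.sin(v) * np.cos(h)],
                [np.cos(v) * np.sin(h),  r * np.cos(v) * np.cos(h), -r * np.sin(v) * np.sin(h)],
                [np.sin(v),              0.0,                        r * np.cos(v)],
            ])
            cov = J @ np.diag([sig_r**2, sig_h**2, sig_v**2]) @ J.T
            return np.sqrt(np.diag(cov))

        # Placeholder values: 20 m range, 1 cm range noise, 0.05 deg angular noise.
        print(xyz_sigma(r=20.0, h=np.radians(30), v=np.radians(10),
                        sig_r=0.01, sig_h=np.radians(0.05), sig_v=np.radians(0.05)))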

  11. Evolution Of Map Display Optical Systems

    NASA Astrophysics Data System (ADS)

    Boot, Alan

    1983-06-01

    It is now over 20 years since Ferranti plc introduced optically projected map displays into operational aircraft navigation systems. Then, as now, it was the function of the display to present an image of a topographical map to a pilot or navigator with his present position clearly identified. Then, as now, the map image was projected from a reduced image stored on colour microfilm. Then, as now, the fundamental design problems are the same. In the exposed environment of an aircraft cockpit, where brightness levels may vary from those associated with direct sunlight on the one hand to starlight on the other, how does one design an optical system with sufficient luminance, contrast and resolution when in the daytime sunlight may fall on the display or in the pilot's eyes, and at night the display luminance must not detract from the pilot's ability to pick up external cues? This paper traces the development of Ferranti plc optically projected map displays from the early V Bomber and the ill-fated TSR2 displays to the Harrier and Concorde displays. It then goes on to the development of combined map and electronic displays (COMED), showing how an earlier design, as fitted to Tornado, has been developed into the current COMED design fitted to the F-18 and Jaguar aircraft. In each of the above display systems, particular features of optical design interest are identified and their impact on the design as a whole is discussed. The use of prisms both for optical rotation and translation, techniques for the maximisation of luminance, the problems associated with contrast enhancement (particularly with polarising filters in the presence of optically active materials), the use of aerial image combining systems, and the impact of the pilot interface on the system parameters are all included. Perhaps the most interesting result in considering the evolution of map displays has not been so much the designer's solutions in overcoming the various design problems but

  12. Digital In-Line Holography System for 3D-3C Particle Tracking Velocimetry

    NASA Astrophysics Data System (ADS)

    Malek, Mokrane; Lebrun, Denis; Allano, Daniel

    Digital in-line holography is a suitable method for measuring three-dimensional (3D) velocity fields. Such a system records directly on a charge-coupled device (CCD) camera a pair of diffraction patterns produced by small particles illuminated by a modulated laser diode. The numerical reconstruction is based on the wavelet transform method. A 3D particle field is reconstructed by computing the wavelet components for different scale parameters; the scale parameter is directly related to the axial distance between a given particle and the CCD camera. The particle images are identified and localized by analyzing the maximum of the wavelet transform modulus (WTMM) and the equivalent diameter of the particle image (Deq). A 3D point-matching (PM) algorithm is then applied to the pair of sets containing the 3D particle locations. In the PM algorithm, the displacement of the particles is modeled by an affine transformation based on dual-number quaternions. Finally, the velocity field is extracted. The system is tested with simulated particle-field displacements, and its feasibility is checked with an experimental displacement.

  13. Image-based indoor localization system based on 3D SfM model

    NASA Astrophysics Data System (ADS)

    Lu, Guoyu; Kambhamettu, Chandra

    2013-12-01

    Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments because the necessary equipment is readily available. After capturing an image and sending it to an image database, the best-matching image is returned together with navigation information. By allowing further camera pose estimation, an image-based localization system that uses a Structure-from-Motion (SfM) reconstruction model can achieve higher accuracy than methods that search through a 2D image database. However, this emerging technique has so far been used only in outdoor environments. In this paper, we introduce the 3D SfM model based image-based localization system into the indoor localization task. We capture images of the indoor environment and reconstruct the 3D model. For localization, images captured with a mobile device are simply matched against the reconstructed 3D model to localize the image. In this process, we use visual words and approximate nearest neighbor methods to accelerate the search for the query features' correspondences. Within each visual word, we conduct a linear search to detect the correspondences. The experiments show that the image-based localization method based on a 3D SfM model gives good localization results in terms of both accuracy and speed.
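
    A hypothetical sketch of the 2D-3D matching and pose step described above is given below: query-image descriptors are matched against descriptors attached to the SfM model's 3D points, and the camera pose is then recovered robustly. A KD-tree stands in for the visual-word/approximate-nearest-neighbour index; all names, shapes, and the use of OpenCV's solvePnPRansac are assumptions rather than the authors' implementation.

        import numpy as np
        import cv2
        from scipy.spatial import cKDTree

        def localize(query_desc, query_xy, model_desc, model_xyz, K):
            """query_desc: (M, D) descriptors, query_xy: (M, 2) keypoint positions,
            model_desc: (N, D) descriptors of SfM 3D points, model_xyz: (N, 3), K: 3x3 intrinsics."""
            tree = cKDTree(model_desc)                # index over the model descriptors
            _, idx = tree.query(query_desc, k=1)      # nearest model descriptor per query feature
            obj = model_xyz[idx].astype(np.float32)   # putative 3D correspondences
            img = query_xy.astype(np.float32)
            # Robust pose estimation; RANSAC rejects wrong descriptor matches.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
            return ok, rvec, tvec, inliers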

  14. CELSS-3D: a broad computer model simulating a controlled ecological life support system.

    PubMed

    Schneegurt, M A; Sherman, L A

    1997-01-01

    CELSS-3D is a dynamic, deterministic, and discrete computer simulation of a controlled ecological life support system (CELSS) focusing on biological issues. A series of linear difference equations within a graphics-based modeling environment, the IThink program, was used to describe a modular CELSS. The overall model included submodels for crop growth chambers, food storage reservoirs, the human crew, a cyanobacterial growth chamber, a waste processor, fixed-nitrogen reservoirs, and the atmospheric gases CO2, O2, and N2. The primary process variable was carbon, although oxygen and nitrogen flows were also modeled. Most of the input data used in CELSS-3D were from published sources. A separate linear optimization program, What'sBest!, was used to compare options for the crew's vegetarian diet. CELSS-3D simulations were run for the equivalent of 3 years with a 1-h time interval. Output from simulations run under nominal conditions was used to illustrate dynamic changes in the concentrations of atmospheric gases. The modular design of CELSS-3D will allow other configurations and various failure scenarios to be tested and compared. PMID:11540449
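
    As a toy illustration of the difference-equation approach (not the CELSS-3D model itself; every rate constant below is an invented placeholder), an hourly carbon/oxygen/food balance can be stepped forward like this:

        # Minimal illustrative sketch of a discrete, 1-hour-step CELSS carbon/oxygen balance.
        # All rate constants are placeholders, not values from CELSS-3D.

        HOURS = 3 * 365 * 24                    # ~3 simulated years at 1-h resolution
        co2, o2, food = 1.0, 21.0, 50.0         # arbitrary initial reservoir levels

        CREW_CO2_OUT  = 0.002                   # CO2 exhaled by crew per hour (placeholder)
        CREW_O2_USE   = 0.002                   # O2 consumed by crew per hour (placeholder)
        CROP_CO2_FIX  = 0.0021                  # CO2 fixed by the crop chamber per hour
        CROP_FOOD_OUT = 0.0005                  # edible biomass produced per hour
        CREW_FOOD_USE = 0.0004                  # food consumed per hour

        for t in range(HOURS):
            fix = min(CROP_CO2_FIX, co2)        # crops cannot fix more CO2 than is present
            co2 += CREW_CO2_OUT - fix
            o2 += fix - CREW_O2_USE             # photosynthesis releases O2 roughly 1:1 with CO2 fixed
            food += CROP_FOOD_OUT - CREW_FOOD_USE

        print(f"after {HOURS} h: CO2={co2:.2f}, O2={o2:.2f}, food={food:.2f}")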

  15. Automatic 3D power line reconstruction of multi-angular imaging power line inspection system

    NASA Astrophysics Data System (ADS)

    Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei

    2007-06-01

    We developed a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between high-voltage power lines and surrounding objects, and to raise an alert if the warning threshold is exceeded. The system generates a DSM of the power line corridor, which comprises the ground surface and ground objects such as trees and houses. To reveal dangerous regions, where ground objects are too close to the power line, 3D power line information must be extracted at the same time. In order to improve the automation level of the extraction and to reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented. It is achieved by using the epipolar constraint and prior knowledge of the pole towers' height. The 3D power line is then obtained by space intersection using the homologous projections found. A flight experiment shows that the proposed method successfully reconstructs the 3D power line, and that the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.
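
    The space-intersection step can be illustrated with the standard linear (DLT) two-view triangulation below; this is a generic formulation under assumed 3x4 projection matrices, not necessarily the authors' exact solver.

        import numpy as np

        # Illustrative linear triangulation ("space intersection") of a power-line point from
        # two views.  P1 and P2 are 3x4 projection matrices; x1 and x2 are the homologous
        # image points found under the epipolar constraint.

        def triangulate(P1, P2, x1, x2):
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)       # least-squares solution of A X = 0
            X = Vt[-1]
            return X[:3] / X[3]               # inhomogeneous 3D point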

  16. Prediction of parallel NIKE3D performance on the KSR1 system

    SciTech Connect

    Su, P.S.; Zacharia, T.; Fulton, R.E.

    1995-05-01

    The finite element method is one of the foundations of numerical solutions to engineering problems. Complex engineering problems solved with finite element analysis typically imply excessively long computational times. Parallel supercomputers have the potential to increase calculation speeds significantly in order to meet these computational requirements. This paper predicts parallel NIKE3D performance on the Kendall Square Research (KSR1) system. The first part of the prediction is based on the implementation of a parallel Cholesky (U^T DU) matrix decomposition algorithm through actual computations on the KSR1 multiprocessor system, with 64 processors, at Oak Ridge National Laboratory. The other predictions are based on actual computations of parallel element matrix generation, parallel global stiffness matrix assembly, and parallel forward/backward substitution on the BBN TC2000 multiprocessor system at Lawrence Livermore National Laboratory. The preliminary results indicate that parallel NIKE3D performance can be attractive under local/shared-memory multiprocessor system environments.
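
    For reference, a serial sketch of the U^T DU (equivalently LDL^T) factorization and the forward/backward substitution that such a solver parallelizes is shown below; it is a minimal illustration, not NIKE3D's implementation.

        import numpy as np

        def utdu_solve(K, f):
            """Solve K x = f for symmetric K via K = U'DU with unit upper-triangular U."""
            n = K.shape[0]
            U = np.eye(n)
            d = np.zeros(n)
            for j in range(n):                              # column-by-column factorization
                d[j] = K[j, j] - np.sum(d[:j] * U[:j, j] ** 2)
                for i in range(j + 1, n):
                    U[j, i] = (K[j, i] - np.sum(d[:j] * U[:j, j] * U[:j, i])) / d[j]
            y = np.linalg.solve(U.T, f)                     # forward substitution
            z = y / d                                       # diagonal scaling
            return np.linalg.solve(U, z)                    # backward substitution

        K = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
        f = np.array([1., 2., 3.])
        print(np.allclose(utdu_solve(K, f), np.linalg.solve(K, f)))   # True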

  17. Schematic displays for the Space Shuttle Orbiter multifunction cathode-ray-tube display system

    NASA Technical Reports Server (NTRS)

    Weiss, W.

    1979-01-01

    A standardized procedure for developing schematic diagrams displayed on cathode ray tubes is described. The display of Spacelab information on the Space Shuttle Orbiter multifunction cathode ray tube display system is used to illustrate this procedure. Schematic displays are compared with the equivalent tabular displays.

  18. Micro-precise spatiotemporal delivery system embedded in 3D printing for complex tissue regeneration.

    PubMed

    Tarafder, Solaiman; Koch, Alia; Jun, Yena; Chou, Conrad; Awadallah, Mary R; Lee, Chang H

    2016-06-01

    Three-dimensional (3D) printing has emerged as an efficient tool for tissue engineering and regenerative medicine, given its advantages for constructing custom-designed scaffolds with tunable microstructures and physical properties. Here we developed a micro-precise spatiotemporal delivery system embedded in 3D printed scaffolds. PLGA microspheres (μS) were encapsulated with growth factors (GFs) and then embedded inside the PCL microfibers that constitute custom-designed 3D scaffolds. Given the substantial difference in melting points between PLGA and PCL and their low heat conductivity, the μS were able to maintain their original structure while protecting the GFs' bioactivity. Micro-precise spatial control of multiple GFs was achieved by interchanging dispensing cartridges during a single printing process. Spatially controlled delivery of GFs, with a prolonged release, guided the formation of multi-tissue interfaces from bone marrow derived mesenchymal stem/progenitor cells (MSCs). To investigate the efficacy of the micro-precise delivery system embedded in a 3D printed scaffold, temporomandibular joint (TMJ) disc scaffolds were fabricated with micro-precise spatiotemporal delivery of CTGF and TGFβ3, mimicking native-like multiphase fibrocartilage. In vitro, TMJ disc scaffolds spatially embedded with CTGF/TGFβ3-μS resulted in the formation of multiphase fibrocartilaginous tissues from MSCs. In vivo, TMJ disc perforation was performed in rabbits, followed by implantation of CTGF/TGFβ3-μS-embedded scaffolds. After 4 wks, the CTGF/TGFβ3-μS-embedded scaffolds significantly improved healing of the perforated TMJ disc as compared with the degenerated TMJ discs in the control group, which received scaffolds embedded with empty μS. In addition, CTGF/TGFβ3-μS-embedded scaffolds significantly prevented arthritic changes on the TMJ condyles. In conclusion, our micro-precise spatiotemporal delivery system embedded in 3D printing may serve as an efficient tool to regenerate complex and inhomogeneous tissues. PMID

  19. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex, and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA. PMID:21181572

  20. Performance specification for control tower display systems

    NASA Astrophysics Data System (ADS)

    Aleva, Denise L.; Meyer, Frederick M.

    2003-09-01

    Personnel in airport control towers monitor and direct the takeoff of outgoing aircraft, the landing of incoming aircraft, and all movements of aircraft on the ground. Although the primary source of information for the Local Controller, Assistant Local Controller, and Ground Controller is the real world viewed through the windows of the control tower, electronic displays are also used to provide situation awareness. Because of the criticality of the work performed by the controllers and the rather unique environment of the air traffic control tower, display hardware standards developed for general use are not directly applicable. The Federal Aviation Administration (FAA) requested the assistance of the Air Force Research Laboratory Human Effectiveness Directorate in producing a document that can be adopted as a Tower Display Standard usable by display engineers, human factors practitioners, and system integrators. Particular emphasis was placed on human factors issues applicable to the control tower environment and controller task demands.

  1. On the critical one-component velocity regularity criteria to 3-D incompressible MHD system

    NASA Astrophysics Data System (ADS)

    Liu, Yanlin

    2016-05-01

    Let (u, b) be a sufficiently smooth solution of the 3-D incompressible MHD system. We prove that if (u, b) blows up at a finite time T*, then for any p ∈ ]4, ∞[ there holds $\int_0^{T^*} \bigl( \| u^3(t') \|_{\dot{H}^{1/2+2/p}}^{p} + \| b(t') \|_{\dot{H}^{1/2+2/p}}^{p} \bigr)\, dt' = \infty$. We remark that all these quantities are of the critical regularity of the MHD system.

  2. The design of 3D optical system for multidirectional phase tomography

    NASA Astrophysics Data System (ADS)

    Antoš, Martin

    2008-12-01

    The design of a 3D optical system for a multidirectional phase tomograph is presented in detail. The suggested tomograph uses a multidirectional holographic interferometer with diffuse light. The method of dividing the laser beam into object and reference beams is described. The geometrical dimensions of the testing area and the optical parameters of the projection beams were optimised in order to increase the number of obtainable angular projections. Finally, the projection properties of the tomograph's scanning system are presented.

  3. A Photo-Realistic 3-D Mapping System for Extreme Nuclear Environments: Chornobyl

    NASA Technical Reports Server (NTRS)

    Maimone, M.; Matthies, L.; Osborn, J.; Teza, J.; Thayer, S.

    1998-01-01

    We present a novel stereoscopic mapping system for use in nuclear accident settings. First, we discuss a radiation-shielded sensor array designed to tolerate a cumulative dose of 10^6 R. Next, we give procedures to ensure timely, accurate range estimation using trinocular stereo. Finally, we review the implementation of a system for the integration of range information into a 3-D, textured, metrically accurate surface mesh.

  4. Development and characterization of 3D-printed feed spacers for spiral wound membrane systems.

    PubMed

    Siddiqui, Amber; Farhat, Nadia; Bucs, Szilárd S; Linares, Rodrigo Valladares; Picioreanu, Cristian; Kruithof, Joop C; van Loosdrecht, Mark C M; Kidwell, James; Vrouwenvelder, Johannes S

    2016-03-15

    Feed spacers play an important role in the impact of biofouling on the performance of spiral-wound reverse osmosis (RO) and nanofiltration (NF) membrane systems. The objective of this study was to propose a strategy for developing, characterizing, and testing feed spacers by numerical modeling, three-dimensional (3D) printing of feed spacers, and experimental membrane fouling simulator (MFS) studies. The results of numerical modeling of the hydrodynamic behavior of various feed spacer geometries suggested that the impact of spacers on hydrodynamics and biofouling can be improved. A good agreement was found between the modeled and measured relationship between linear flow velocity and pressure drop for feed spacers with the same geometry, indicating that modeling can serve as the first step in spacer characterization. An experimental comparison study of a feed spacer currently applied in practice and a 3D printed feed spacer with the same geometry showed (i) similar hydrodynamic behavior, (ii) similar pressure drop development with time and (iii) similar biomass accumulation during MFS biofouling studies, indicating that 3D printing technology is an alternative strategy for the development of thin feed spacers with a complex geometry. Based on the numerical modeling results, a modified feed spacer with a low pressure drop was selected for 3D printing. The comparison study of the feed spacer from practice and the modified-geometry 3D printed feed spacer established that the 3D printed spacer had (i) a lower pressure drop during hydrodynamic testing and (ii) a lower pressure drop increase in time with the same accumulated biomass amount, indicating that modifying feed spacer geometries can reduce the impact of accumulated biomass on membrane performance. The combination of numerical modeling of feed spacers and experimental testing of 3D printed feed spacers is a promising strategy (rapid, low cost and representative) to develop advanced feed spacers aiming to reduce the impact of

  5. Integrated Avionics System (IAS), Integrating 3-D Technology On A Spacecraft Panel

    NASA Technical Reports Server (NTRS)

    Hunter, Don J.; Halpert, Gerald

    1999-01-01

    As spacecraft designs converge toward miniaturization, and with the volumetric and mass challenges placed on avionics, programs will continue to advance the state of the art in spacecraft system development, facing new challenges to reduce power, mass, and volume. Traditionally, the trend is to focus on high-density 3-D packaging technologies. Industry has made significant progress in 3-D technologies and other related internal and external interconnection schemes. Although new technologies have improved packaging densities, a system packaging architecture is required that not only reduces spacecraft volume and mass budgets but also increases integration efficiency and provides the modularity and flexibility to accommodate multiple missions while maintaining a low recurring cost. With these challenges in mind, a novel system packaging approach incorporates solutions that provide broader environmental applications, more flexible system interconnectivity, scalability, and simplified assembly, test, and integration schemes. The Integrated Avionics System (IAS) provides a low-mass, modular, distributed or centralized packaging architecture which combines rigid-flex technologies, high-density COTS hardware, and a new 3-D mechanical packaging approach, the Horizontal Mounted Cube (HMC). This paper describes the fundamental elements of the IAS, the HMC hardware design, system integration, and environmental test results.

  6. Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code System.

    2013-06-24

    Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous energy cross sections and 700 group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700 group structure extends from 10^-5 eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701 point photon data were derived using the Livermore EPDL97 file. The new 701 point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check author's homepage for related information: http

  7. Image enhancement system for mobile displays

    NASA Astrophysics Data System (ADS)

    Parkkinen, Jaana; Nenonen, Petri

    2005-02-01

    In this paper, we present a system for enhancing digital photography on mobile displays. The system uses adaptive filtering and display-specific methods to maximize the subjective quality of images. Because mobile platforms have a limited amount of memory and processing power, we describe computationally efficient scaling and enhancement algorithms that are especially suitable for mobile devices and displays. We also show how a proper arrangement of these algorithms forms an image processing chain that is optimized for mobile use. The developed image enhancement system has been implemented on the Nokia Series60 platform and tested on imaging phones. Tests and results show that a significant quality improvement can be achieved with this solution within the processing power and memory limitations of mobile platforms.
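
    To make the idea concrete, the sketch below shows a generic, computationally cheap scale-then-sharpen step of the kind such a chain might contain; it assumes a single-channel image and is an illustration only, not the paper's (or Nokia's) algorithm.

        import numpy as np
        from scipy import ndimage

        def enhance_for_display(img, out_h, out_w, amount=0.6, sigma=1.0):
            """Downscale a grayscale image to the display size and apply a mild unsharp mask."""
            zoom = (out_h / img.shape[0], out_w / img.shape[1])
            small = ndimage.zoom(img.astype(np.float32), zoom, order=1)   # bilinear resize
            blurred = ndimage.gaussian_filter(small, sigma)
            sharp = small + amount * (small - blurred)                    # unsharp mask
            return np.clip(sharp, 0, 255).astype(np.uint8)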

  8. Projection-type dual-view three-dimensional display system based on integral imaging.

    PubMed

    Jeong, Jinsoo; Lee, Chang-Kun; Hong, Keehoon; Yeom, Jiwoon; Lee, Byoungho

    2014-09-20

    A dual-view display system provides two different images in different directions. Most of them only present two-dimensional images for observers. In this paper, we propose a projection-type dual-view three-dimensional (3D) display system based on integral imaging. To assign directivities to the images, a projection-type display and dual-view screen with lenticular lenses are implemented. The lenticular lenses split the collimated image from the projection device into two different directions. The separated images are integrated by a single lens array in front of the screen, and full-parallax 3D images are observed in two different viewing regions. The visibility of the reconstructed 3D images can be improved by using high-density lenticular lenses and a high numerical aperture lens array. We explain the principle of the proposed method and verify the feasibility of the proposed system with simulations and experimental results. PMID:25322119

  9. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure, resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with the slope not significantly different from 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other did not show a significant difference (p = 0.44) between 3D US and MRI in a paired t-test.

  10. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to infer that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field of view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvement in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views. PMID:27166019

  11. Seismic performance of a novel 3D isolation system on continuous bridges

    NASA Astrophysics Data System (ADS)

    Ou, J. P.; Jia, J. F.

    2010-04-01

    Remarkable vertical seismic motion is one of the prominent characteristics of near-fault earthquake motions, but the traditional and widely used base isolation systems can effectively mitigate only horizontal seismic responses and structural damage. A promising three-dimensional (3D) seismic isolation bearing, consisting of a laminated rubber bearing with lead core (LRB) and a combined coned disc spring with a vertical energy dissipation device (e.g., an inner fluid viscous cylindrical damper or steel damper), is proposed to mitigate horizontal and vertical structural seismic responses simultaneously and separately. Three groups of seismic ground motion records were selected to validate the effectiveness of the proposed 3D seismic isolation bearing on a continuous slab bridge. The appropriate damping of the vertical damping device is determined through a parametric study. The analysis results show that the proposed 3D isolation bearing effectively mitigates vertical and horizontal structural seismic responses simultaneously. Near-fault pulse-type seismic motions should be considered in seismic isolation design and evaluation. The proper damping ratio of the vertical damping device should be 20%-30% for favorable vertical isolation effectiveness. The proposed 3D seismic isolation bearing is promising for application to medium-to-short span bridges and even some building structures.

  12. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of breast to detect the breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct 3D image of breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed in tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging system using C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates breast tomosynthesis imaging problem. Results obtained with various methods including algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
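
    As an illustration of the iterative family mentioned above, a minimal ART (Kaczmarz) sweep for a toy linear system A x = b is sketched below; the system matrix, relaxation factor, and sizes are placeholders, and the simulator's actual C++ implementation is not reproduced here.

        import numpy as np

        def art_sweep(A, b, x, relaxation=0.25):
            """One pass of the algebraic reconstruction technique over all projection rays."""
            for i in range(A.shape[0]):
                a = A[i]
                denom = a @ a
                if denom > 0:
                    x = x + relaxation * (b[i] - a @ x) / denom * a
            return x

        rng = np.random.default_rng(0)
        A = rng.random((40, 20))                 # toy system matrix (rays x voxels)
        x_true = rng.random(20)
        b = A @ x_true                           # noiseless toy projections
        x = np.zeros(20)
        for _ in range(50):
            x = art_sweep(A, b, x)
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative error shrinks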

  13. A 3D human neural cell culture system for modeling Alzheimer’s disease

    PubMed Central

    Kim, Young Hye; Choi, Se Hoon; D’Avanzo, Carla; Hebisch, Matthias; Sliwinski, Christopher; Bylykbashi, Enjana; Washicosky, Kevin J.; Klee, Justin B.; Brüstle, Oliver; Tanzi, Rudolph E.; Kim, Doo Yeon

    2015-01-01

    Stem cell technologies have facilitated the development of human cellular disease models that can be used to study pathogenesis and test therapeutic candidates. These models hold promise for complex neurological diseases such as Alzheimer’s disease (AD) because existing animal models have been unable to fully recapitulate all aspects of pathology. We recently reported the characterization of a novel three-dimensional (3D) culture system that exhibits key events in AD pathogenesis, including extracellular aggregation of β-amyloid and accumulation of hyperphosphorylated tau. Here we provide instructions for the generation and analysis of 3D human neural cell cultures, including the production of genetically modified human neural progenitor cells (hNPCs) with familial AD mutations, the differentiation of the hNPCs in a 3D matrix, and the analysis of AD pathogenesis. The 3D culture generation takes 1–2 days. The aggregation of β-amyloid is observed after 6-weeks of differentiation followed by robust tau pathology after 10–14 weeks. PMID:26068894

  14. A 3D human neural cell culture system for modeling Alzheimer's disease.

    PubMed

    Kim, Young Hye; Choi, Se Hoon; D'Avanzo, Carla; Hebisch, Matthias; Sliwinski, Christopher; Bylykbashi, Enjana; Washicosky, Kevin J; Klee, Justin B; Brüstle, Oliver; Tanzi, Rudolph E; Kim, Doo Yeon

    2015-07-01

    Stem cell technologies have facilitated the development of human cellular disease models that can be used to study pathogenesis and test therapeutic candidates. These models hold promise for complex neurological diseases such as Alzheimer's disease (AD), because existing animal models have been unable to fully recapitulate all aspects of pathology. We recently reported the characterization of a novel 3D culture system that exhibits key events in AD pathogenesis, including extracellular aggregation of amyloid-β (Aβ) and accumulation of hyperphosphorylated tau. Here we provide instructions for the generation and analysis of 3D human neural cell cultures, including the production of genetically modified human neural progenitor cells (hNPCs) with familial AD mutations, the differentiation of the hNPCs in a 3D matrix and the analysis of AD pathogenesis. The 3D culture generation takes 1-2 d. The aggregation of Aβ is observed after 6 weeks of differentiation, followed by robust tau pathology after 10-14 weeks. PMID:26068894

  15. First 3D reconstruction of the rhizocephalan root system using MicroCT

    NASA Astrophysics Data System (ADS)

    Noever, Christoph; Keiler, Jonas; Glenner, Henrik

    2016-07-01

    Parasitic barnacles (Cirripedia: Rhizocephala) are highly specialized parasites of crustaceans. Instead of an alimentary tract for feeding, they utilize a system of roots that infiltrates the body of their hosts to absorb nutrients. Using X-ray micro computed tomography (MicroCT) and computer-aided 3D reconstruction, we document the spatial organization of this root system, the interna, inside the intact host and also demonstrate its use for morphological examination of the parasite's reproductive part, the externa. This is the first 3D visualization of the unique root system of the Rhizocephala in situ, showing how it is related to the inner organs of the host. We investigated the interna from different parasitic barnacles of the family Peltogastridae, which are parasitic on anomuran crustaceans. Rhizocephalan parasites of pagurid hermit crabs and lithodid crabs were analysed in this study.

  16. Development of a 3D Digital Particle Image Thermometry and Velocimetry (3DDPITV) System

    NASA Astrophysics Data System (ADS)

    Schmitt, David; Rixon, Greg; Dabiri, Dana

    2006-11-01

    A novel 3D Digital Particle Image Thermometry and Velocimetry (3DDPITV) system has been designed and fabricated. By combining 3D Digital Particle Image Velocimetry (3DDPIV) and Digital Particle Image Thermometry (DPIT) into one system, this technique provides simultaneous temperature and velocity data in a volume of ~1 x 1 x 0.5 in^3 using temperature-sensitive liquid crystal particles as flow sensors. Two high-intensity xenon flashlamps were used as illumination sources. The imaging system consists of six CCD cameras, three allocated for measuring velocity, based on particle motion, and three for measuring temperature, based on particle color. The cameras were optically aligned using a precision grid and high-resolution translation stages. Temperature calibration was then performed using a precision thermometer and a temperature-controlled bath. Results from proof-of-concept experiments will be presented and discussed.

  17. Photon counting x-ray CT with 3D holograms by CdTe line sensor

    NASA Astrophysics Data System (ADS)

    Koike, A.; Yomori, M.; Morii, H.; Neo, Y.; Aoki, T.; Mimura, H.

    2008-08-01

    A novel 3-D display system is required in the medical treatment and non-destructive testing fields. In these fields, X-ray CT systems are used to obtain 3-D information. However, there is no meaningful 3-D presentation of X-ray CT data, and there is also no practical 3-D display system. Therefore, in this paper, we propose an X-ray 3-D CT display system that combines a photon-counting X-ray CT system with a holographic image display system. The advantage of this system was demonstrated by comparing the holographic calculation time and the recognizability of the reconstructed image.

  18. 3D Image Acquisition System Based on Shape from Focus Technique

    PubMed Central

    Billiot, Bastien; Cointault, Frédéric; Journaux, Ludovic; Simon, Jean-Claude; Gouton, Pierre

    2013-01-01

    This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In the agronomic sciences, 3D acquisition of natural scenes is difficult due to their complex nature. Our system is based on the Shape from Focus technique initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform the technique. Shape from Focus is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting multi-camera systems; this problem occurs frequently in complex natural scenes such as agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system. Once this focus measure has been computed, the depth map of the scene can be created. PMID:23591964
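
    A minimal sketch of the Shape-from-Focus idea is given below: a local focus measure is evaluated on every slice of the stack and, for each pixel, the slice where it peaks gives the depth index. The Laplacian-energy measure and window size are common choices assumed here, not necessarily those of the paper.

        import numpy as np
        from scipy import ndimage

        def depth_from_focus(stack, window=9):
            """stack: (n_slices, H, W) grayscale images taken at known focus positions."""
            measures = []
            for img in stack.astype(np.float32):
                lap = np.abs(ndimage.laplace(img))                     # high-pass response
                measures.append(ndimage.uniform_filter(lap, window))   # local focus energy
            focus = np.stack(measures)                                 # (n_slices, H, W)
            return np.argmax(focus, axis=0)                            # slice index ~ depth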

  19. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating asset information, and data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit, and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated experimentally by mounting it on a truck and a golf cart. Using the derived sensor models, geo-referenced images and 3D point clouds are produced. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. The scope of such extraction techniques, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory, is then discussed. PMID:26985897
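
    The RANSAC plane-fitting step mentioned above can be illustrated with the generic sketch below; the distance threshold and iteration count are placeholders, not the paper's settings, and the authors' edge-extraction stage is not shown.

        import numpy as np

        def ransac_plane(points, n_iter=500, dist_thresh=0.05, rng=np.random.default_rng(0)):
            """Fit a plane to an (N, 3) point cloud; returns ((normal, d), inlier_mask)."""
            best_inliers, best_model = None, None
            for _ in range(n_iter):
                p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(p2 - p1, p3 - p1)
                norm = np.linalg.norm(normal)
                if norm < 1e-9:
                    continue                                  # degenerate (collinear) sample
                normal /= norm
                d = -normal @ p1
                dist = np.abs(points @ normal + d)            # point-to-plane distances
                inliers = dist < dist_thresh
                if best_inliers is None or inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (normal, d)
            return best_model, best_inliers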

  20. Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.

    PubMed

    Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey

    2014-05-01

    Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application, then trajectories of points on the 3-D surface are given by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast. PMID:24770915
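
    The surface-motion measurement relies on dense optical flow between consecutive frames. The sketch below uses OpenCV's Farneback method as a generic stand-in for the paper's customized optical-flow implementation; the parameter values are ordinary defaults, not the authors' tuning.

        import cv2

        def surface_flow(frame_prev, frame_next):
            """Dense per-pixel displacement between two consecutive BGR frames."""
            g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
            g1 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
            # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
            flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 21, 3, 5, 1.1, 0)
            return flow                      # (H, W, 2) displacement field in pixels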

  1. Development of Mobile Mapping System for 3D Road Asset Inventory

    PubMed Central

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating asset information, and data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit, and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated experimentally by mounting it on a truck and a golf cart. Using the derived sensor models, geo-referenced images and 3D point clouds are produced. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. The scope of such extraction techniques, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory, is then discussed. PMID:26985897

  2. Wind information display system user's manual

    NASA Technical Reports Server (NTRS)

    Roe, J.; Smith, G.

    1977-01-01

    The Wind Information Display System (WINDS) provides flexible control through system-user interaction for collecting wind shear data, processing the data in real time, displaying the processed data, storing raw data on magnetic tapes, and post-processing raw data. The data are received from two asynchronous laser Doppler velocimeters (LDVs) and include position, velocity, and intensity information. The raw data are written onto magnetic tape for permanent storage and are also processed in real time to depict wind velocities in a given spatial region.

  3. The application of autostereoscopic display in smart home system based on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Ling, Zhi

    2015-03-01

    A smart home is a system for controlling home devices, and such systems are becoming more and more popular in our daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. Meanwhile, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as the control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared with a traditional smart home system, the proposed system applies autostereoscopic display to the smart home control interface, enhancing the realism, user-friendliness, and visual comfort of the interface.
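
    As a much-simplified CPU illustration of the view-interleaving idea (the paper performs per-pixel synthesis on the GPU with OpenGL ES shaders), two views can be combined column-by-column for a lenticular panel as follows; this mapping is the simplest possible one and is assumed here purely for illustration.

        import numpy as np

        def interleave_two_views(left, right):
            """left, right: (H, W, 3) uint8 images of the same size; returns the composite."""
            out = left.copy()
            out[:, 1::2, :] = right[:, 1::2, :]     # odd columns take the right-eye view
            return out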

  4. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  5. A comprehensive evaluation of the PRESAGE∕optical-CT 3D dosimetry system

    PubMed Central

    Sakhalkar, H. S.; Adamovics, J.; Ibbott, G.; Oldham, M.

    2009-01-01

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5× scanner from MGS Research, Inc). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H&N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1×3 cm2) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane, for independent verification. Also, it enabled rigorous evaluation of robustness, reproducibility and accuracy of response at the three dose levels. The OCTOPUS 5× commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition technique yielded substantially better noise characteristics (noise reduced to ~2%) than had been achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels. Postirradiation, the dosimeters were all found to exhibit a slight increase in opaqueness
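
    For readers unfamiliar with the metric, the sketch below shows a brute-force form of the 3D gamma comparison (Low et al.) for a 2%/1 mm criterion; the grid spacing, global normalization, and edge handling are illustrative assumptions rather than the authors' analysis code.

        import numpy as np

        def gamma_3d(dose_ref, dose_eval, spacing_mm=1.0, dta_mm=1.0, dd_frac=0.02):
            """Per-voxel gamma index; points with gamma <= 1 pass the 2%/1 mm test."""
            norm = dd_frac * dose_ref.max()                 # global dose-difference criterion
            reach = int(np.ceil(2 * dta_mm / spacing_mm))   # voxel search radius
            gamma2 = np.full(dose_ref.shape, np.inf)
            for dz in range(-reach, reach + 1):
                for dy in range(-reach, reach + 1):
                    for dx in range(-reach, reach + 1):
                        # np.roll wraps at the volume edges; a real implementation would mask them.
                        shifted = np.roll(dose_eval, (dz, dy, dx), axis=(0, 1, 2))
                        r2_mm = (np.array([dz, dy, dx], dtype=float) * spacing_mm) ** 2
                        cand = ((shifted - dose_ref) / norm) ** 2 + r2_mm.sum() / dta_mm ** 2
                        gamma2 = np.minimum(gamma2, cand)
            return np.sqrt(gamma2)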

  6. A comprehensive evaluation of the PRESAGE/optical-CT 3D dosimetry system.

    PubMed

    Sakhalkar, H S; Adamovics, J; Ibbott, G; Oldham, M

    2009-01-01

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5x scanner from MGS Research, Inc). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H&N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1 x 3 cm2) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane, for independent verification. Also, it enabled rigorous evaluation of robustness, reproducibility and accuracy of response at the three dose levels. The OCTOPUS 5x commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition technique yielded substantially better noise characteristics (noise reduced to approximately 2%) than had been achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels. Postirradiation, the dosimeters were all found to exhibit a slight increase in

  7. Comparison of different 3D navigation systems by a clinical "user".

    PubMed

    Cartellieri, M; Kremser, J; Vorbeck, F

    2001-01-01

    Three-dimensional navigation systems are routinely used in endoscopic skull base surgery, neurosurgery, maxillo-facial surgery, and endoscopic sinus surgery. Their precision can, however, change in the course of one experiment. We have compared five different 3D navigation systems and discuss here possible reasons for the limits of system precision. A plexiglass cube on which test points were marked served as a test model. Two well-trained system users measured the distances between the test points in each of the five systems. The results were compared with reference data provided by the NUMEREX device at the Technical University of Vienna. The accuracy data shown by all these 3D navigation systems ranged from 0.0 mm to 6.67 mm. The accuracy of a system calculated in advance did not always correspond with the system precision on the screen. The system precision in the center of the cube was higher than on its surface, which led us to conclude that the angle between the tracker system and the pointing device touching the test point may be critical for system precision. Applying an automatic registration step did not result in greater system precision. Slice thickness and the angle of the pointing device seem to be responsible for system precision. PMID:11271433

  8. Advanced rotorcraft helmet display sighting system optics

    NASA Astrophysics Data System (ADS)

    Raynal, Francois; Chen, Muh-Fa

    2002-08-01

    Kaiser Electronics' Advanced Rotorcraft Helmet Display Sighting System is a biocular helmet-mounted display (HMD) for rotary-wing aviators. Advanced rotorcraft HMDs require low head-supported weight, low center-of-mass offsets, low peripheral obstruction of the visual field, large exit pupils, large eye relief, a wide field of view (FOV), high resolution, low luning, sunlight readability with high contrast, and low prismatic deviations. Compliance with these safety, user-acceptance, and optical-performance requirements is challenging. The optical design presented in this paper provides an excellent balance of these different and conflicting requirements. The Advanced Rotorcraft HMD optical design is a pupil-forming, off-axis catadioptric system that incorporates a transmissive SXGA Active Matrix Liquid Crystal Display (AMLCD), an LED array backlight, and a diopter adjustment mechanism.

  9. Tree root systems competing for soil moisture in a 3D soil-plant model

    NASA Astrophysics Data System (ADS)

    Manoli, Gabriele; Bonetti, Sara; Domec, Jean-Christophe; Putti, Mario; Katul, Gabriel; Marani, Marco

    2014-04-01

    Competition for water among multiple tree rooting systems is investigated using a soil-plant model that accounts for soil moisture dynamics and root water uptake (RWU), whole plant transpiration, and leaf-level photosynthesis. The model is based on a numerical solution to the 3D Richards equation modified to account for a 3D RWU, trunk xylem, and stomatal conductances. The stomatal conductance is determined by combining a conventional biochemical demand formulation for photosynthesis with an optimization hypothesis that selects stomatal aperture so as to maximize carbon gain for a given water loss. Model results compare well with measurements of soil moisture throughout the rooting zone, of total sap flow in the trunk xylem, as well as of leaf water potential collected in a Loblolly pine forest. The model is then used to diagnose plant responses to water stress in the presence of competing rooting systems. Unsurprisingly, the overlap between rooting zones is shown to enhance soil drying. However, the 3D spatial model yielded transpiration-bulk root-zone soil moisture relations that do not deviate appreciably from their proto-typical form commonly assumed in lumped eco-hydrological models. The increased overlap among rooting systems primarily alters the timing at which the point of incipient soil moisture stress is reached by the entire soil-plant system.
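
    For reference, the governing soil-water equation such models solve is the Richards equation with a distributed root-water-uptake sink; the standard mixed form is shown below (the authors' exact notation and their modified uptake term are not reproduced here):

        \frac{\partial \theta(\psi)}{\partial t}
          = \nabla \cdot \left[ K(\psi)\, \nabla \left( \psi + z \right) \right] - S_{\mathrm{RWU}}(x, y, z, t)

    where θ is the volumetric soil moisture, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity, z the vertical coordinate accounting for gravity, and S_RWU the root-water-uptake sink that couples the soil to the transpiring plants.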

  10. A 3D visualization and guidance system for handheld optical imaging devices

    NASA Astrophysics Data System (ADS)

    Azar, Fred S.; de Roquemaurel, Benoit; Cerussi, Albert; Hajjioui, Nassim; Li, Ang; Tromberg, Bruce J.; Sauer, Frank

    2007-03-01

    We have developed a novel 3D visualization and guidance system for handheld optical imaging devices. In this paper, the system is applied to measurements of breast/cancerous tissue optical properties using a handheld diffuse optical spectroscopy (DOS) instrument. The combined guidance system/DOS instrument becomes particularly useful for monitoring neoadjuvant chemotherapy in breast cancer patients and for longitudinal studies where measurement reproducibility is critical. The system uses relatively inexpensive hardware components and comprises a 6 degrees-of-freedom (DOF) magnetic tracking device including a DC field generator, three sensors, and a PCI card running on a PC workstation. A custom-built virtual environment combined with a well-defined workflow provides the means for image-guided measurements, improved longitudinal studies of breast optical properties, 3D reconstruction of optical properties within the anatomical map, and serial data registration. The DOS instrument characterizes tissue function such as water, lipid, and total hemoglobin concentration. The patient lies on her back at a 45-degree angle. Each spectral measurement requires consistent contact with the skin and lasts about 5-10 seconds, so a limited number of positions may be studied. In a reference measurement session, the physician acquires surface points on the breast. A Delaunay-based triangulation algorithm is used to build the virtual breast surface from the acquired points. The 3D locations of all DOS measurements are recorded. All subsequently acquired surfaces are automatically registered to the reference surface, thus allowing measurement reproducibility through image guidance using the reference measurements.

  11. An innovative system for 3D clinical photography in the resource-limited settings

    PubMed Central

    2014-01-01

Background Kaposi’s sarcoma (KS) is the most frequently occurring cancer in Mozambique among men and the second most frequently occurring cancer among women. Effective therapeutic treatments for KS are poorly understood in this area. There is an unmet need to develop a simple but accurate tool for improved monitoring and diagnosis in resource-limited settings. Standardized clinical photographs have been considered an essential part of the evaluation. Methods When a therapeutic response is achieved, nodular KS often exhibits a reduction in thickness without a change in the base area of the lesion. To evaluate the vertical dimension along with other characteristics of a KS lesion, we have created an innovative imaging system with a consumer light-field camera attached to a miniature “photography studio” adaptor. The image file can be further processed by computational methods for quantification. Results With this novel imaging system, each high-quality 3D image was consistently obtained with a single camera shot at the bedside by minimally trained personnel. After computational processing, all-focused photos and measurable 3D parameters were obtained. More than 80 KS image sets were processed in a semi-automated fashion. Conclusions In this proof-of-concept study, the feasibility of using a simple, low-cost, and user-friendly system has been established for a future clinical study to monitor KS therapeutic response. This 3D imaging system can also be applied to obtain standardized clinical photographs for other diseases. PMID:24929434
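
    The abstract does not detail the computational processing, but the kind of measurable 3D parameter it mentions can be illustrated with a small sketch: given a height map (assumed here to be derived from the light-field depth output) and a lesion mask, estimate the nodule's rise above a plane fitted to the surrounding skin. The function name, data, and the convention that larger values mean a higher surface are assumptions.

```python
# Illustrative only: estimate nodule thickness above the surrounding skin
# from a height map and a lesion mask. Data are synthetic.
import numpy as np

def lesion_thickness(height_map, lesion_mask):
    """Fit a plane to non-lesion skin pixels, then measure how far lesion
    pixels rise above that plane (max and mean, in the map's units)."""
    yy, xx = np.nonzero(~lesion_mask)
    A = np.column_stack([xx, yy, np.ones_like(xx)])
    coeffs, *_ = np.linalg.lstsq(A, height_map[~lesion_mask], rcond=None)

    ly, lx = np.nonzero(lesion_mask)
    skin_plane = coeffs[0] * lx + coeffs[1] * ly + coeffs[2]
    rise = height_map[lesion_mask] - skin_plane
    return float(rise.max()), float(rise.clip(min=0).mean())

# synthetic example: flat skin with a 2 mm bump
height = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
bump = ((xx - 32) ** 2 + (yy - 32) ** 2) < 100
height[bump] += 2.0
print(lesion_thickness(height, bump))   # ~ (2.0, 2.0)
```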

  12. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turnaround time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionality. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
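
    A minimal sketch of the block-volume idea follows: split a 3D volume into sub-blocks and process them in parallel across cores. This is not the platform's API; the block size, the per-block operation, and the static scheduling are placeholders.

```python
# Sketch: partition a 3D volume into blocks and process them in parallel.
import numpy as np
from multiprocessing import Pool

def process_block(block):
    # stand-in for a real per-block operation (e.g., thresholding/cleansing)
    return (block > block.mean()).astype(np.uint8)

def run_parallel(volume, block_depth=32, workers=4):
    # slice the volume into blocks along the first axis
    blocks = [volume[z:z + block_depth] for z in range(0, volume.shape[0], block_depth)]
    with Pool(workers) as pool:
        results = pool.map(process_block, blocks)   # simple static scheduling
    return np.concatenate(results, axis=0)

if __name__ == "__main__":
    vol = np.random.default_rng(0).random((128, 256, 256)).astype(np.float32)
    out = run_parallel(vol)
    print(out.shape, out.dtype)
```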

  13. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects. PMID:19505502
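
    The final reconstruction stage, turning matched 2D skeleton points from two calibrated cameras into 3D points, can be sketched with standard linear (DLT) triangulation, as below. The camera matrices and the point are synthetic; the segmentation and skeletonization stages are not reproduced.

```python
# Sketch: linear (DLT) triangulation of a point seen by two calibrated cameras.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from pixel x1 (camera 1) and x2 (camera 2)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                         # homogeneous solution (smallest singular value)
    return X[:3] / X[3]

# toy calibrated cameras: identical intrinsics, second camera shifted along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.05, -0.02, 1.5, 1.0])         # a point on the arm skeleton
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))                 # ~ [0.05, -0.02, 1.5]
```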

  14. Development and application of 3-D foot-shape measurement system under different loads

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-03-01

A 3-D foot-shape measurement system for different load conditions, based on the laser-line-scanning principle, was designed and a model of the measurement system was developed. The system achieves 3-D foot-shape measurements without blind areas under different loads, together with automatic extraction of foot parameters. A global calibration method for the CCD cameras is presented that uses a one-axis motion unit in the measurement system and specialized calibration kits. Errors caused by the nonlinearity of the CCD cameras and other devices, and by the installation of the one-axis motion platform, the laser plane, and the toughened glass plane, are eliminated in calibration using a nonlinear coordinate mapping function and the Powell optimization method. Foot measurements under different loads were conducted for 170 participants; statistical foot-parameter results for male and female participants under the no-load condition are presented, along with changes of foot parameters under half-body-weight, full-body-weight, and over-body-weight conditions relative to the no-load condition. 3-D foot-shape measurement under different loads makes custom shoe-making feasible and shows great promise for shoe design, foot orthopaedic treatment, shoe-size standardization, and the establishment of foot databases for consumers and athletes.
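
    The calibration idea, fitting a nonlinear coordinate mapping by Powell optimization, can be sketched with a toy quadratic sensor-to-world mapping as below; the mapping form and data are illustrative assumptions, not the paper's actual model.

```python
# Sketch: fit a nonlinear (quadratic) sensor-to-world coordinate mapping by
# minimizing residuals over calibration targets with the Powell method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
sensor = rng.uniform(0, 640, size=(50, 2))                 # observed pixel coords
true = np.array([0.5, 0.001, 0.3, -0.0005, 10.0, 5.0])     # hidden "true" mapping

def mapping(params, s):
    ax, bx, ay, by, cx, cy = params
    x = ax * s[:, 0] + bx * s[:, 0] ** 2 + cx
    y = ay * s[:, 1] + by * s[:, 1] ** 2 + cy
    return np.column_stack([x, y])

world = mapping(true, sensor) + rng.normal(0, 0.05, size=(50, 2))  # noisy targets

def residual(params):
    return np.sum((mapping(params, sensor) - world) ** 2)

fit = minimize(residual, x0=np.zeros(6), method="Powell")
print("recovered mapping parameters:", np.round(fit.x, 4))
```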

  15. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

A 3-D real-time measurement system for seam outlines based on Moiré projection is proposed and designed. The system is composed of an LD, a grating, a CCD, a video A/D converter, an FPGA, a DSP, and an output interface. The principle and hardware makeup of the high-speed, real-time image processing circuit, based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), are introduced. The noise generation mechanism under poor welding field conditions is analyzed for Moiré stripes projected onto the welding workpiece surface. A median filter is adopted to smooth the acquired laser image of the seam, and measurement results for the 3-D outline image of the weld groove are then provided.
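
    The median-filtering step can be illustrated directly: the sketch below applies a 3x3 median filter to a synthetic noisy laser-stripe image before a crude per-column stripe position is extracted. The image content and filter size are assumptions.

```python
# Sketch: median-filter a noisy laser-stripe image, then locate the stripe.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
img = np.zeros((120, 160), dtype=np.float32)
img[58:62, :] = 1.0                                  # bright horizontal laser stripe
salt = rng.random(img.shape) < 0.05                  # impulsive noise (spatter, arcs)
img[salt] = 1.0

smoothed = median_filter(img, size=3)                # 3x3 median removes impulses
stripe_rows = smoothed.argmax(axis=0)                # crude per-column stripe position
print("estimated stripe row (median over columns):", int(np.median(stripe_rows)))
```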

  16. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

Efficient, accurate data collection from imagery is the key to economical generation of useful geospatial products. Incremental developments in traditional geospatial data collection and the arrival of new image data sources have led to new software packages being created and existing ones adjusted so that such data can be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction, and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization, and analysis of 3-D features.

  17. Implementation of parallel matrix decomposition for NIKE3D on the KSR1 system

    SciTech Connect

    Su, Philip S.; Fulton, R.E.; Zacharia, T.

    1995-06-01

New massively parallel computer architectures have revolutionized the design of computer algorithms and promise to have significant influence on algorithms for engineering computations. Realistic engineering problems using finite element analysis typically impose very large computational requirements. Parallel supercomputers that have the potential for significantly increasing calculation speeds can meet these requirements. This report explores the potential of the parallel Cholesky (UᵀDU) matrix decomposition algorithm for NIKE3D through actual computations. Examples of two- and three-dimensional nonlinear dynamic finite element problems are presented on the 64-processor Kendall Square Research (KSR1) multiprocessor system at Oak Ridge National Laboratory. The numerical results indicate that the parallel Cholesky (UᵀDU) matrix decomposition algorithm is attractive for NIKE3D in multiprocessor environments.
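
    For reference, a plain sequential version of the UᵀDU (equivalently LDLᵀ) factorization that the report parallelizes can be written as below; in the parallel algorithm it is the column updates in the inner loops that get distributed across processors. This is a NumPy illustration, not the NIKE3D/KSR1 implementation.

```python
# Sequential reference sketch of the U^T D U factorization.
import numpy as np

def utdu(A):
    """Factor a symmetric positive-definite A as U.T @ D @ U,
    with U unit upper-triangular and D diagonal."""
    n = A.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.sum(d[:j] * U[:j, j] ** 2)
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - np.sum(d[:j] * U[:j, j] * U[:j, i])) / d[j]
    return U, np.diag(d)

rng = np.random.default_rng(0)
M = rng.random((6, 6))
A = M @ M.T + 6 * np.eye(6)              # symmetric positive-definite test matrix
U, D = utdu(A)
print(np.allclose(U.T @ D @ U, A))       # True
```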

  18. An efficient solid modeling system based on a hand-held 3D laser scan device

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

Hand-held 3D laser scanners on the market are appealing because they are portable and convenient to use, but they are expensive, and building such a system from inexpensive devices using the same principles as the commercial systems is not feasible. In this paper, a simple hand-held 3D laser scanner is developed from inexpensive devices, based on a volume reconstruction method. Unlike conventional laser scanners, which collect a point cloud of the object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The design details are presented and illustrated with the example of a complex-shaped object.
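
    The volume-model idea can be sketched roughly: represent the object as planar cross-section polygons at several heights and integrate their areas. The section network and the scanner itself are not modeled here; the cone below is only a toy example.

```python
# Sketch: estimate object volume from stacked planar cross-section polygons.
import numpy as np

def polygon_area(poly):
    """Shoelace area of a closed 2D polygon given as (n, 2) vertices."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def volume_from_sections(heights, sections):
    areas = np.array([polygon_area(p) for p in sections])
    return np.trapz(areas, heights)          # trapezoidal integration along z

# toy object: a cone of radius 1 and height 1, sampled as circular sections
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
heights = np.linspace(0, 1, 21)
sections = [np.column_stack([(1 - h) * np.cos(theta), (1 - h) * np.sin(theta)])
            for h in heights]
print(volume_from_sections(heights, sections))   # ~ pi/3 ≈ 1.047
```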

  19. Development of hybrid 3-D hydrological modeling for the NCAR Community Earth System Model (CESM)

    SciTech Connect

    Zeng, Xubin; Troch, Peter; Pelletier, Jon; Niu, Guo-Yue; Gochis, David

    2015-11-15

This is the final report of our four-year (three-year plus a one-year no-cost extension) collaborative project between the University of Arizona (UA) and the National Center for Atmospheric Research (NCAR). The overall objective of the project is to develop and evaluate the first hybrid 3-D hydrological model with a horizontal grid spacing of 1 km for the NCAR Community Earth System Model (CESM).

  20. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross-matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding, and biometrics. The key component of the algorithm is the conversion of biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent, and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical characteristics of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance similar to that of the original 3D face templates. A high security level
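
    The abstract does not give the exact binarization scheme, but one common way to turn real-valued biometric features into binary vectors, thresholding each feature at its population median so that the bits come out roughly balanced and uniform, can be sketched as follows on synthetic data.

```python
# Illustrative sketch: binarize a real-valued feature vector by thresholding
# each dimension at the population median (balanced, roughly uniform bits).
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(size=(500, 64))            # enrollment feature vectors (synthetic)
thresholds = np.median(population, axis=0)         # per-dimension thresholds

def binarize(features, thresholds):
    return (features > thresholds).astype(np.uint8)

probe = rng.normal(size=64)
template_bits = binarize(probe, thresholds)
print(template_bits[:16], "bit balance:", template_bits.mean())
```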