Science.gov

Sample records for 3d display based

  1. Future of photorefractive based holographic 3D display

    NASA Astrophysics Data System (ADS)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

    The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008 [1]. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full-color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video-rate operation.

  2. Special subpixel arrangement-based 3D display with high horizontal resolution.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wu, Fei

    2014-11-01

    A special subpixel arrangement-based 3D display is proposed. This display consists of a 2D display panel and a parallax barrier. On the 2D display panel, the subpixels have a special arrangement, so they redefine the formation of color pixels. This subpixel arrangement triples the horizontal resolution of a conventional 2D display panel. Therefore, when these pixels are modulated by the parallax barrier, the resulting 3D images also have triple the horizontal resolution. A prototype of this display is developed. Experimental results show that this display with triple horizontal resolution produces a better display effect than the conventional one. PMID:25402897
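
    The record above describes subpixel-level view multiplexing behind a parallax barrier. The following Python sketch illustrates only the generic idea of assigning individual R/G/B subpixel columns to different views; the paper's special subpixel arrangement is not reproduced here, and the function name and cycling rule are illustrative assumptions.

        import numpy as np

        def interleave_subpixels(views):
            """Assign each R/G/B subpixel column to one view image.

            views: list of H x W x 3 arrays, one per viewing zone (assumed layout).
            """
            n = len(views)
            h, w, _ = views[0].shape
            panel = np.zeros((h, w, 3), dtype=views[0].dtype)
            for col in range(w):
                for c in range(3):                    # R, G, B subpixel columns
                    v = (3 * col + c) % n             # cycle the views across subpixels
                    panel[:, col, c] = views[v][:, col, c]
            return panel

        left = np.zeros((9, 12, 3), dtype=np.uint8)
        right = np.full((9, 12, 3), 255, dtype=np.uint8)
        print(interleave_subpixels([left, right]).shape)   # (9, 12, 3)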

  3. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, aiming to produce S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images. PMID:27410090
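
    As a rough illustration of the global remapping step described above (a minimal sketch, not the authors' algorithm; the comfort limits and function names are assumptions), the depth range can be shifted to an adjusted zero-disparity plane and rescaled into a comfort zone:

        import numpy as np

        def remap_depth(depth, zdp, comfort_near=-0.2, comfort_far=0.3):
            """depth: 2-D array of scene depths; zdp: depth value mapped to the screen plane."""
            d = depth - zdp                                  # shift so the ZDP sits at zero disparity
            span = max(abs(d.min()), abs(d.max())) or 1.0
            scale = min(abs(comfort_near), comfort_far) / span
            return np.clip(d * scale, comfort_near, comfort_far)

        depth = np.random.rand(4, 4) * 10.0
        print(remap_depth(depth, zdp=5.0))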

  4. Front and rear projection autostereoscopic 3D displays based on lenticular sheets

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Zang, Shang-Fei; Qi, Lin

    2015-03-01

    A front projection autostereoscopic display is proposed. The display is composed of eight projectors and a 3D-image-guided screen, which consists of a lenticular sheet and a retro-reflective diffusion screen. Based on optical multiplexing and de-multiplexing, the optical functions of the 3D-image-guided screen are parallax image interlacing and view separation, which makes it capable of reconstructing 3D images without quality degradation from the front direction. The operating principle, optical design calculation equations and correction method for the parallax images are given. A prototype of the front projection autostereoscopic display is developed, which enhances the brightness and 3D perception and improves space efficiency. The performance of this prototype is evaluated by measuring the luminance and crosstalk distributions along the horizontal direction at the optimum viewing distance. We also propose a rear projection autostereoscopic display. The display consists of eight projectors, a projection screen, and two lenticular sheets. The operating principle and calculation equations are described in detail, and the parallax images are corrected by means of homography. A prototype of the rear projection autostereoscopic display is developed. The normalized luminance distributions of the viewing zones from the measurement are given. Results agree well with the designed values. The prototype presents high-resolution and high-brightness 3D images. The research has potential applications in commercial entertainment and movies that require realistic 3D perception.
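
    The homography-based correction of parallax images mentioned above can be illustrated with a generic warping routine; estimating the 3x3 homography H from calibration correspondences is omitted, and the nearest-neighbour warping below is an assumed, simplified stand-in:

        import numpy as np

        def warp_homography(img, H):
            """Nearest-neighbour inverse warping of a grayscale image by homography H."""
            h, w = img.shape
            out = np.zeros_like(img)
            Hinv = np.linalg.inv(H)
            ys, xs = np.mgrid[0:h, 0:w]
            pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
            src = Hinv @ pts                                  # map output pixels back to the source
            sx = np.round(src[0] / src[2]).astype(int)
            sy = np.round(src[1] / src[2]).astype(int)
            ok = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)  # keep in-bounds samples only
            out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
            return out

        H = np.array([[1.0, 0.05, 3.0], [0.0, 1.0, -2.0], [0.0, 0.0005, 1.0]])
        print(warp_homography(np.random.rand(120, 160), H).shape)   # (120, 160)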

  5. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    Several studies on gaze-tracking techniques using monocular or stereo cameras have been reported. The most commonly used gaze-estimation techniques are based on PCCR (Pupil Center & Corneal Reflection). These techniques track gaze on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in a 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in a 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glasses-free stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach for the estimation of gaze direction and gaze depth and show experimental results.
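
    A toy geometric example of estimating gaze depth from the vergence of both eyes (an illustrative assumption, not the authors' estimator): if both gaze rays lie in the horizontal plane and the target sits on the median plane, the fixation distance follows from the interpupillary distance and the vergence angle.

        import math

        def gaze_depth(ipd_m, left_angle_rad, right_angle_rad):
            """Angles are measured inward from straight ahead for each eye (assumed convention)."""
            vergence = left_angle_rad + right_angle_rad
            return (ipd_m / 2.0) / math.tan(vergence / 2.0)

        print(round(gaze_depth(0.065, math.radians(1.5), math.radians(1.5)), 3))  # ~1.241 m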

  6. Analysis of optical characteristics of photopolymer-based VHOE for multiview autostereoscopic 3D display system

    NASA Astrophysics Data System (ADS)

    Cho, Byung-Chul; Gu, Jung-Sik; Kim, Eun-Soo

    2002-06-01

    Generally, an autostereoscopic display presents a 3D image to a viewer without the need for glasses or other encumbering viewing aids. In this paper, we propose a new autostereoscopic 3D video display system which allows viewers to observe 3D images within the same range of viewing angles. In this system, a photopolymer-based VHOE made from volume holographic recording materials is used to project multiview images into spatially different directions sequentially in time. Since this technique is based on a VHOE made from a photorefractive photopolymer instead of the conventional parallax barrier or lenticular sheet, the resolution and number of parallax views of the proposed VHOE-based 3D display system are limited by the photopolymer's physical and optical properties. To make the photopolymer applicable to a multiview autostereoscopic 3D display system, it must achieve properties such as low distortion of the diffracted light beam, high diffraction efficiency, and uniform intensities of the light diffracted from the fully recorded diffraction gratings. In this paper, the optical and physical characteristics of the DuPont HRF photopolymer-based VHOE, such as distortion of the displayed image, uniformity of the diffracted light intensity, photosensitivity and diffraction efficiency, are measured and discussed.

  7. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

    We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, the standards should be based not only on the mechanisms of 3D displays but also on the human factors of stereopsis. However, there is no common understanding of what a 3D display should be, and this situation makes developing the standards difficult. In this paper, to understand the mechanism and human factors, we focus on the double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, we consider that whether the double image is unwanted or not depends on the situation and that some double images are allowable. We classify double images into unwanted and allowable ones in terms of the display mechanism and the visual ergonomics of stereopsis. The issues associated with the double image are closely related to performance characteristics of the autostereoscopic display. We also propose performance characteristics, measurement and analysis methods to represent interocular crosstalk and motion parallax.

  8. Four-view stereoscopic imaging and display system for web-based 3D image communication

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server computer, a graphics card with four outputs, a projection-type 4-view 3D display system and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth and number of views. Experimental results show that the proposed system can display 4-view VGA images with 16-bit color at a frame rate of 15 fps in real time. The image resolution, color depth, frame rate and number of views are mutually interrelated and can be easily controlled in the proposed system by the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical applications of web-based 3D image communication.

  9. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing a computer-generated hologram (CGH) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is extracted over double-circle and four-circle regions to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform using conjugate-symmetric extension. The hologram contains the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference beam from a laser source, the amplitude and phase information included in the CGH is reconstructed through diffraction of the light modulated by the LCD.
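
    The conjugate-symmetric extension mentioned above can be illustrated with NumPy's real inverse FFT, which implicitly enforces Hermitian symmetry so the resulting hologram is real-valued; assembling the spectrum from projection images is omitted, and the array sizes are arbitrary assumptions:

        import numpy as np

        # half_spectrum holds the non-negative-frequency half of the object spectrum;
        # irfft2 applies the conjugate-symmetric extension and returns a real array.
        h, w = 64, 64
        half_spectrum = np.random.randn(h, w // 2 + 1) + 1j * np.random.randn(h, w // 2 + 1)
        cgh = np.fft.irfft2(half_spectrum, s=(h, w))   # real-valued H x W hologram
        print(cgh.dtype, cgh.shape)                    # float64 (64, 64)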

  10. Web-based intermediate view reconstruction for multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Kyu; Lee, Won-Kyung; Ko, Jung-Hwan; Bae, Kyung-hoon; Kim, Eun-Soo

    2005-08-01

    In this paper, web-based intermediate view reconstruction for a multiview stereoscopic 3D display system is proposed, using stereo cameras and disparity maps, an Intel Xeon server computer and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate and number of views. In the proposed system, stereo images are initially captured with stereo digital cameras and processed on the Intel Xeon server. The captured two-view image data is then compressed by extracting the disparity data between the two views and transmitted to a client system through the network, where the received stereo data is displayed on a 16-view stereoscopic 3D display using intermediate view reconstruction. The program controlling the overall system is developed with the Microsoft DirectShow SDK. Experimental results show that the proposed system can display 16-view 3D images with 8-bit grayscale at a frame rate of 15 fps in real time.
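
    A much-simplified sketch of disparity-based intermediate view reconstruction (an assumed forward-warping approach, not necessarily the paper's pipeline; hole filling and occlusion handling are omitted, and the disparity sign convention is an assumption):

        import numpy as np

        def intermediate_view(left, disparity, alpha=0.5):
            """left: H x W image; disparity: H x W pixel offsets; alpha in [0, 1]."""
            h, w = left.shape[:2]
            out = np.zeros_like(left)
            for y in range(h):
                for x in range(w):
                    xi = int(round(x - alpha * disparity[y, x]))   # shift toward the right view
                    if 0 <= xi < w:
                        out[y, xi] = left[y, x]
            return out

        left = np.tile(np.arange(16.0), (8, 1))
        disp = np.full((8, 16), 4.0)
        print(intermediate_view(left, disp, alpha=0.5).shape)   # (8, 16)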

  11. Assessment of eye fatigue caused by 3D displays based on multimodal measurements.

    PubMed

    Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung

    2014-01-01

    With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. Previous studies have been conducted on eye fatigue related to 3D display use; however, most of these have employed a limited number of measurement modalities, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Compared to previous works, our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations between the EEG signal, eye BR, FT, and the SE score based on the t-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively. PMID:25192315
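
    The statistical analyses named above (paired t-test, correlation matrix, effect size) can be sketched as follows with NumPy and SciPy; all values and the choice of Cohen's d as the effect size are hypothetical:

        import numpy as np
        from scipy import stats

        # Hypothetical before/after values for one modality (e.g., blink rate).
        before = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
        after  = np.array([18.0, 20.0, 16.0, 19.0, 17.0])

        t, p = stats.ttest_rel(before, after)            # paired t-test
        diff = after - before
        cohens_d = diff.mean() / diff.std(ddof=1)        # effect size of the change

        # Correlation matrix across modalities (columns stand in for EEG, BR, FT, SE).
        rng = np.random.default_rng(0)
        modalities = np.column_stack([after, before, after + rng.normal(size=5), before * 2])
        corr = np.corrcoef(modalities, rowvar=False)

        print(round(float(t), 3), round(float(p), 4), round(float(cohens_d), 3))
        print(corr.shape)                                # (4, 4)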

  13. Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Wilhelm, Dirk; Reiser, Silvano; Kohn, Nils; Witte, Michael; Leiner, Ulrich; Mühlbach, Lothar; Ruschin, Detlef; Reiner, Wolfgang; Feussner, Hubertus

    2014-03-01

    Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations such as eye strain, diplopia and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study with 48 test subjects was conducted to investigate whether surgeons can benefit from using the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display and a mirror-based, theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert surgeon group with extensive experience in laparoscopic procedures. Each subject had to perform a comparable laparoscopic suturing task. Multiple performance parameters, such as task completion time and stitching precision, were measured and compared. Electromagnetic tracking provided information on instrument path length, movement velocity and economy. The NASA task load index was used to assess mental workload. Subjective ratings were added to assess usability, comfort and image quality of each display. Almost all performance parameters were superior for the glasses-based 3D display compared to the 2D and the autostereoscopic displays, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence of additional potential for 3D visualization systems with higher resolution and motion-parallax presentation.

  14. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-01

    Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from 360 degrees around with correct occlusion effects. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate display performance. A prototype was built, and real 3D color animated images have been presented vividly. The experimental results verify the feasibility of this method. PMID:23669981

  15. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We report our recent developments in the DFD (depth-fused 3D) display and the arc 3D display, both of which have smooth motion parallax. First, the fatigue-free DFD display, composed of only two layered displays with a gap, provides continuously perceived depth by changing the luminance ratio between the two images. Two new methods, called the "edge-based DFD display" and the "deep DFD display", have been proposed in order to overcome the two severe limitations of viewing angle and perceived depth. The edge-based DFD display, which layers an original 2D image and its edge part with a gap, can expand the DFD viewing-angle limit in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Second, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. A curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, an image floating inside a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. The directional scattering can be switched on/off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.
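
    The DFD principle of setting perceived depth by the luminance ratio between the two layers can be sketched with a simple linear split (an assumed model for illustration; the function name and the linearity are assumptions):

        import numpy as np

        def dfd_layers(image, depth):
            """image: H x W luminance; depth: H x W in [0, 1], 0 = front layer, 1 = rear layer."""
            front = image * (1.0 - depth)
            rear = image * depth
            return front, rear            # front + rear reproduces the original luminance

        img = np.full((2, 2), 100.0)
        front, rear = dfd_layers(img, np.array([[0.0, 0.25], [0.5, 1.0]]))
        print(front, rear, sep="\n")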

  16. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on the optically addressed bi-stable display, which does not need any power to hold an image after it has been uploaded. Recently, demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve highly complex image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from the different domains of the image in different ways. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a suitable ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied in many areas, e.g., 3D bi-stable displays, security elements, etc. PMID:25361316

  17. Integral 3D display using multiple LCDs

    NASA Astrophysics Data System (ADS)

    Okaichi, Naoto; Miura, Masato; Arai, Jun; Mishina, Tomoyuki

    2015-03-01

    The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than is possible with one LCD. A prototype of the display device was constructed using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD displays used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display integral 3D images of higher quality by increasing the number of LCDs. This integral 3D display structure also makes the whole device thinner than a projector-based display system. It is therefore expected to be applied to home television in the future.

  18. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. The system is compact, light, modular and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In this way a spatial object can be displayed occupying a physical space with height, width and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use the appropriate screen for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are being investigated. The first is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook computer. The developed software provides a graphical user interface with several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast, high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer-aided design and science, as well as entertainment.

  19. FELIX: a volumetric 3D laser display

    NASA Astrophysics Data System (ADS)

    Bahr, Detlef; Langhans, Knut; Gerken, Martin; Vogt, Carsten; Bezecny, Daniel; Homann, Dennis

    1996-03-01

    In this paper, an innovative approach to true 3D image presentation in a space-filling, volumetric laser display is described. The introduced prototype system is based on a moving target screen that sweeps the display volume. The net result is the optical equivalent of a 3D array of image points illuminated to form a model of the object, which occupies a physical space. Wireframe graphics are presented within the display volume, which a group of people can walk around and examine simultaneously from nearly any orientation and without any visual aids. In addition to the detailed vector scanning mode, a raster-scanned system and a combination of both techniques are under development. The volumetric 3D laser display technology for true reproduction of spatial images can tremendously improve the viewer's ability to interpret data and to reliably determine distance, shape and orientation. Possible applications for this development range from air traffic control, where moving blips of light represent individual aircraft in a true-to-scale projected airspace of an airport, to various medical applications (e.g. electrocardiography, computed tomography), to entertainment and education visualization, as well as imaging in the field of engineering and computer-aided design.

  20. Novel volumetric 3D display based on point light source optical reconstruction using multi focal lens array

    NASA Astrophysics Data System (ADS)

    Lee, Jin su; Lee, Mu young; Kim, Jun oh; Kim, Cheol joong; Won, Yong Hyub

    2015-03-01

    Generally, volumetric 3D displays produce volume-filling three-dimensional images. This paper discusses a volumetric 3D display based on the optical reconstruction of periodic point light sources (PLSs) using a multi-focal lens array (MFLA). The voxels of discrete 3D images are formed in the air via the point light sources produced by the multi-focal lens array. The system consists of a parallel beam, a spatial light modulator (SLM), a lens array, and a polarizing filter. The multi-focal lens array is made by controlling UV-adhesive polymer droplets with a dispensing machine. The MFLA consists of a 20x20 circular lens array, and each lens aperture of the MFLA is about 300 um on average. The polarizing filter is placed after the SLM and the MFLA to set the SLM in phase-mostly mode. According to the point spread function, the PLSs of the system are located at the focal length of each lens of the MFLA. The display can also provide motion parallax and relatively high resolution, but it has a limited viewing angle and crosstalk due to the properties of each lens. In our experiment, we present the letters 'C', 'O' and 'DE' and a ball's surface at different depth locations. The images could be seen clearly as the CCD camera was moved along the transverse axis of the display system. From our results, we expect that varifocal lenses such as EWOD and LC lenses can be applied to real-time volumetric 3D display systems.

  1. Glasses-free 3D display based on micro-nano-approach and system

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Ye, Yan; Shen, Su; Pu, Donglin; Chen, Linsen

    2014-11-01

    Micro-nano optics and the digital dot matrix hologram (DDMH) technique have been combined to encode and fabricate glasses-free 3D images. Two kinds of true-color 3D DDMH have been designed: one design reduces the fabrication complexity and the other enlarges the viewing angle of the 3D DDMH. Chromatic aberration has been corrected using the rainbow hologram technique. A holographic printing system combining interference and projection lithography techniques has been demonstrated. Fresnel lenses and large-viewing-angle 3D DDMHs have been produced, and excellent color performance of the 3D images has been realized.

  2. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been carried out to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in the control panels to increase the information transfer rate and reduce false positives when controlling 3D objects. Involuntarily motivated selective attention driven by the affective mechanism can enhance the steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among the representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, it takes users only a few minutes to learn to control the BCI system, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
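
    A generic, much-simplified SSVEP classification step (assumed for illustration; real systems typically use more robust detectors such as canonical correlation analysis): pick the control command whose stimulation frequency carries the most EEG power.

        import numpy as np

        def classify_ssvep(eeg, fs, stim_freqs):
            """eeg: 1-D signal from an occipital electrode; fs: sampling rate in Hz."""
            spectrum = np.abs(np.fft.rfft(eeg)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
            return stim_freqs[int(np.argmax(powers))]

        fs = 250
        t = np.arange(0, 4, 1 / fs)
        eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)   # synthetic 12 Hz response
        print(classify_ssvep(eeg, fs, stim_freqs=[8.57, 10.0, 12.0, 15.0]))  # 12.0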

  3. Spectroradiometric characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements have been made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal direction of the plane of the display. In the study, two different models of autostereoscopic 3D displays of different sizes and resolutions were used, making measurements with a spectroradiometer (PhotoResearch PR-670 SpectraScan). From the measurements, goniometric results were recorded for luminance contrast, and the fundamental hypotheses for the characterization of the displays were evaluated: independence of the RGB channels and their constancy. The results show that the display with the lower angular variability in the contrast-ratio value and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with measurement angle. For both displays, when the evaluated parameters were taken into account, angular variability was consistently lower in the 2D mode than in the 3D mode.
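
    The additivity hypothesis evaluated above can be checked with a simple calculation (all luminance values below are made up): if the channels are independent, the white measurement should equal the sum of the separately measured R, G and B channels.

        # Hypothetical luminance measurements in cd/m^2.
        L_R, L_G, L_B = 62.0, 185.0, 21.0
        L_W = 259.0

        additivity_deviation = abs((L_R + L_G + L_B) - L_W) / L_W
        print(f"additivity deviation: {additivity_deviation:.1%}")   # ~3.5%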

  4. Colorful holographic display of 3D object based on scaled diffraction by using non-uniform fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Chang, Chenliang; Xia, Jun; Lei, Wei

    2015-03-01

    We propose a new method to calculate color computer-generated holograms of three-dimensional objects for holographic display. The three-dimensional object is composed of several planes tilted with respect to the hologram. The diffraction from each tilted plane to the hologram plane is calculated based on coordinate rotation in the Fourier spectral domain. We use the non-uniform fast Fourier transform (NUFFT) to calculate the non-uniformly sampled Fourier spectrum on the tilted plane after coordinate rotation. With the NUFFT, the diffraction calculation from a tilted plane to the hologram plane with variable sampling rates can be achieved, which overcomes the sampling restriction of the FFT in the conventional angular-spectrum-based method. The holograms of the red, green and blue components of the polygon-based object are calculated separately using our NUFFT-based method. The color hologram is then synthesized by placing the red, green and blue component holograms in sequence. The chromatic aberration caused by the wavelength difference can be removed effectively by restricting the sampling rate of the object in the calculation for each wavelength component. Computer simulations show the feasibility of our method for calculating color holograms of polygon-based objects. The 3D object can be displayed in color with adjustable size and no chromatic aberration in a holographic display system, which can be considered an important application in colorful holographic three-dimensional display.
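
    For context, the conventional angular-spectrum propagation between parallel planes that the abstract contrasts against can be sketched as follows; the paper's key steps (spectrum coordinate rotation for tilted planes and NUFFT resampling) are intentionally omitted, and all parameter values are arbitrary assumptions:

        import numpy as np

        def angular_spectrum(field, wavelength, pitch, z):
            """Propagate a sampled complex field by distance z (same sampling on both planes)."""
            n, m = field.shape
            fx = np.fft.fftfreq(m, d=pitch)
            fy = np.fft.fftfreq(n, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
            H = np.exp(1j * kz * z)                          # transfer function of free space
            return np.fft.ifft2(np.fft.fft2(field) * H)

        u0 = np.zeros((256, 256), dtype=complex)
        u0[120:136, 120:136] = 1.0                           # small square aperture
        u1 = angular_spectrum(u0, wavelength=532e-9, pitch=8e-6, z=0.05)
        print(abs(u1).max())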

  5. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as one class of advanced displays, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED backlight unit (BLU) since Samsung launched the world's first 3D TV based on PDP. However, the data scanning of the panel and the LC response characteristics of LCD TVs cause interference among frames (that is, crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk by LCD driving and backlight control of an FHD 3D LCD TV.

  6. Electrowetting-based adaptive vari-focal liquid lens array for 3D display

    NASA Astrophysics Data System (ADS)

    Won, Yong Hyub

    2014-10-01

    Electrowetting is a phenomenon in which the surface tension of a liquid can be controlled by an applied voltage. This paper introduces the fabrication of a liquid lens array based on the electrowetting phenomenon. The fabricated 23-by-23 lens array has lenses of 1 mm diameter with a 1.6 mm pitch between adjacent lenses. The dioptric power of each lens ranged from -24 to 27 when operated from 0 V to 50 V. The lens array chamber, fabricated by deep reactive-ion etching (DRIE), is coated with IZO, parylene C and tantalum oxide. To prevent water penetration and achieve a high dielectric constant, parylene C and tantalum oxide (ε = 23-25) are used, respectively. A hydrophobic surface coating, which enables contact angles in the range of 60 to 160 degrees, is applied to maximize the electrowetting effect and provide a wide range of dioptric power. Liquid is injected into each lens chamber in two different ways: the first is self water-oil dosing, which uses a cosolvent and the diffusion effect, while the second uses a micro-syringe and exploits the hydrophobic surface properties. To complete the fabrication of the lens array, underwater sealing was performed using a UV adhesive that does not dissolve in water. The transition time for changing from a concave to a convex lens was measured to be <33 ms (at an AC voltage frequency of 1 kHz). The liquid lens array was also tested for integral imaging to achieve more advanced depth information in 3D images.
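
    The voltage dependence of the contact angle that underlies such an electrowetting lens is commonly modeled by the Young-Lippmann relation (a standard textbook formula, not quoted from this paper), where θ_0 is the contact angle at zero voltage, ε_r and d are the relative permittivity and thickness of the dielectric stack (here parylene C plus tantalum oxide), and γ is the liquid-liquid interfacial tension:

        \cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0\,\varepsilon_r}{2\,\gamma\,d}\,V^2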

  7. Time-sequential autostereoscopic 3-D display with a novel directional backlight system based on volume-holographic optical elements.

    PubMed

    Hwang, Yong Seok; Bruder, Friedrich-Karl; Fäcke, Thomas; Kim, Seung-Cheol; Walze, Günther; Hagen, Rainer; Kim, Eun-Soo

    2014-04-21

    A novel directional backlight system based on volume-holographic optical elements (VHOEs) is demonstrated for time-sequential autostereoscopic three-dimensional (3-D) flat-panel displays. Here, VHOEs are employed to control the direction of light for a time-multiplexed display for each of the left and right views. These VHOEs are fabricated by recording interference patterns between collimated reference beams and diverging object beams for each of the left and right eyes in a volume holographic recording material. For this, self-developing photopolymer films (Bayfol® HX) were used, since they substantially simplify the manufacturing process of VHOEs. The directional lights are similar to the collimated reference beams used to record the VHOEs and create two diffracted beams similar to the object beams used for recording. These diffracted beams read the left and right images shown alternately on the LCD panel and form two converging viewing zones in front of the user's eyes, allowing the user to perceive the 3-D image. Theoretical predictions and experimental results are presented, and the performance of the developed prototype is shown. PMID:24787867

  8. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric 3D images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  9. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area is located along a 37-mile stretch of Assateague Island National Seashore (AINS) on the Eastern Shore, VA. DEM data sets from 1996 through 2000 for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and a four-year span (1996-2000), were created. The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms has been developed using ArcView Avenue. The system consists of five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate that further study and comparison of the complex morphological changes that occur naturally or are human-induced on barrier islands are required.

  10. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° range of views without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screens used in previous volumetric 3D displays.

  11. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to 3D human factors has been undertaken to quantitatively evaluate visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of previous research has analyzed each modality separately to measure user eye fatigue, which cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain quality scores for the EEG signals, eye BR, FT and SE; third, to combine the values of the four modalities, we obtain the optimal weights of the EEG signals, BR, FT and SE using a fuzzy system based on the quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than that of other conventional multimodal measurements. Moreover, the credibility of the variations of eye fatigue obtained with the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382
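
    The final fusion step described above reduces to a weighted sum of the four modality measurements; the sketch below substitutes normalized quality scores for the paper's fuzzy-derived weights, and all numbers are hypothetical:

        import numpy as np

        def fuse(values, quality_scores):
            """values / quality_scores: arrays over the modalities (EEG, BR, FT, SE)."""
            w = np.asarray(quality_scores, dtype=float)
            w /= w.sum()                       # normalize the weights
            return float(np.dot(w, values))    # quantitative eye-fatigue level

        delta = [0.7, 0.4, 0.2, 0.9]           # hypothetical before/after changes per modality
        quality = [0.8, 0.6, 0.5, 0.9]
        print(fuse(delta, quality))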

  12. Recent developments in stereoscopic and holographic 3D display technologies

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri

    2014-06-01

    Currently, there is increasing interest in the development of high-performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we review the attributes of the various 3D display technologies, including stereoscopic and holographic 3D, the human factors issues of stereoscopic 3D, the challenges in realizing holographic 3D displays, and the recent progress in these technologies.

  13. Transparent 3D display for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Hong, Jisoo

    2012-11-01

    Two types of transparent three-dimensional display systems applicable to augmented reality are demonstrated. One is a head-mounted-display-type implementation which applies the principle of a concave floating lens to virtual-mode integral imaging. This configuration has the advantage that the three-dimensional image can be displayed at a sufficiently far distance, resolving the accommodation conflict with the real-world scene. Incorporating a convex half mirror, which is partially transparent, instead of the concave floating lens makes it possible to implement a transparent three-dimensional display system. The other type is a projection-type implementation, which is more appropriate for general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the transparent display feature by imposing partial transparency on the array of concave mirrors used as the screen of the reflection-type integral imaging. Two configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce a concave half-mirror array, whereas the coherent one adopts a holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is in principle more advantageous than the head-mounted display, the present state of spatial light modulator technology still does not provide satisfactory visual quality of the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come to market in sequence.

  14. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image-plus-depth format which is suitable for rendering on the multiview autostereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context, the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality, image-aligned depth maps proved an excellent match with our 3D displays.
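
    The multiple-footprint idea can be sketched with generic SAD block matching over several window sizes followed by a robust combination of the candidates (an assumed simplification, not Philips' implementation; function names and the median selection rule are illustrative):

        import numpy as np

        def sad_disparity(left, right, x, y, win, max_d):
            """Best disparity at (x, y) for one window half-size; left, right: float grayscale arrays."""
            h, w = left.shape
            best, best_d = np.inf, 0
            for d in range(max_d + 1):
                if x - win - d < 0 or x + win >= w or y - win < 0 or y + win >= h:
                    continue
                patch_l = left[y - win:y + win + 1, x - win:x + win + 1]
                patch_r = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best:
                    best, best_d = cost, d
            return best_d

        def multi_footprint_disparity(left, right, x, y, footprints=(1, 2, 4), max_d=16):
            cands = [sad_disparity(left, right, x, y, win, max_d) for win in footprints]
            return int(np.median(cands))      # simple robust selection among candidates

        L = np.random.rand(32, 48)
        R = np.roll(L, -3, axis=1)             # synthetic 3-pixel disparity
        print(multi_footprint_disparity(L, R, x=24, y=16))   # 3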

  15. 3D optical see-through head-mounted display based augmented reality system and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenliang; Weng, Dongdong; Liu, Yue; Xiang, Li

    2015-07-01

    The combination of health and entertainment has become possible due to the development of wearable augmented reality equipment and corresponding application software. In this paper, we implemented a fast calibration extended from SPAAM for an optical see-through head-mounted display (OSTHMD) made in our lab. During the calibration, tracking and recognition techniques based on natural targets were used, and the spatially corresponding points were set in dispersed and well-distributed positions. We evaluated the precision of this calibration for view angles ranging from 0 to 70 degrees. Relying on these results, we calculated the position of the human eye relative to the world coordinate system and rendered 3D objects of arbitrary complexity in real time on the OSTHMD, accurately matched to the real world. Finally, we report the degree of user satisfaction with our device for entertainment combined with the prevention of cervical vertebra diseases, based on user feedback.

  16. 3-D TV and display using multiview

    NASA Astrophysics Data System (ADS)

    Son, Jung-Young; Kim, Shin-Hwan; Park, Min-Chul; Kim, Sung-Kyu

    2008-04-01

    Current multiview 3-dimensional imaging systems are mostly based on a multiview image set. Depending on the method of presenting and arranging the image set on a display panel or screen, the systems are basically classified into contact and projection types. The contact type is further classified into MV (multiview), IP (integral photography), multiple image, FLA (focused light array) and tracking types. The depth cues provided by these types are both binocular and motion parallax. The differences between methods of the same type can only be identified by the composition of the images projected to the viewer's eyes at the viewing regions.

  17. A Desktop Computer Based Workstation for Display and Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Erickson, Bradley J.; Robb, Richard A.

    1987-01-01

    While great advances have been made in developing new and better ways to produce medical images, the technology to efficiently display and analyze them has lagged. This paper describes design considerations and the development of a workstation based on an IBM PC/AT for the analysis of three- and four-dimensional medical image data.

  18. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  19. 3D touchable holographic light-field display.

    PubMed

    Yamaguchi, Masahiro; Higashida, Ryo

    2016-01-20

    We propose a new type of 3D user interface: interaction with a light field reproduced by a 3D display. The 3D display used in this work reproduces a 3D light field, and a real image can be reproduced in midair between the display and the user. When a finger touches the real image, the light field from the display is scattered. 3D touch sensing is then realized by detecting the scattered light with a color camera. In the experiment, the light-field display is constructed with a holographic screen and a projector; thus, a preliminary implementation of 3D touch is demonstrated. PMID:26835952

  20. Projection type transparent 3D display using active screen

    NASA Astrophysics Data System (ADS)

    Kamoshita, Hiroki; Yendo, Tomohiro

    2015-05-01

    Much equipment for enjoying 3D images, such as movie theaters and televisions, has been developed, so 3D video is now widely known as a familiar imaging technology. Displays that present 3D images include eyewear-based, naked-eye and HMD-type systems, which have been used for different applications and locations. However, transparent 3D displays have not been widely studied. If a large transparent 3D display is realized, it would be useful for displaying 3D images overlaid on a real scene in applications such as road signs, shop windows, or screens in conference rooms. In a previous study, a transparent 3D display using a special transparent screen and a number of projectors was proposed. However, for smooth motion parallax, many projectors are required. In this paper, we propose a display that has transparency and a large display area by time-multiplexing projection images from one or a small number of projectors onto an active screen. The active screen is composed of a number of vertically long, small rotating mirrors. Stereoscopic viewing is realized by changing the projector image in synchronism with the scanning of the beam. The display is also transparent, because it is possible to see through it when the mirrors are perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.

  1. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  2. A low-resolution 3D holographic volumetric display

    NASA Astrophysics Data System (ADS)

    Khan, Javid; Underwood, Ian; Greenaway, Alan; Halonen, Mikko

    2010-05-01

    A simple low-resolution volumetric display based on holographic volume segments is presented. The display system comprises a proprietary holographic screen, a laser projector, associated optics and a control unit. The holographic screen resembles a sheet of frosted glass about A4 in size (20x30 cm). It is rear-illuminated by the laser projector, which is in turn driven by the controller, to produce simple 3D images that appear outside the plane of the screen. A series of spatially multiplexed and interleaved interference patterns is pre-encoded across the surface of the holographic screen, and each illumination pattern is capable of reconstructing a single holographic volume segment. Up to nine holograms are multiplexed on the holographic screen in a variety of configurations, including a series of numeric and segmented digits. The demonstrator shows good results under laboratory conditions, with moving colour 3D images in front of or behind the holographic screen.

  3. Will true 3d display devices aid geologic interpretation. [Mirage

    SciTech Connect

    Nelson, H.R. Jr.

    1982-04-01

    A description is given of true 3D display devices and techniques that are being evaluated in various research laboratories around the world. These advances are closely tied to the expected application of 3D display devices as interpretational tools for explorationists. 34 refs.

  4. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberration severely affects the display performance of an auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

  5. Optical characterization and measurements of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Salmimaa, Marja; Järvenpää, Toni

    2008-04-01

    3D or autostereoscopic display technologies offer attractive solutions for enriching the multimedia experience. However, both characterization and comparison of 3D displays have been challenging when the definitions for the consistent measurement methods have been lacking and displays with similar specifications may appear quite different. Earlier we have investigated how the optical properties of autostereoscopic (3D) displays can be objectively measured and what are the main characteristics defining the perceived image quality. In this paper the discussion is extended to cover the viewing freedom (VF) and the definition for the optimum viewing distance (OVD) is elaborated. VF is the volume inside which the eyes have to be to see an acceptable 3D image. Characteristics limiting the VF space are proposed to be 3D crosstalk, luminance difference and color difference. Since the 3D crosstalk can be presumed to be dominating the quality of the end user experience and in our approach is forming the basis for the calculations of the other optical parameters, the reliability of the 3D crosstalk measurements is investigated. Furthermore the effect on the derived VF definition is evaluated. We have performed comparison 3D crosstalk measurements with different measurement device apertures and the effect of different measurement geometry on the results on actual 3D displays is reported.
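
    For reference, one widely used way to express 3D crosstalk from luminance measurements is the leakage from the unintended view relative to the intended view after black-level subtraction; the small sketch below is a hedged illustration of that convention, not necessarily the exact definition used by the authors.

        def crosstalk_percent(lum_unintended, lum_intended, lum_black=0.0):
            """Leakage luminance over intended luminance, black level removed
            (a common convention; an assumption, not the paper's definition)."""
            return 100.0 * (lum_unintended - lum_black) / (lum_intended - lum_black)

        # Example: 2 cd/m2 leaks into a view driven at 80 cd/m2, black level 0.5
        print(crosstalk_percent(2.0, 80.0, 0.5))  # ~1.9 %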

  6. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has been developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, stereoscopic 3D technique has been combined with the curved displays. However, despite the increased needs for 3D function in the curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most of the previous studies have investigated their basic ergonomic aspects such as viewing posture and distance with only 2D views. It has generally been known that curved displays are more effective in enhancing involvement in specific content stories because field of views and distance from the eyes of viewers to both edges of the screen are more natural in curved displays than in flat panel ones. For flat panel displays, ocular torsions may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences in viewing distances from the center of the screen to eyes of viewers and from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  7. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  8. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  9. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. We then explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
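
    The "refocusing" operation mentioned above is, in its simplest form, a shift-and-add over the sub-aperture views of the light field. The sketch below assumes the light field has already been decoded into a 4D numpy array L[u, v, y, x] and uses integer-pixel shifts for brevity; it is a generic illustration, not the authors' real-domain processing method.

        import numpy as np

        def refocus(lightfield, alpha):
            """Shift-and-add refocusing of a grayscale light field L[u, v, y, x].
            `alpha` selects the synthetic focal plane: each sub-aperture view is
            shifted in proportion to its offset from the array centre, then all
            views are averaged."""
            U, V, H, W = lightfield.shape
            cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
            out = np.zeros((H, W), dtype=np.float64)
            for u in range(U):
                for v in range(V):
                    dy = int(round(alpha * (u - cu)))
                    dx = int(round(alpha * (v - cv)))
                    out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
            return out / (U * V)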

  10. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where necessary to ensure good performance.

  11. Development of an automultiscopic true 3D display (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Pradhan, Ranjit D.; Aye, Tin M.; Yu, Kevin H.; Okorogu, Albert O.; Chua, Kang-Bin; Tun, Nay; Win, Tin; Schindler, Axel

    2005-05-01

    True 3D displays, whether generated by volume holography, merged stereopsis (requiring glasses), or autostereoscopic methods (stereopsis without the need for special glasses), are useful in a great number of applications, ranging from training through product visualization to computer gaming. Holography provides an excellent 3D image but cannot yet be produced in real time, merged stereopsis results in accommodation-convergence conflict (where distance cues generated by the 3D appearance of the image conflict with those obtained from the angular position of the eyes) and lacks parallax cues, and autostereoscopy produces a 3D image visible only from a small region of space. Physical Optics Corporation is developing the next step in real-time 3D displays, the automultiscopic system, which eliminates accommodation-convergence conflict, produces 3D imagery from any position around the display, and includes true image parallax. Theory of automultiscopic display systems is presented, together with results from our prototype display, which produces 3D video imagery with full parallax cues from any viewing direction.

  12. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may yield a more accurate evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would be beneficial to the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques that assign different crosstalk levels to the image pairs. It can be seen from the literature that the structures of scenes have a significant impact on the perceived crosstalk, so we first extract the differences of the structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. The structural changes of the left view and right view are then computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
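
    As a rough sketch of the structural-difference step described above (assuming scikit-image's SSIM implementation, grayscale inputs scaled to [0, 1], and a plain average as the combination rule, which is our placeholder rather than the paper's exact weighting):

        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        def crosstalk_distortion_map(left, left_xt, right, right_xt):
            """Structural-change maps between original and crosstalk-distorted
            views, combined into one overall distortion map."""
            _, map_l = ssim(left, left_xt, data_range=1.0, full=True)
            _, map_r = ssim(right, right_xt, data_range=1.0, full=True)
            # SSIM is 1 where structure is preserved; convert to distortion.
            return 0.5 * ((1.0 - map_l) + (1.0 - map_r))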

  13. A Comparison of the Perceptual Benefits of Linear Perspective and Physically-Based Illumination for Display of Dense 3D Streamtubes

    SciTech Connect

    Banks, David C

    2008-01-01

    Large datasets typically contain coarse features comprised of finer sub-features. Even if the shapes of the small structures are evident in a 3D display, the aggregate shapes they suggest may not be easily inferred. From previous studies in shape perception, the evidence has not been clear whether physically-based illumination confers any advantage over local illumination for understanding scenes that arise in visualization of large data sets that contain features at two distinct scales. In this paper we show that physically-based illumination can improve the perception for some static scenes of complex 3D geometry from flow fields. We perform human-subjects experiments to quantify the effect of physically-based illumination on participant performance for two tasks: selecting the closer of two streamtubes from a field of tubes, and identifying the shape of the domain of a flow field over different densities of tubes. We find that physically-based illumination influences participant performance as strongly as perspective projection, suggesting that physically-based illumination is indeed a strong cue to the layout of complex scenes. We also find that increasing the density of tubes for the shape identification task improved participant performance under physically-based illumination but not under the traditional hardware-accelerated illumination model.

  14. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, the stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs for 2000 to 2020 time frame is investigated. Although, the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  15. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  16. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  17. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  18. 3D head mount display with single panel

    NASA Astrophysics Data System (ADS)

    Wang, Yuchang; Huang, Junejei

    2014-09-01

    A head-mounted display for entertainment usually only needs to be lightweight, but professional applications have more requirements: image quality, field of view (FOV), color gamut, response time, and lifetime must also be considered. A head-mounted display based on a single-chip TI DMD spatial light modulator is proposed. The multiple light sources and the image-splitting relay system are the major design tasks. The relay system images the object (the DMD) onto two image planes to create binocular vision. A 0.65-inch 1080p DMD is adopted. The relay performs well and includes a doublet to reduce chromatic aberration. Space is reserved for a mirror and an adjustable mechanism. The mirror splits the rays toward the left and right image planes; these planes serve as the objects of the eyepieces and are imaged to the eyes. A changeable mechanism provides a variable interpupillary distance (IPD). The folded optical path keeps the center of gravity of the HMD close to the head and prevents an uncomfortable downward force from being applied to the head or eye sockets. Two RGB LED assemblies illuminate the DMD at different angles. The light is highly collimated, and the divergence angle is small enough that rays from one LED enter only the correct eyepiece. This switching is electronically controlled; there are no moving parts to produce vibration, so fast switching is possible. The two LEDs are synchronized with the 3D video sync by a driver board that also controls the DMD: when the left-eye image is displayed on the DMD, the LED for the left optical path turns on, and vice versa for the right image, so a 3D scene is accomplished.

  19. In memoriam: Fumio Okano, innovator of 3D display

    NASA Astrophysics Data System (ADS)

    Arai, Jun

    2014-06-01

    Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three- Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.

  20. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with slightly different perspectives, in such a way that the left view is seen only by the left eye and the right view only by the right eye. One of the major challenges in such optical devices, however, is crosstalk between the two channels. Crosstalk arises when the optical device does not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye sees a little of the left image; this results in eyestrain and headaches. A pair of interference filters worn as eyewear can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" means that the passband regions of one filter do not overlap with those of the other; instead, the regions are interdigitated. Along with the glasses, the 3D display produces colors composed of primary colors (the basis for producing colors) whose spectral bands are the same as the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of one filter are seen only by the eye that has the matching multiband filter. The inherent characteristic of the interference filter allows little or no transmission of the wrong side of the stereoscopic images.
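
    A small sketch of what "conjugated" means in practice: the passbands of the two filters are interleaved and must not overlap. The band edges below are made-up numbers purely for illustration.

        # Hypothetical passbands in nm, (low, high) per band, one list per eye.
        LEFT = [(430, 450), (500, 520), (600, 620)]
        RIGHT = [(460, 480), (530, 550), (630, 650)]

        def conjugated(bands_a, bands_b):
            """True if no band of one filter overlaps any band of the other."""
            return all(hi_a <= lo_b or hi_b <= lo_a
                       for lo_a, hi_a in bands_a
                       for lo_b, hi_b in bands_b)

        print(conjugated(LEFT, RIGHT))  # True: the passbands interdigitate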

  1. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  2. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses which deliver at least two parallax images per eye through pinholes equipped with light selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. In the case where two pinholes equipped with color filters are used per eye, the technique can be applied on a regular stereoscopic display by only uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye because of the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that are not possible to display and comfortably view on regular 3DTV and cinema. PMID:25503026

  3. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  4. True 3D displays for avionics and mission crewstations

    NASA Astrophysics Data System (ADS)

    Sholler, Elizabeth A.; Meyer, Frederick M.; Lucente, Mark E.; Hopper, Darrel G.

    1997-07-01

    3D threat projection has been shown to decrease the human recognition time for events, especially for a jet fighter pilot or C4I sensor operator for whom early realization that a hostile threat condition exists is the basis of survival. Decreased threat recognition time improves the survival rate and results from more effective presentation techniques, including the visual cue of a true 3D (T3D) display. The concept of 'font' describes the approach adopted here, but whereas a 2D font comprises pixel bitmaps, a T3D font herein comprises a set of hologram bitmaps. The T3D font bitmaps are pre-computed, stored, and retrieved as needed to build images comprising symbols and/or characters. Human performance improvement, hologram generation for a T3D symbol font, projection requirements, and potential hardware implementation schemes are described. The goal is to employ computer-generated holography to create T3D depictions of dynamic threat environments using fieldable hardware.

  5. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
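
    A minimal sketch of the red/green anaglyph encoding mentioned above, assuming two grayscale renderings of the point cloud from slightly different viewpoints (8-bit numpy arrays); it is illustrative only and not the original raster-display implementation.

        import numpy as np

        def red_green_anaglyph(left_gray, right_gray):
            """Pack the left view into the red channel and the right view into
            the green channel; through red/green glasses each eye then sees
            only its own rendering of the 3-D scattergram."""
            h, w = left_gray.shape
            rgb = np.zeros((h, w, 3), dtype=np.uint8)
            rgb[..., 0] = left_gray   # red   <- left-eye image
            rgb[..., 1] = right_gray  # green <- right-eye image
            return rgb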

  6. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without the usage of different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays. Our paper includes an overview from 1912 up to today. During several years of investigations on swept volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX-Team started investigations also in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany we were able to achieve considerable results despite minor funding resources within this non-commercial group. Core element of our setup is the display volume which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group or other fluorescent materials). We focused our investigations on one frequency, two step upconversion (OFTS-UC) and two frequency, two step upconversion (TFTSUC) with IR-Lasers as excitation source. Our main interest was both to find an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+-ions which were excited in order to create a volumetric pixel (voxel). In addition to that the crystals are limited to a very small size which is the reason why we later investigated on heavy metal fluoride glasses which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to the mentioned group and making it possible to increase both the display volume and the brightness of the images significantly. Although, our display is currently

  7. Tangible holography: adding synthetic touch to 3D display

    NASA Astrophysics Data System (ADS)

    Plesniak, Wendy J.; Klug, Michael A.

    1997-04-01

    Just as we expect holographic technology to become a more pervasive and affordable instrument of information display, so too will high fidelity force-feedback devices. We describe a testbed system which uses both of these technologies to provide simultaneous, coincident visuo- haptic spatial display of a 3D scene. The system provides the user with a stylus to probe a geometric model that is also presented visually in full parallax. The haptics apparatus is a six degree-of-freedom mechanical device with servomotors providing active force display. This device is controlled by a free-running server that simulates static geometric models with tactile and bulk material properties, all under ongoing specification by a client program. The visual display is a full parallax edge-illuminated holographic stereogram with a wide angle of view. Both simulations, haptic and visual, represent the same scene. The haptic and visual displays are carefully scaled and aligned to provide coincident display, and together they permit the user to explore the model's 3D shape, texture and compliance.

  8. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  9. 3D display design concept for cockpit and mission crewstations

    NASA Astrophysics Data System (ADS)

    Thayn, Jarod R.; Ghrayeb, Joseph; Hopper, Darrel G.

    1999-08-01

    Simple visual cues increase human awareness and perception and decrease reaction times. Humans are visual beings requiring visual cues to warn them of impending danger, especially in combat aviation. The simplest cues are those that allow individuals to immerse themselves in the situations to which they must respond. Two-dimensional (2-D) display technology has real limits on what types of information, and how much information, it can present to the viewer without becoming disorienting or confusing. True situational awareness requires a transition from 2-D to three-dimensional (3-D) display technology.

  10. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated the distortion in the elemental images for an EHR projection-type integral 3-D system. Therefore, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
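
    A hedged sketch of the kind of electronic pre-compensation described above: if the distortion of the projection path has been fitted to a standard lens-distortion model (the camera matrix and distortion coefficients below are hypothetical calibration outputs), the elemental-image frame can be warped with the inverse mapping before projection so that the optical distortion cancels on the lens array.

        import cv2

        def precompensate(elemental_frame, camera_matrix, dist_coeffs):
            """Apply the inverse of the fitted projection distortion to the
            elemental-image frame (a sketch; the paper's own signal processor is
            not reproduced, and the calibration inputs are assumed to come from
            a separate measurement of a projected test pattern)."""
            return cv2.undistort(elemental_frame, camera_matrix, dist_coeffs)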

  11. Step barrier system multiview glassless 3D display

    NASA Astrophysics Data System (ADS)

    Mashitani, Ken; Hamagishi, Goro; Higashino, Masahiro; Ando, Takahisa; Takemoto, Satoshi

    2004-05-01

    The step barrier technology with multiple parallax images overcomes a problem of the conventional parallax barrier system, in which the image quality of each view deteriorates only in the horizontal direction: the step barrier distributes the resolution loss to both the horizontal and the vertical directions. The system has a simple structure consisting of a flat-panel display and a step barrier. The apertures of the step barrier are not stripes but tiny rectangles arranged in the shape of stairs, and the sub-pixels of each view have the same arrangement. Three image-processing methods for the system, applicable to computer graphics and real images, have also been proposed. Two types of 3-D displays were then developed, a 22-inch model and a 50-inch model. The 22-inch model employs a very high-definition liquid crystal display of 3840 x 2400 pixels; the number of parallax images is seven, and the resolution of one view is 1646 x 800. The 50-inch model has four viewing points on a plasma display panel of 1280 x 768 pixels; it can present stereoscopic animations, and the resolution of one view is 960 x 256 pixels. Moreover, a 2-D/3-D compatible system, either structural or electronic, was developed.
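
    The per-view resolutions quoted above are consistent with a simple sub-pixel accounting: each RGB sub-pixel column is assigned to one of the views, while three panel rows form one view row. The worked check below reflects our reading of that accounting, not a formula taken from the paper.

        def step_barrier_view_resolution(panel_w, panel_h, n_views):
            """Per-view resolution if horizontal sub-pixels (3 per pixel) are
            shared among n_views and three panel rows form one view row."""
            return (panel_w * 3) // n_views, panel_h // 3

        print(step_barrier_view_resolution(3840, 2400, 7))  # (1645, 800), i.e. ~1646 x 800
        print(step_barrier_view_resolution(1280, 768, 4))   # (960, 256)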

  12. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems, and so on. 3D imaging display systems come in two types of presentation method: one is a 3-D display system using special glasses, and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area of the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the demerit that the size of the monitor restricts the visual field for displaying images; a conventional display can show only one screen, and its area cannot be enlarged, for example to twice the size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables the observers to see a virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier, or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold; meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  13. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct a submarine pipeline and its surrounding submarine terrain in the computer using the Horde3D graphics rendering engine, on the foundation of the "submarine pipeline and relative landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from its monitoring.

  14. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  15. Wide-viewing-angle floating 3D display system with no 3D glasses

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Previously, the author has described a new 3D imaging technology entitled 'real depth' with several different configurations and methods of implementation. Included were several methods to 'float' images in free space. Viewers can pass their hands through the image or appear to hold it in their hands. Most implementations provide an angle of view of approximately 45 degrees. The technology produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. Unlike stereoscopic 3D imaging, no glasses, headgear or other viewing aids are used. In addition to providing traditional depth cues, such as perspective and background image occlusion, the technology also provides both horizontal and vertical binocular parallax, producing visual accommodation and convergence which coincide. Consequently, viewing these images does not produce headaches, fatigue, or eyestrain, regardless of how long they are viewed. A method was also proposed to provide a floating image display system with a wide angle of view. Implementation of this design proved problematic, producing various image distortions. In this paper the author discloses new methods to produce aerial images with a wide angle of view and improved image quality.

  16. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views in order to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
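
    A toy version of the kind of multiplexing-with-crosstalk model discussed above; the symmetric nearest-neighbour leakage and the single crosstalk fraction are illustrative assumptions, not the paper's exact model.

        import numpy as np

        def perceived_view(views, k, c):
            """Perceived view k when a fraction c of each adjacent view leaks in.
            views : list of equally sized grayscale images (numpy arrays)
            c     : crosstalk fraction per neighbouring view (0 <= c < 0.5).
            Edge views receive leakage from one side only, so their weights do
            not sum to exactly one in this simplified sketch."""
            v = (1.0 - 2.0 * c) * views[k]
            if k > 0:
                v = v + c * views[k - 1]
            if k < len(views) - 1:
                v = v + c * views[k + 1]
            return v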

  17. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high quality homographies between pairs of imagers to develop a global optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically-diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vengeance, should that information be available.
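
    A minimal sketch of the pairwise-homography ingredient mentioned above, using OpenCV feature matching and RANSAC; only the per-pair step is shown, and the global optimization across the whole camera array described in the abstract is not reproduced.

        import cv2
        import numpy as np

        def pairwise_homography(img_a, img_b):
            """Estimate the homography mapping image A onto image B from ORB
            matches; one such matrix per camera pair would feed the global
            calibration."""
            orb = cv2.ORB_create(2000)
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_a, des_b),
                             key=lambda m: m.distance)[:500]
            src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return H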

  18. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied, by various DGPS sources including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  19. Recent research results in stereo 3-D pictorial displays at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.

    1990-01-01

    Recent results from a NASA-Langley program which addressed stereo 3D pictorial displays from a comprehensive standpoint are reviewed. The program dealt with human factors issues and display technology aspects, as well as flight display applications. The human factors findings include addressing a fundamental issue challenging the application of stereoscopic displays in head-down flight applications, with the determination that stereoacuity is unaffected by the short-term use of stereo 3D displays. While stereoacuity has been a traditional measurement of depth perception abilities, it is a measure of relative depth, rather than actual depth (absolute depth). Therefore, depth perception effects based on size and distance judgments and long-term stereo exposure remain issues to be investigated. The applications of stereo 3D to pictorial flight displays within the program have repeatedly demonstrated increases in pilot situational awareness and task performance improvements. Moreover, these improvements have been obtained within the constraints of the limited viewing volume available with conventional stereo displays. A number of stereo 3D pictorial display applications are described, including recovery from flight-path offset, helicopter hover, and emulated helmet-mounted display.

  20. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  1. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels which constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system, many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3-D images without contradiction between binocular convergence and focal accommodation.
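
    A rough sketch of the edge-plus-depth idea: extract noticeable edges, estimate disparity with a standard block matcher, and keep depth only at the edge pixels. The OpenCV functions and parameters are our illustrative choices; the paper's own stereo-matching details are not reproduced.

        import cv2
        import numpy as np

        def edge_depth(left_gray, right_gray):
            """Disparity restricted to noticeable edges of the left view, i.e.
            the pixels the volumetric edge display would draw at depth."""
            edges = cv2.Canny(left_gray, 80, 160)
            stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
            disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
            disparity[edges == 0] = 0.0  # keep depth only on edge pixels
            return edges, disparity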

  2. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Astrophysics Data System (ADS)

    Ericson, Mark; McKinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
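
    Not the Convolvotron-based generator used in the study, but a bare-bones illustration of how a mono speech signal can be given a lateral position using only an interaural time difference (a Woodworth-style approximation) and a crude level difference; HRTF filtering, head tracking, and room acoustics are omitted.

        import numpy as np

        def place_source(mono, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
            """Return an (N, 2) stereo array approximating a source at the given
            azimuth (positive = listener's right, an assumed convention)."""
            if azimuth_deg == 0:
                return np.stack([mono, mono], axis=1)
            az = np.radians(abs(azimuth_deg))
            itd = head_radius / c * (az + np.sin(az))   # interaural delay, seconds
            delay = int(round(itd * fs))
            far = np.concatenate([np.zeros(delay), mono])[:len(mono)]
            far = far * 10 ** (-6.0 / 20.0)             # ~6 dB level difference (assumed)
            left, right = (far, mono) if azimuth_deg > 0 else (mono, far)
            return np.stack([left, right], axis=1)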

  3. Dual side transparent OLED 3D display using Gabor super-lens

    NASA Astrophysics Data System (ADS)

    Chestak, Sergey; Kim, Dae-Sik; Cho, Sung-Woo

    2015-03-01

    We devised a dual-side transparent 3D display using a transparent OLED panel and two lenticular arrays. The OLED panel is sandwiched between two parallel confocal lenticular arrays, forming a Gabor super-lens. The display provides dual-side stereoscopic 3D imaging and a floating image of an object placed behind it. The floating image can be superimposed on the displayed 3D image. The displayed autostereoscopic 3D images are composed of 4 views, each with a resolution of 64x90 pixels.

  4. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. Steady state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature about SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good quality in presentation, various stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants participated in the experiment with a patterned retarder 3D display. The results show that there is a significant difference (p-value<0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. The 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from SSVEP responses, and modify the proper disparity of 3D images automatically in the future.
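
    A minimal sketch of one common way to compute the SSVEP signal-to-noise ratio reported above: power at the stimulation frequency divided by the mean power of neighbouring frequency bins. The neighbourhood width is an assumption, and this need not match the authors' exact definition.

        import numpy as np

        def ssvep_snr(eeg, fs, f_stim, n_neighbours=5):
            """SNR of the spectral peak at f_stim against nearby bins."""
            spectrum = np.abs(np.fft.rfft(eeg)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            k = int(np.argmin(np.abs(freqs - f_stim)))
            lo, hi = max(k - n_neighbours, 0), min(k + n_neighbours + 1, len(spectrum))
            neighbours = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]
            return spectrum[k] / neighbours.mean()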

  5. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The software combined with eye tracking provides the right images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiviews for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace an eyewear-assisted counterpart. PMID:26074575
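    The eye-tracked pixel-switching idea can be sketched as follows, under simplifying assumptions: a vertical barrier, a cyclic column-to-view assignment, and a tracker that reports which view indices currently reach each eye. The view count and all names are hypothetical; the engineered barrier geometry of the actual system is more involved.

```python
import numpy as np

N_VIEWS = 9  # assumed view count; the actual design may differ

def column_view_index(n_columns, offset=0):
    """Cyclic view index of each panel column behind a vertical barrier."""
    return (np.arange(n_columns) + offset) % N_VIEWS

def eye_tracked_mask(n_columns, view_at_left_eye, view_at_right_eye):
    """Keep only the columns whose view currently reaches either eye.

    view_at_left_eye / view_at_right_eye are assumed to come from the
    eye-tracking geometry; all other columns are blanked, which is the
    point-crosstalk reduction described in the abstract.
    """
    views = column_view_index(n_columns)
    mask = np.isin(views, [view_at_left_eye, view_at_right_eye])
    return views, mask

views, mask = eye_tracked_mask(n_columns=1920, view_at_left_eye=3, view_at_right_eye=4)
panel = np.zeros(1920)          # 1 = column driven, 0 = column blanked
panel[mask] = 1
print(int(panel.sum()), "of 1920 columns active")
```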

  6. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio ConvolvotronsTM by Crystal River Engineering and coupled to the listener with a Polhemus IsotrakTM tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  7. Color and brightness uniformity compensation of a multi-projection 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Park, Du-Sik

    2015-09-01

    Light-field displays are good candidates in the field of glasses-free 3D display for showing real 3D images without decreasing the image resolution. Light-field displays can create light rays using a large number of projectors in order to express natural 3D images. However, in light-field displays using multi-projectors, the compensation is very critical due to different characteristics and arrangement positions of each projector. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro projectors for immersive natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color and brightness uniformity compensation methods are utilized along with an improved projector configuration design and a real-time calibration process of projector alignment. For color uniformity compensation, projected images from each projector are captured by a camera arranged in front of the screen, the number of pixels based on RGB color intensities of each captured image is analyzed, and the distributions of RGB color intensities are adjusted by using the respective maximum values of RGB color intensities. For brightness uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function, and compensating weights of each screen pixel are calculated and transferred to the projection images by the mapping relationship between the screen and projector coordinates. Finally, brightness compensated images are rendered for each projector. Consequently, the display shows improved color and brightness uniformity, and consistent, exceptional 3D image quality.
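    A hedged sketch of the brightness-compensation step is shown below: sparse camera measurements of screen luminance are interpolated with Gaussian radial basis functions, and a per-pixel gain flattens the modeled luminance toward a common target. The RBF width, target choice, and helper names are assumptions for illustration, not the authors' calibrated pipeline.

```python
import numpy as np

def gaussian_rbf_fit(sample_xy, sample_lum, eps=20.0):
    """Fit measured screen luminance with Gaussian radial basis functions."""
    d2 = ((sample_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-eps * d2) + 1e-8 * np.eye(len(sample_lum))  # small ridge for stability
    weights = np.linalg.solve(phi, sample_lum)

    def predict(xy):
        d2p = ((xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2p) @ weights

    return predict

def compensation_gain(predict, grid_xy, target=None):
    """Per-pixel gain that flattens the modeled luminance to the target level."""
    lum = predict(grid_xy)
    target = lum.min() if target is None else target  # attenuate only, never boost
    return np.clip(target / lum, 0.0, 1.0)

# toy usage: a screen that is brighter in the centre and dimmer at the edges
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(50, 2))                 # normalized screen coordinates
lum = 1.0 - 0.5 * ((pts - 0.5) ** 2).sum(1)           # measured luminance samples
predict = gaussian_rbf_fit(pts, lum)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)), -1).reshape(-1, 2)
gain = compensation_gain(predict, grid).reshape(64, 64)
```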

  8. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such a 3D display screen has been restricted to a limited size due to the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. Here we made two phase plates with full nano-grating pixel coverage at a speed of 20 mm2/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal axis and vertical axis was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays. PMID:27136814

  9. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    A computer-generated hologram (CGH) should be obtained with high accuracy and high speed in 3D holographic display, and most research focuses on speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look-up table method and the compressed look-up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where a huge amount of data is required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future. PMID:26831987
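    The look-up-table idea behind such methods rests on the separability of the paraxial Fresnel zone term: each object point contributes exp(i*pi*((x-x0)^2+(y-y0)^2)/(lambda*z)), which factorizes into a 1-D x-term and a 1-D y-term, so only 1-D vectors need to be tabulated per depth. The sketch below illustrates that principle only; it is not the AC-LUT algorithm of the paper.

```python
import numpy as np

def fresnel_cgh(points, holo_shape, pitch, wavelength):
    """Paraxial point-source CGH built from separable 1-D Fresnel factors.

    Each object point (x0, y0, z, amplitude) adds a rank-1 outer product of
    two 1-D chirps, which is the structure that (compressed) look-up-table
    methods exploit.  Generic illustration, not the paper's AC-LUT.
    """
    ny, nx = holo_shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    holo = np.zeros(holo_shape, dtype=complex)
    for x0, y0, z, amp in points:
        fx = np.exp(1j * np.pi * (x - x0) ** 2 / (wavelength * z))  # 1-D table entry
        fy = np.exp(1j * np.pi * (y - y0) ** 2 / (wavelength * z))
        holo += amp * np.outer(fy, fx)                              # rank-1 update
    return holo

# toy usage: two object points at different depths, 8 um pixels, 532 nm light
pts = [(0.0, 0.0, 0.10, 1.0), (0.5e-3, -0.3e-3, 0.12, 0.8)]
H = fresnel_cgh(pts, holo_shape=(512, 512), pitch=8e-6, wavelength=532e-9)
phase_hologram = np.angle(H)   # phase-only encoding for an SLM
```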

  10. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data is produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consists of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

  11. Active and interactive floating image display using holographic 3D images

    NASA Astrophysics Data System (ADS)

    Morii, Tsutomu; Sakamoto, Kunio

    2006-08-01

    We developed a prototype tabletop holographic display system. This system consists of an object recognition system and a spatial imaging system. In this paper, we describe the recognition system using an RFID tag and the 3D display system using holographic technology. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images [1-3]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. In this paper, the authors describe the interactive tabletop 3D display system. The observer can view virtual images when the user puts a special object on the display table. The key technologies of this system are the object recognition system and the spatial imaging display.

  12. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations.

  13. IPMC actuator array as a 3D haptic display

    NASA Astrophysics Data System (ADS)

    Nakano, Masanori; Mazzone, Andrea; Piffaretti, Filippo; Gassert, Roger; Nakao, Masayuki; Bleuler, Hannes

    2005-05-01

    Based on the concept of Mazzone et al., we have designed a novel system to be used simultaneously as an input and output device for designing, presenting, or recognizing objects in three-dimensional space. Unlike state-of-the-art stereoscopic display technologies that generate a virtual image of a three-dimensional object, the proposed system, a "digital clay"-like device, physically imitates the desired object. The object can not only be touched and explored intuitively but also deform itself physically. In order to succeed in developing such a deformable structure, self-actuating ionic polymer-metal composite (IPMC) materials are proposed. IPMC is a type of electroactive polymer (EAP) and has recently been drawing much attention. It has a high force-to-weight ratio and shape flexibility, making it ideal for robotic applications. This paper introduces the first steps and results in the attempt to develop such a structure. A strip consisting of four actuators arranged in line was fabricated and evaluated, showing promising capabilities in deforming two-dimensionally. A simple model to simulate the deformation of an IPMC actuator using finite element methods (FEM) is also proposed and compared with the experimental results. The model can easily be implemented into computer aided engineering (CAE) software. This will expand the application possibilities of IPMCs. Furthermore, a novel method for creating multiple actuators on one membrane with a laser machining tool is introduced.

  14. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before; on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  15. Development and test of a low-cost 3D display for small aircraft

    NASA Astrophysics Data System (ADS)

    Sachs, Gottfried; Sperl, Roman; Nothnagel, Klaus

    2002-07-01

    A low-cost 3D display and navigation system providing guidance information in a 3-dimensional format is described. The system, which includes an LC display, a PC-based computer for generating the 3-dimensional guidance information, and a navigation system providing D/GPS- and inertial-sensor-based position and attitude data, was realized using commercial-off-the-shelf components. Efficient computer software has been developed to generate the 3-dimensional guidance information with a high update rate. The guidance concept comprises an image of the outside world as well as a presentation of the command flight path, a predictor and other guidance elements in a 3-dimensional format.

  16. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  17. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  18. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
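    For the stereo-vision route, the geometry reduces to the classic relation Z = f*B/d (depth from focal length in pixels, baseline, and disparity). The sketch below pairs that formula with a deliberately naive SAD block matcher along one scanline; all parameter values are illustrative only.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation Z = f * B / d (d and f in pixels)."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-6), np.inf)

def block_match_row(left_row, right_row, x, block=7, max_disp=64):
    """Naive sum-of-absolute-differences matching along one scanline."""
    half = block // 2
    ref = left_row[x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, x - half)):
        cand = right_row[x - d - half:x - d + half + 1]
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# toy usage: a 10-pixel shift between scanlines recovers disparity ~10
rng = np.random.default_rng(3)
right = rng.standard_normal(400)
left = np.roll(right, 10)
d = block_match_row(left, right, x=200)
print(d, depth_from_disparity(d, focal_px=800, baseline_m=0.12))
```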

  19. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of the real-time 3D holographic display developed last year, we developed a new concept of auto-stereoscopic multiview display (64 views): a wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps, and an image deflection system made with an AOD (Acousto-Optic Deflector) driven by a piezo-electric transducer generating a variable standing acoustic wave on the crystal that acts as a phase grating. The DMD projects in fast sequence 64 points of view of the image onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected to a different angle of view. A holographic screen at a proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed to the observer. A telescope optical system enlarges the image to the right dimension. VHDL firmware to render in real time (16 ms) 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3Ds) and depth-map-encoded video images was developed on the resident Virtex5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  20. Multiview image integration system for glassless 3D display

    NASA Astrophysics Data System (ADS)

    Ando, Takahisa; Mashitani, Ken; Higashino, Masahiro; Kanayama, Hideyuki; Murata, Haruhiko; Funazou, Yasuo; Sakamoto, Naohisa; Hazama, Hiroshi; Ebara, Yasuo; Koyamada, Koji

    2005-03-01

    We have developed a multi-view image integration system, which combines seven parallax video images into a single video image so that it fits the parallax barrier. The apertures of this barrier are not stripes but tiny rectangles that are arranged in the shape of stairs. Commodity hardware is used to satisfy a specification which requires that the resolution of each parallax video image is SXGA (1645×800 pixel resolution), the resulting integrated image is QUXGA-W (3840×2400 pixel resolution), and the frame rate is fifteen frames per second. Notably, the system can deliver QUXGA-W video images, each corresponding to 27 MB, at 15 fps, that is, about 2 Gbps. Using the integration system and a liquid crystal display with the parallax barrier, we can enjoy an immersive live video image which supports seven viewpoints without special glasses. In addition, since the system can superimpose CG images of the relevant seven viewpoints onto the live video images, it is possible to communicate with remote users by sharing a virtual object.
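    The integration step amounts to a subpixel-to-view mapping: each subpixel of the output panel image is copied from one of the seven parallax images according to a stair-step pattern that matches the barrier apertures. The mapping below is a generic illustration with an assumed slope and subpixel layout, not the exact layout of the system described above.

```python
import numpy as np

def interleave_views(views, slope=1):
    """Interleave N parallax images into one panel image.

    Each subpixel (y, x, c) is taken from view ((x*3 + c + slope*y) % N),
    i.e. the assignment steps sideways by `slope` subpixels per row, which
    mimics a stair-step barrier aperture layout.  The true mapping of the
    system in the abstract is not published here; this is an illustration.
    """
    n, h, w, _ = views.shape
    out = np.empty((h, w, 3), dtype=views.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    for c in range(3):
        view_idx = (xs * 3 + c + slope * ys) % n
        out[..., c] = views[view_idx, ys, xs, c]
    return out

# toy usage: seven flat-colored 800x1280 views
views = np.stack([np.full((800, 1280, 3), v * 30, dtype=np.uint8) for v in range(7)])
panel = interleave_views(views)
```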

  1. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on the optical-sensor-based system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, the bare-finger touch system with a sequential illuminator can interact with autostereoscopic images using a bare finger. Furthermore, the proposed methods were verified on a 4-inch panel with embedded optical sensors.

  2. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    PubMed

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

    The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into real and imaginary distributions, and then double amplitude-only spatial light modulators (A-SLMs) are employed to reconstruct the 3D images in real time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. The optical experiments further demonstrated that the proposed system has continuous and wide depth cues, which frees the observer from the eye fatigue problem. The dynamic display ability was also tested in the experiments and the results showed the possibility of true 3D interactive display. PMID:27464184
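    One plausible way to picture the decomposition is sketched below: the real and imaginary parts of the complex object field are routed to the two amplitude-only SLMs, a constant bias keeps the displayed values nonnegative (producing a zero-order term that the optics must block), and recombining the second arm with a 90-degree phase shift restores the complex field. This is an illustrative reading of the CAM idea, not the paper's analytic derivation.

```python
import numpy as np

def split_complex_field(field):
    """Split a complex wavefront into two nonnegative amplitude maps.

    Assumed scheme: SLM 1 shows the real part, SLM 2 the imaginary part;
    recombining the second arm with a +90 degree phase shift reconstructs
    Re + i*Im = field.  Amplitude-only panels cannot show negative values,
    so a constant bias is added; the resulting zero-order term has to be
    blocked in the optical system.
    """
    re, im = field.real, field.imag
    bias = max(-re.min(), -im.min(), 0.0)
    slm1 = re + bias
    slm2 = im + bias
    return slm1, slm2, bias

# toy usage: a random complex wavefront standing in for a 3D object field
rng = np.random.default_rng(4)
field = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
a1, a2, bias = split_complex_field(field)
reconstructed = (a1 - bias) + 1j * (a2 - bias)
assert np.allclose(reconstructed, field)
```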

  3. Study on basic problems in real-time 3D holographic display

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Liu, Juan; Wang, Yongtian; Pan, Yijie; Li, Xin

    2013-05-01

    In recent years, real-time three-dimensional (3D) holographic display has attracted more and more attention. Since a holographic display can entirely reconstruct the wavefront of an actual 3D scene, it can provide all the depth cues for human observation and perception, and it is believed to be the most promising technology for future 3D display. However, there are several unsolved basic problems in realizing large-size real-time 3D holographic display with a wide field of view. For example, commercial pixelated spatial light modulators (SLMs) always lead to zero-order intensity distortion; 3D holographic display needs a huge number of sampling points for the actual objects or scenes, resulting in enormous computational time; the size and the viewing zone of the reconstructed 3D optical image are limited by the space-bandwidth product of the SLM; noise from the coherent light source as well as from the system severely degrades the quality of the 3D image; and so on. Our work is focused on these basic problems, and some initial results are presented, including a technique derived theoretically and verified experimentally to eliminate the zero-order beam caused by a pixelated phase-only SLM; a method to enlarge the reconstructed 3D image and shorten the reconstruction distance using a concave reflecting mirror; and several algorithms to speed up the calculation of computer-generated holograms (CGHs) for the display.

  4. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

    We examined the ability of pilots to estimate traffic location with an Integrated Hazard Display, and how such estimates should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside-world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split-screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  5. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate a sequence of images that simulates a camera travelling through the scene from a single integral image. The application of this method makes it possible to improve the quality of 3D display images and videos.
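    Choosing the focused plane from a single capture can be illustrated with shift-and-add refocusing of the extracted sub-aperture views: views are shifted in proportion to their parallax offset and averaged, so scene points at the selected depth align and stay sharp while others blur. The geometry below (view offsets, refocus parameter) is assumed for illustration and does not reproduce the paper's pipeline.

```python
import numpy as np

def refocus(subaperture_views, slopes, alpha):
    """Shift-and-add refocusing of a plenoptic (multi-view) image.

    subaperture_views: array (n_views, h, w) extracted from the integral
    image; slopes: per-view horizontal offsets set by the lenslet geometry;
    alpha: refocus parameter selecting the focused plane.
    """
    n, h, w = subaperture_views.shape
    acc = np.zeros((h, w), dtype=float)
    for view, s in zip(subaperture_views, slopes):
        acc += np.roll(view, int(round(alpha * s)), axis=1)
    return acc / n

# toy usage: 9 views with one pixel of parallax per view of a bright bar
views = np.zeros((9, 64, 128))
for k in range(9):
    views[k, :, 40 + k] = 1.0            # the bar shifts by one pixel per view
slopes = np.arange(9) - 4                # symmetric offsets around the centre view
sharp = refocus(views, slopes, alpha=-1.0)   # re-aligns the bar (in focus)
blurred = refocus(views, slopes, alpha=0.0)  # leaves it spread out (defocused)
```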

  6. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures depending on the detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  7. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from a projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone is shifted in the lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. To realize full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed. PMID:27137284

  8. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using the one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes due to the adoption of a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie contents and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. Various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  9. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  10. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focused on the method that reproduces light rays. This method needs many viewpoint images to achieve full parallax, because it displays a different image depending on the viewpoint. We proposed to reduce wasted rays by limiting the rays emitted by the projector to the area around the viewer using a spinning mirror, and to increase the effectiveness of the display device in order to achieve a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the locus of the horizontal movement of the rays. In addition, we confirmed the switching of viewpoints and the convergence performance of the rays in the vertical direction. Therefore, we confirmed that a full-parallax display can be realized.

  11. Clinical evaluation of accommodation and ocular surface stability relevant to visual asthenopia with 3D displays

    PubMed Central

    2014-01-01

    Background: To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods: Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results: Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but showed no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The changes in NPA, OPI, and total ocular symptom scores after 3D viewing had a significant correlation (p < 0.05), implying direct associations among these factors. Conclusions: The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
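    The OPI defined above is a simple ratio; a minimal illustration with made-up numbers follows.

```python
def ocular_protection_index(tbut_s, interblink_interval_s):
    """OPI = tear break-up time / interblink interval, as defined in the abstract.

    Values below 1 mean the tear film breaks up before the next blink, i.e.
    the ocular surface is unprotected for part of each interblink period.
    """
    return tbut_s / interblink_interval_s

# example with made-up values: 5 s TBUT and a 7 s interblink interval
print(ocular_protection_index(5.0, 7.0))   # ~0.71 -> tear film fails before the blink
```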

  12. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthodontic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the manipulation of the entity tooth model, which is used to determine the optimum occlusal position, with the 3D-CT skeletal images (the 3D image display portion) that are simultaneously displayed in real time, the mandibular position and posture that improve skeletal morphology and occlusal condition can be determined. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  13. 3D printed PLA-based scaffolds

    PubMed Central

    Serra, Tiziano; Mateos-Timoneda, Miguel A; Planell, Josep A; Navarro, Melba

    2013-01-01

    Rapid prototyping (RP), also known as additive manufacturing (AM), has been well received and adopted in the biomedical field. The capacity of this family of techniques to fabricate customized 3D structures with complex geometries and excellent reproducibility has revolutionized implantology and regenerative medicine. In particular, nozzle-based systems allow the fabrication of high-resolution polylactic acid (PLA) structures that are of interest in regenerative medicine. These 3D structures find interesting applications in the regenerative medicine field, where promising uses include biodegradable templates for tissue regeneration, 3D in vitro platforms for studying cell response to different scaffold conditions, and drug screening, among others. Scaffold functionality depends not only on the fabrication technique, but also on the material used to build the 3D structure, the geometry and inner architecture of the structure, and the final surface properties, all of which are crucial parameters affecting scaffold success. This Commentary emphasizes the importance of these parameters in scaffold fabrication and also draws attention to the versatility of these PLA scaffolds as a potential tool in regenerative medicine and other medical fields. PMID:23959206

  14. Artifact reduction in lenticular multiscopic 3D displays by means of anti-alias filtering

    NASA Astrophysics Data System (ADS)

    Konrad, Janusz; Agniel, Philippe

    2003-05-01

    This paper addresses the issue of artifact visibility in automultiscopic 3-D lenticular displays. A straightforward extension of the two-view lenticular autostereoscopic principle to M views results in an M-fold loss of horizontal resolution due to the subsampling needed to properly multiplex the views. In order to circumvent the imbalance between the horizontal and vertical resolution, a tilt can be applied to the lenticules to orient them at a small angle to the vertical direction, as is done in the SynthaGram display from Stereographics Corp. In either case, to avoid aliasing, the subsampling should be preceded by suitable lowpass pre-filtering. Although for purely vertical lenticules a sufficiently narrowband lowpass horizontal filtering suffices, the situation is more complicated for diagonal lenticules; the subsampling of each view is no longer orthogonal, and more complex sampling models need to be considered. Based on multidimensional sampling theory, we have studied multiview sampling models based on lattices. These models approximate pixel positions on a lenticular automultiscopic display and lead to optimal anti-alias filters. In this paper, we report results for a separable approximation to non-separable 2-D anti-alias filters based on the assumption that the lenticule slant is small. We have carried out experiments on a variety of images and different filter bandwidths. We have observed that the theoretically optimal bandwidth is too restrictive; aliasing artifacts disappear, but some image details are lost as well. Somewhat wider bandwidths result in images with almost no aliasing and largely preserved detail. For subjectively optimized filters, the improvements, although localized, are clear and enhance the 3-D viewing experience.
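    The practical recipe that follows from this analysis can be sketched as below: low-pass each view before its 1-in-N horizontal subsampling, with a cutoff somewhat wider than the theoretical one to preserve detail. The Gaussian filter and the cutoff-to-sigma mapping are stand-ins for the lattice-derived filters of the paper, not a reproduction of them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter_and_subsample(view, n_views=9, sigma_scale=0.8):
    """Low-pass a single view before its 1-in-N horizontal subsampling.

    Heuristic mapping: sigma_scale around 1 roughly matches the theoretical
    1/N cutoff, while smaller values widen the passband, trading a little
    residual aliasing for preserved detail (as the abstract reports).
    Simplified to vertical lenticules; the paper treats slanted ones.
    """
    sigma_x = sigma_scale * n_views / np.pi
    filtered = gaussian_filter(view.astype(float), sigma=(0, sigma_x))
    return filtered[:, ::n_views]

# toy usage: a high-frequency horizontal test pattern
x = np.arange(1800)
view = np.sin(2 * np.pi * x / 4)[None, :] * np.ones((600, 1)) * 127 + 128
sub = prefilter_and_subsample(view)
```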

  15. Color decomposition method for multiprimary display using 3D-LUT in linearized LAB space

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Woo; Kim, Yun-Tae; Cho, Yang-Ho; Park, Kee-Hyon; Choe, Wonhee; Ha, Yeong-Ho

    2005-01-01

    This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up-table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraints of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ and CIELAB. In this paper, linearized LAB space is used due to its linearity and additivity in color conversion. First, the proposed method constructs a 3-D LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For the image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, then the hue and chroma are computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point. Also, the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on the gamut boundary points and input point, the color signals for the input point are then obtained using the chroma ratio divided by the chroma of the gamut boundary point. In particular, for a change of hue, the neighboring boundary points are also employed. As a result, the proposed method guarantees color signal continuity and computational efficiency, and requires less memory.

  16. Color decomposition method for multiprimary display using 3D-LUT in linearized LAB space

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Woo; Kim, Yun-Tae; Cho, Yang-Ho; Park, Kee-Hyon; Choe, Wonhee; Ha, Yeong-Ho

    2004-12-01

    This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up-table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraints of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ and CIELAB. In this paper, linearized LAB space is used due to its linearity and additivity in color conversion. First, the proposed method constructs a 3-D LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For the image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, then the hue and chroma are computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point. Also, the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on the gamut boundary points and input point, the color signals for the input point are then obtained using the chroma ratio divided by the chroma of the gamut boundary point. In particular, for a change of hue, the neighboring boundary points are also employed. As a result, the proposed method guarantees color signal continuity and computational efficiency, and requires less memory.
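    The chroma-ratio rule of the decomposition can be sketched as follows, with hypothetical look-up helpers standing in for the 3D-LUT of gamut-boundary data: the output control vector is a blend of the gray-axis controls (same lightness) and the boundary controls (same lightness and hue), weighted by the input-to-boundary chroma ratio. All helper names and the six-primary example are assumptions for illustration.

```python
import numpy as np

def decompose_lhc(L, hue_deg, chroma, boundary_lut, gray_lut):
    """Multi-primary control values from lightness/hue/chroma in linearized LAB.

    boundary_lut(L, hue) -> (chroma_at_boundary, controls_at_boundary) and
    gray_lut(L) -> controls_on_gray_axis are hypothetical helpers standing in
    for the paper's 3D-LUT.  The input is reproduced by blending the gray-axis
    and boundary control vectors with the chroma ratio.
    """
    c_bound, ctrl_bound = boundary_lut(L, hue_deg)
    ctrl_gray = gray_lut(L)
    t = np.clip(chroma / max(c_bound, 1e-6), 0.0, 1.0)   # chroma ratio
    return (1.0 - t) * ctrl_gray + t * ctrl_bound

# toy usage with a six-primary display and made-up LUT entries
def boundary_lut(L, hue_deg):
    return 60.0, np.array([0.9, 0.1, 0.0, 0.7, 0.2, 0.0]) * (L / 100.0)

def gray_lut(L):
    return np.full(6, L / 100.0)

controls = decompose_lhc(L=70.0, hue_deg=40.0, chroma=30.0,
                         boundary_lut=boundary_lut, gray_lut=gray_lut)
print(controls)
```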

  17. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    PubMed Central

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-01-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type of eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccades movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
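    The gaze-refinement step described above can be sketched directly: within the error circle around the measured gaze point, the pixel of maximum edge strength is taken as the more probable fixation point. The Sobel-based edge measure and the parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import sobel

def refine_gaze(image_gray, gaze_xy, error_radius_px):
    """Refine a gaze estimate to the strongest edge inside its error circle."""
    gx = sobel(image_gray.astype(float), axis=1)
    gy = sobel(image_gray.astype(float), axis=0)
    edge = np.hypot(gx, gy)
    h, w = image_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= error_radius_px ** 2
    edge_in = np.where(inside, edge, -np.inf)       # ignore pixels outside the circle
    y_best, x_best = np.unravel_index(np.argmax(edge_in), edge_in.shape)
    return x_best, y_best

# toy usage: a vertical step edge near a noisy gaze estimate
img = np.zeros((200, 200))
img[:, 120:] = 255
print(refine_gaze(img, gaze_xy=(110, 100), error_radius_px=20))   # snaps to x ~ 120
```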

  18. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance

    NASA Astrophysics Data System (ADS)

    Qiu, Jimmy; Hope, Andrew J.; Cho, B. C. John; Sharpe, Michael B.; Dickie, Colleen I.; DaCosta, Ralph S.; Jaffray, David A.; Weersink, Robert A.

    2012-10-01

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ˜2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue

  19. System crosstalk measurement of a time-sequential 3D display using ideal shutter glasses

    NASA Astrophysics Data System (ADS)

    Chen, Fu-Hao; Huang, Kuo-Chung; Lin, Lang-Chin; Chou, Yi-Heng; Lee, Kuen

    2011-03-01

    The market for stereoscopic 3D TV has grown fast recently; however, for 3D TV to really take off, the interoperability of shutter glasses (SG) for viewing different TV sets must be solved, so we developed a measurement method with ideal shutter glasses (ISG) to separate the contributions of time-sequential stereoscopic displays and SG. For measuring the crosstalk from time-sequential stereoscopic 3D displays, the influence of the SG must be eliminated. The advantages are that the sources of crosstalk are distinguished and the interoperability of SG is broadened. Hence, this paper proposes ideal shutter glasses, whose non-ideal properties are eliminated, as a platform to evaluate the crosstalk purely from the display. In the ISG method, the illuminance of the display is measured in the time domain to analyze the system crosstalk (SCT) of the display. In this experiment, the ISG method was used to measure SCT with a high-speed-response illuminance meter. From the time-resolved illuminance signals, the slow time response of the liquid crystal leading to SCT is visualized and quantified. Furthermore, an intriguing phenomenon was observed in which the SCT measured through SG increases with shortening viewing distance; it may arise from LC leakage of the display and shutter leakage at large viewing angles. Thus, we measured how LC and shutter leakage depend on viewing angle and verified our argument. In addition, we used the ISG method to evaluate two displays.
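    A hedged sketch of how system crosstalk can be extracted from time-resolved illuminance traces: light from the unintended view that leaks into the ideal-shutter "open" window is integrated and divided by the intended-view light in the same window. This ratio is a common crosstalk definition assumed here for illustration; the paper's exact formula may differ.

```python
import numpy as np

def system_crosstalk(t, lum_unintended, lum_intended, gate_open):
    """System crosstalk from time-resolved illuminance traces.

    gate_open marks the ideal-shutter-glasses "open" window for one eye;
    the leaked (unintended-view) light integrated over that window is
    divided by the intended-view light (assumed definition).
    """
    dt = np.gradient(t)
    leak = np.sum(lum_unintended * gate_open * dt)
    signal = np.sum(lum_intended * gate_open * dt)
    return leak / signal

# toy usage: slow LC decay of the previous frame leaking into the open window
t = np.linspace(0, 1 / 60, 1000)            # one eye period of a 120 Hz panel
gate = t > 0.5 / 60                          # shutter opens halfway through the period
intended = np.where(gate, 100.0, 0.0)        # idealized intended-view luminance
unintended = 100.0 * np.exp(-t / 2e-3)       # residual light from the previous frame
print(system_crosstalk(t, unintended, intended, gate))
```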

  20. Virtual reality 3D headset based on DMD light modulators

    SciTech Connect

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  1. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
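    The "create views locally at the display" approach is typically realized with depth-image-based rendering; the sketch below warps one reference view to a virtual viewpoint with per-pixel depth under a simplified rectified-camera model, splatting far-to-near and leaving disocclusions as holes. It illustrates the general idea rather than the specific format proposed in the paper.

```python
import numpy as np

def synthesize_view(color, depth, shift_scale):
    """Warp one reference view to a virtual viewpoint using per-pixel depth.

    Horizontal pixel shift is proportional to disparity ~ shift_scale / depth
    (simplified rectified cameras).  Pixels are splatted far-to-near so nearer
    surfaces overwrite farther ones; disocclusions remain as holes to inpaint.
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    order = np.argsort(-depth, axis=None)            # farthest pixels first
    ys, xs = np.unravel_index(order, depth.shape)
    new_x = np.clip(xs + np.round(shift_scale / depth[ys, xs]).astype(int), 0, w - 1)
    out[ys, new_x] = color[ys, xs]                   # later (nearer) writes win
    filled[ys, new_x] = True
    return out, ~filled                              # synthesized image and hole mask

# toy usage: a frontal plane with a nearer square that shifts more
color = np.zeros((240, 320, 3), dtype=np.uint8)
color[:, :, 1] = 80
depth = np.full((240, 320), 10.0)
depth[80:160, 120:200] = 4.0
color[80:160, 120:200] = (200, 50, 50)
virtual, holes = synthesize_view(color, depth, shift_scale=40.0)
```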

  2. Low-cost approach of a 3D display for general aviation aircraft

    NASA Astrophysics Data System (ADS)

    Sachs, Gottfried; Sperl, Roman; Karl, Wunibald

    2001-08-01

    A low-cost 3D display and navigation system is described which presents guidance information to the pilot in a 3-dimensional format. For achieving the low-cost goal, commercial-off-the-shelf components are used. The visual information provided by the 3D display includes a presentation of the future flight path and other guidance elements as well as an image of the outside world. A PC is used to generate the displayed information, and appropriate computer software is available to generate it in real time with an adequately high update rate. Precision navigation data, which is required for accurately adjusting the displayed guidance information, is provided by an integrated low-cost navigation system. This navigation system consists of a differential global positioning system and an inertial measurement unit. Data from the navigation system is fed into an onboard computer, which uses terrain elevation and feature analysis data to generate a synthetic image of the outside world. The system is intended to contribute to the safety of General Aviation aircraft, providing an affordable guidance and navigation aid for this type of aircraft. The low-cost 3D display and navigation system will be installed in a two-seat Grob 109B aircraft which is operated by the Institute of Flight Mechanics and Flight Control of the Technische Universität München as a research vehicle.

  3. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets can show video content in two modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player or satellite receiver. The stereo pair is split into left and right images that are shown one after another, and the viewer sees a different image with each eye through shutter glasses synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by superimposing an overlay picture, an on-screen-display (OSD) menu, on the video content. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, decide whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing to present the stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is to distinguish whether a color difference is due to the presence of the OSD or to stereo parallax. We apply special techniques to compute a reliable image difference and additionally use the cue that an OSD usually has distinctive geometrical features: straight parallel lines. The developed algorithm was tested on our video sequence database with several types of OSD of different colors and transparency levels overlaid on video content. Detection quality exceeded 99% correct answers.
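
    As a minimal sketch of the two cues described above (a reliable left/right image difference plus long straight lines), the following OpenCV snippet flags frames where a 3D-incompatible overlay is likely; the thresholds and the final decision rule are illustrative assumptions, not values from the paper.

    ```python
    import cv2
    import numpy as np

    def osd_suspect(left_bgr, right_bgr, diff_thresh=40, min_line_len=120):
        """Flag frames where a 3D-incompatible OSD overlay is likely present.

        Heuristic (illustrative): large left/right differences that form long
        straight, axis-aligned lines are more likely an OSD border than stereo
        parallax, which shifts content only horizontally.
        """
        gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
        gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

        # Per-pixel difference between the two views.
        diff = cv2.absdiff(gray_l, gray_r)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

        # Look for long straight lines in the difference mask (typical OSD borders).
        edges = cv2.Canny(mask, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=min_line_len, maxLineGap=5)
        if lines is None:
            return False
        # Count lines that are close to horizontal or vertical.
        axis_aligned = 0
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
            if angle < 5 or abs(angle - 90) < 5:
                axis_aligned += 1
        return axis_aligned >= 4  # illustrative decision threshold
    ```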

  4. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  5. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  6. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionic domain, and several studies have demonstrated enhanced situational awareness when synthetic vision is used. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have steadily evolved. The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision offers high potential to further enhance cockpit display systems. In particular, given the current trend of adding a 3D perspective view to the SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might grow into a serious problem. This issue becomes important for the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Furthermore, a concept for integrating a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation between the ND and the PFD and supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness and may further raise the safety margin when operating in mountainous areas.

  7. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry, and remote sensing. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning: the laser point cloud data serve as the basis, a digital orthophoto map is used as an auxiliary source, and 3ds Max is used as the basic modeling tool. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is visually faithful and that its accuracy meets the needs of 3D scene construction.

  8. Coarse integral holography approach for real 3D color video displays.

    PubMed

    Chen, J S; Smithwick, Q Y J; Chu, D P

    2016-03-21

    A colour holographic display is considered the ultimate apparatus to provide the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that then are used to reproduce the optical wavefront. The main challenge at present is for the existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product high-bandwidth spatial light modulator to display dynamic holograms with a large space-bandwidth-product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, and an adequate image size and viewing angle, as well as all relevant 3D cues. Our approach is scalable and the prototype can achieve even better performance with continuing advances in hardware components. PMID:27136858

  9. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For the past few years, a renaissance of 3-dimensional cinema has been under way. Even though stereoscopy has been popular for the past 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success, and the same happened again in the 1950s and 1980s of the 20th century. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already signs of declining interest in 3D movies as the discrepancy between expectations and delivered results becomes more evident. From the former booms it is known that after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) follows. This phenomenon is known as the "hype cycle." The everyday experience of technological evolution has conditioned consumers: the expectation that any technical improvement will preserve all previous properties cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: presenting an additional dimension forces concessions in relevant characteristics (e.g., resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (e.g., subjective discomfort, eye strain, spatial disorientation, feelings of nausea). It will be shown that the 3D apparatus (3D glasses or 3D display) is itself the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies are explained.

  10. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a certain seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons `seismoglyphs' to suggest visual objects built from three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and ~real-time earthquake detection. Future work will involve adding phase detections, network triggers and near real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth

  11. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality focus either on polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or on miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and they discard 50% of the viewing light due to the use of polarization. Viewable LCD or LED screens may suffer from low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors delivering 720p-resolution displays in a small form factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics, and consumer gaming. In our design concept, light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.

  12. Characterizing the effects of droplines on target acquisition performance on a 3-D perspective display

    NASA Technical Reports Server (NTRS)

    Liao, Min-Ju; Johnson, Walter W.

    2004-01-01

    The present study investigated the effects of droplines on target acquisition performance on a 3-D perspective display in which participants were required to move a cursor into a target cube as quickly as possible. Participants' performance and coordination strategies were characterized using both Fitts' law and acquisition patterns of the 3 viewer-centered target display dimensions (azimuth, elevation, and range). Participants' movement trajectories were recorded and used to determine movement times for acquisitions of the entire target and of each of its display dimensions. The goodness of fit of the data to a modified Fitts function varied widely among participants, and the presence of droplines did not have observable impacts on the goodness of fit. However, droplines helped participants navigate via straighter paths and particularly benefited range dimension acquisition. A general preference for visually overlapping the target with the cursor prior to capturing the target was found. Potential applications of this research include the design of interactive 3-D perspective displays in which fast and accurate selection and manipulation of content residing at multiple ranges may be a challenge.
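
    The study fits movement times to a modified Fitts function; as a minimal sketch, the snippet below fits the standard Shannon formulation MT = a + b·log2(D/W + 1) by ordinary least squares. The exact modification used in the study is not reproduced here, and the example data are hypothetical.

    ```python
    import numpy as np

    def fit_fitts(distances, widths, movement_times):
        """Least-squares fit of MT = a + b * log2(D / W + 1) (Shannon formulation).

        Returns intercept a (s), slope b (s/bit), and R^2 of the fit.
        """
        D = np.asarray(distances, dtype=float)
        W = np.asarray(widths, dtype=float)
        MT = np.asarray(movement_times, dtype=float)

        index_of_difficulty = np.log2(D / W + 1.0)          # bits
        A = np.column_stack([np.ones_like(index_of_difficulty), index_of_difficulty])
        (a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)

        pred = a + b * index_of_difficulty
        ss_res = np.sum((MT - pred) ** 2)
        ss_tot = np.sum((MT - MT.mean()) ** 2)
        return a, b, 1.0 - ss_res / ss_tot

    # Example with hypothetical target distances (px), widths (px), and times (s).
    a, b, r2 = fit_fitts([100, 200, 400], [20, 20, 40], [0.45, 0.60, 0.68])
    ```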

  13. A guide for human factors research with stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Pinkus, Alan R.

    2015-05-01

    In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
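
    As a minimal sketch of one such calculation, the snippet below converts an on-screen image separation into an angular disparity for a given viewing distance and inter-pupillary distance, using the standard similar-triangle geometry; the numeric values are illustrative, not recommendations from the guide.

    ```python
    import numpy as np

    def angular_disparity_deg(screen_sep_m, view_dist_m, ipd_m=0.063):
        """Angular disparity (degrees) of a point with on-screen separation s.

        Positive (uncrossed) screen separation places the fused point behind the
        screen, negative (crossed) in front.  Uses
            eta = alpha_screen - alpha_point,
        where alpha is the convergence angle subtended at the two eyes.
        """
        alpha_screen = 2.0 * np.arctan(ipd_m / (2.0 * view_dist_m))
        # Perceived depth of the fused point from similar triangles.
        depth = view_dist_m * ipd_m / (ipd_m - screen_sep_m)
        alpha_point = 2.0 * np.arctan(ipd_m / (2.0 * depth))
        return np.degrees(alpha_screen - alpha_point)

    # Example: 10 mm uncrossed separation viewed from 0.7 m with a 63 mm IPD
    # gives roughly +0.8 degrees of angular disparity.
    eta = angular_disparity_deg(0.010, 0.7)
    ```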

  14. Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang

    2014-08-01

    We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie equation of motion; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a 9-view 3D-LCD with a 17.1-inch diagonal, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionality of an ELC lens.

  15. STAR3D: a stack-based RNA 3D structural alignment tool

    PubMed Central

    Ge, Ping; Zhang, Shaojie

    2015-01-01

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying the algorithms of matching 2D structures to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, to take into full account the 2D relations between stacks without the complicated comparison of secondary structures. First, the 3D conserved stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate for both non-homologous and homologous RNAs than other state-of-the-art tools with shorter running time. PMID:26184875

  16. STAR3D: a stack-based RNA 3D structural alignment tool.

    PubMed

    Ge, Ping; Zhang, Shaojie

    2015-11-16

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying the algorithms of matching 2D structures to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, to take into full account the 2D relations between stacks without the complicated comparison of secondary structures. First, the 3D conserved stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate for both non-homologous and homologous RNAs than other state-of-the-art tools with shorter running time. PMID:26184875

  17. High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display.

    PubMed

    Chang, Yu-Cheng; Jen, Tai-Hsiang; Ting, Chih-Hung; Huang, Yi-Pai

    2014-02-10

    A 2D/3D switchable and rotatable autostereoscopic display using a high-resistance liquid-crystal (Hi-R LC) lens array is investigated in this paper. Using high-resistance layers in an LC cell, a gradient electric-field distribution can be formed, which can provide a better lens-like shape of the refractive-index distribution. The advantages of the Hi-R LC lens array are its 2D/3D switchability, rotatability (in the horizontal and vertical directions), low driving voltage (~2 volts) and fast response (~0.6 second). In addition, the Hi-R LC lens array requires only a very simple fabrication process. PMID:24663563

  18. Integration of multiple view plus depth data for free viewpoint 3D display

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji

    2014-03-01

    This paper proposes a method for constructing a reasonably scaled end-to-end free-viewpoint video system that captures multiple view-plus-depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, view-plus-depth data at four viewpoints are captured by the Kinect sensors simultaneously. The captured data are then integrated into a point cloud using the camera parameters, and the point cloud is sampled into volume data consisting of voxels. Since the volume data generated from the point cloud are sparse, they are densified using a global optimization algorithm. The final step is to reconstruct surfaces from the dense volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving the depth maps is also presented.
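
    As a minimal sketch of the integration step described above (per-sensor depth maps back-projected with camera parameters into a common point cloud, then sampled into voxels), assuming pinhole intrinsics and a known 4x4 extrinsic matrix per Kinect; the numeric parameters are placeholders rather than calibration values from the paper.

    ```python
    import numpy as np

    def depth_to_points(depth_m, fx, fy, cx, cy, cam_to_world):
        """Back-project a metric depth map (H, W) into world-space points (N, 3)."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        valid = z > 0
        x = (u[valid] - cx) * z[valid] / fx
        y = (v[valid] - cy) * z[valid] / fy
        pts_cam = np.column_stack([x, y, z[valid], np.ones(valid.sum())])
        return (pts_cam @ cam_to_world.T)[:, :3]    # apply 4x4 extrinsic per sensor

    def voxelize(points, voxel_size, origin):
        """Sample a merged point cloud into a sparse set of occupied voxel indices."""
        idx = np.floor((points - origin) / voxel_size).astype(int)
        return np.unique(idx, axis=0)

    # Merging four sensors (illustrative intrinsics and extrinsics):
    # cloud = np.vstack([depth_to_points(d, 365, 365, 256, 212, T)
    #                    for d, T in zip(depth_maps, extrinsics)])
    # occupied = voxelize(cloud, voxel_size=0.01, origin=cloud.min(axis=0))
    ```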

  19. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle over 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax along a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device; each ray reproduces a light ray that passes through a corresponding point on a virtual object's surface and travels toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, the two eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene in 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  20. Sound localization with head movement: implications for 3-d audio displays

    PubMed Central

    McAnally, Ken I.; Martin, Russell L.

    2014-01-01

    Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increasing azimuth window width, whereas error in determining sound-source lateral angle did not vary with window width. Implications for 3-D audio displays: the utility of a 3-D audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy. PMID:25161605

  1. Implementation of real-time 3D image communication system using stereoscopic imaging and display scheme

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Chul; Kim, Dong-Kyu; Ko, Jung-Hwan; Kim, Eun-Soo

    2004-11-01

    In this paper, a new stereoscopic 3D imaging communication system for real-time teleconferencing applications is implemented using IEEE 1394 digital cameras, an Intel Xeon server computer system, and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate. In the proposed system, two-view images are captured by two digital cameras and processed in the Intel Xeon server. Disparity data are then extracted and transmitted to the client system together with the left image over the network; at the receiving end, the two-view images are reconstructed and displayed on the stereoscopic 3D display system. The program controlling the overall system was developed using the Microsoft DirectShow SDK. Experimental results show that the proposed system can display stereoscopic images in real time with 16-bit full color at a frame rate of 15 fps.

  2. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the positions of the newly added vertices. Up to nine bits of secret data can be embedded per triangle without perceptible changes in the visual quality or the geometric properties of the cover model. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate, and that it resists uniform affine transformations such as rotation and scaling, as well as cropping. The performance of the method is also compared with other existing 3D steganography algorithms.
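
    As a toy illustration of where mesh steganography capacity comes from (not the re-triangulation scheme above), the sketch below hides a bit string by snapping vertex coordinates to even or odd multiples of a quantization step; the step size and helper names are assumptions. Unlike the scheme described in the paper, this toy embedding is not robust to rotation or scaling.

    ```python
    import numpy as np

    def embed_bits(vertices, bits, step=1e-5):
        """Hide a bit string in a mesh by nudging vertex coordinates.

        Generic quantisation-based embedding for illustration only: each
        coordinate is snapped to an even or odd multiple of `step` to encode
        one bit.  Not robust to rotation or scaling.
        """
        v = np.asarray(vertices, dtype=float).copy()
        flat = v.reshape(-1)
        if len(bits) > flat.size:
            raise ValueError("payload too large for this mesh")
        for i, bit in enumerate(bits):
            q = np.round(flat[i] / step)
            if int(q) % 2 != bit:            # force parity to match the bit
                q += 1
            flat[i] = q * step
        return flat.reshape(v.shape)

    def extract_bits(vertices, n_bits, step=1e-5):
        flat = np.asarray(vertices, dtype=float).reshape(-1)
        return [int(np.round(flat[i] / step)) % 2 for i in range(n_bits)]

    # Example: marked = embed_bits(V, [1, 0, 1, 1]); extract_bits(marked, 4)
    ```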

  3. Investigation of a 3D head-mounted projection display using retro-reflective screen.

    PubMed

    Héricz, Dalma; Sarkadi, Tamás; Lucza, Viktor; Kovács, Viktor; Koppa, Pál

    2014-07-28

    We propose a compact head-worn 3D display which provides glasses-free full motion parallax. Two picoprojectors placed on the viewer's head project images on a retro-reflective screen that reflects left and right images to the appropriate eyes of the viewer. The properties of different retro-reflective screen materials have been investigated, and the key parameters of the projection - brightness and cross-talk - have been calculated. A demonstration system comprising two projectors, a screen tracking system and a commercial retro-reflective screen has been developed to test the visual quality of the proposed approach. PMID:25089403

  4. Fast-response switchable lens for 3D and wearable displays.

    PubMed

    Lee, Yun-Han; Peng, Fenglin; Wu, Shin-Tson

    2016-01-25

    We report a switchable lens in which a twisted nematic (TN) liquid crystal cell is used to control the input polarization. Different polarization states lead to different path lengths in the proposed optical system, which in turn result in different focal lengths. This type of switchable lens offers fast response time, low operating voltage, and inherently lower chromatic aberration. Using a pixelated TN panel, we can assign depth information to selected pixels and thus add depth to a 2D image. By cascading three such device structures, we can generate eight different focal depths for 3D displays, wearable virtual/augmented reality, and other head-mounted display devices. PMID:26832545

  5. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, together with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue (mainly Denisyuk) colour holograms and digitally printed colour holograms are described, along with their recent improvements. Panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, are covered as an alternative to silver-halide materials. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images; in particular, new light sources based on RGB LEDs are described, which show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with accurate colour rendering depends strongly on the correct recording technique with optimal laser wavelengths, on the availability of improved panchromatic recording materials, and on the new display light sources.

  6. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality, and CAD design. However, most studies on 3D watermarking and 3D compression have been carried out independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking, based on a wavelet transformation. The compression method is decomposed into two stages, geometric encoding and topological encoding, and the proposed approach inserts a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: the wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme; inserting the watermark into the coarse mesh provides high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the signature after compression of the marked mesh and to transfer protected 3D meshes at minimum size. The experiments and evaluations show that the proposed approach gives good results in terms of compression gain, invisibility, and robustness of the signature against many attacks.

  7. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427

  8. Holographic display system for dynamic synthesis of 3D light fields with increased space bandwidth product.

    PubMed

    Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B

    2016-06-27

    We present a new method for the generation of a dynamic wave field with high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs) each having a comparably low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration which is composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLM and the gap between the active areas of multiple SLMs. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated. PMID:27410593

  9. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users with stereo visualization without the discomfort and inconvenience of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to presenting many views is to measure the position of the observer with a viewer-tracking sensor; viewer tracking is a key component for fluently rendering and accurately projecting the stereo video. To render stereo content with respect to the user's viewpoint and to project it accurately onto the user's left and right eyes, a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. An Edge Orientation Histogram (EOH) feature with Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost detector with Haar features from Intel's OpenCV library is used to detect the human face, with rotated images used to enhance detection accuracy. The viewer-tracking process achieves a frame rate of up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness, and efficiency of the viewer-tracking system are evaluated in this study.
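
    The detection front end relies on Haar-feature AdaBoost face detection from the OpenCV library; the sketch below shows that step with the stock frontal-face cascade shipped with opencv-python. The block-matching tracker and the EOH/Real AdaBoost enhancement described above are not reproduced, and the detector parameters are illustrative.

    ```python
    import cv2

    # Stock Haar cascade shipped with opencv-python (frontal faces).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_viewer(frame_bgr):
        """Return the largest detected face as (x, y, w, h), or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)              # reduce lighting variation
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                         minSize=(60, 60))
        if len(faces) == 0:
            return None
        return max(faces, key=lambda f: f[2] * f[3])

    # The face centre would then drive view rendering and projection for the
    # autostereoscopic display, updated every frame (the paper reports ~15 Hz).
    cap = cv2.VideoCapture(0)                      # grab one frame if a camera exists
    ok, frame = cap.read()
    if ok:
        print(detect_viewer(frame))
    cap.release()
    ```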

  10. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  11. MRI Volume Fusion Based on 3D Shearlet Decompositions.

    PubMed

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but the observers may want to view various contrasts in the same 3D volume. The conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on 3D band limited shearlet transform (3D BLST) is proposed. And this method is evaluated upon MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method has a better performance than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods. PMID:24817880

  12. MRI Volume Fusion Based on 3D Shearlet Decompositions

    PubMed Central

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but the observers may want to view various contrasts in the same 3D volume. The conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on 3D band limited shearlet transform (3D BLST) is proposed. And this method is evaluated upon MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method has a better performance than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods. PMID:24817880
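
    As a minimal sketch of the conventional 3D wavelet fusion baseline mentioned above (not the shearlet-based method itself), the snippet below decomposes two co-registered volumes with PyWavelets, averages the approximation band, takes the per-voxel maximum-magnitude detail coefficients, and reconstructs; the wavelet choice and level count are assumptions.

    ```python
    import numpy as np
    import pywt

    def fuse_volumes_wavelet(vol_a, vol_b, wavelet="db2", level=3):
        """Fuse two co-registered 3D volumes with a 3D discrete wavelet transform.

        Approximation coefficients are averaged; detail coefficients are chosen
        per-voxel by maximum absolute value (a common baseline fusion rule).
        """
        ca = pywt.wavedecn(vol_a, wavelet, level=level)
        cb = pywt.wavedecn(vol_b, wavelet, level=level)

        fused = [(ca[0] + cb[0]) / 2.0]                 # lowest-frequency band
        for da, db in zip(ca[1:], cb[1:]):              # per-level detail dicts
            fused.append({k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
                          for k in da})
        return pywt.waverecn(fused, wavelet)

    # Example with two volumes standing in for the T2* and QSM data:
    # fused = fuse_volumes_wavelet(t2star_volume, qsm_volume)
    ```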

  13. Microlaser-based displays

    NASA Astrophysics Data System (ADS)

    Bergstedt, Robert; Fink, Charles G.; Flint, Graham W.; Hargis, David E.; Peppler, Philipp W.

    1997-07-01

    Laser Power Corporation has developed a new type of projection display, based upon microlaser technology and a novel scan architecture, which provides the foundation for bright, extremely high resolution images. A review of projection technologies is presented, along with the limitations of each and the difficulties they face in generating high-resolution imagery. The design of the microlaser-based projector is discussed along with the advantages of this technology. High-power red, green, and blue microlasers have been designed and developed specifically for use in projection displays. These sources, in combination with a high-resolution, high-contrast modulator, produce a 24-bit color gamut capable of supporting the full range of real-world colors. The new scan architecture, which reduces the required modulation rate and scan speeds, is described. This scan architecture, along with the inherent brightness of the laser, provides the fundamentals necessary to produce a 5120-by-4096-resolution display. The brightness and color uniformity of the display are excellent, allowing the displays to be tiled with far fewer artifacts than in a traditionally tiled display. Applications for the display include simulators, command and control centers, and electronic cinema.

  14. Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling.

    PubMed

    Jia, Jia; Liu, Juan; Jin, Guofan; Wang, Yongtian

    2014-09-20

    Occlusion culling is an important process that produces correct depth cues for observers in holographic displays, whereas current methods suffer from occlusion errors or high computational loads. We propose a fast and effective method for occlusion culling based on multiple light-point sampling planes and an inverse orthographic projection technique. Multiple light-point sampling planes are employed to remove the hidden surfaces for each direction of the view of the three-dimensional (3D) scene by forward orthographic projection, and the inverse orthographic projection technique is used to determine the effective sampling points of the 3D scene. A numerical simulation and an optical experiment are performed. The results show that this approach can realize accurate occlusion effects, smooth motion parallax, and continuous depth using low angular sampling without any extra computation costs. PMID:25322109

  15. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction

    PubMed Central

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-01-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low for each frame, they are located to the nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at a high density causes poor temporal resolution of localization-based superresolution technique and significantly limits its application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphic processing unit (GPU), which speeds up processing 10 times compared with central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314

  16. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction.

    PubMed

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-03-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low for each frame, they are located to the nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at a high density causes poor temporal resolution of localization-based superresolution technique and significantly limits its application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ 1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphic processing unit (GPU), which speeds up processing 10 times compared with central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314

  17. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    Industrial welded metal bellows take the form of a flexible pipeline. The most common form of bellows consists of pairs of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc welding operation can cause dangerous accidents and unpleasant fumes. Furthermore, during welding, workers have to observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding-rod tip and of the bellows fixed on the jig. Welding while looking through a microscope is tiring. To improve the working environment, in which workers sit in an uncomfortable position, and to raise productivity, we introduced 3D display and image processing. The main purpose of the system is not only to maximize industrial productivity and accuracy but also to maintain safety standards through full automation of the work by remote control.

  18. Hybrid Reactor Simulation and 3-D Information Display of BWR Out-of-Phase Oscillation

    SciTech Connect

    Edwards, Robert; Huang, Zhengyu

    2001-06-17

    The real-time hybrid reactor simulation (HRS) capability of the Penn State TRIGA reactor has been expanded to cover boiling water reactor (BWR) out-of-phase behavior. During a BWR out-of-phase oscillation, half of the core can oscillate significantly out of phase with the other half, while the average power reported by the neutronic instrumentation may show a much lower oscillation amplitude. A description of the new HRS is given; three computers are employed to handle all the computations required, including real-time data processing and graph generation. BWR out-of-phase oscillation was successfully simulated. By adjusting the reactivity feedback gains from the boiling channels to the TRIGA reactor and to the first-harmonic-mode power simulation, a limit cycle can be generated in both the reactor power and the simulated first-harmonic power. A 3-D display shows the spatial distributions of the fundamental-mode, first-harmonic, and total powers over the reactor cross section.

  19. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  20. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are acquired with ever larger data sets, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and the quality of the 3D reconstructions. The advantages of the component include low hardware requirements, easy integration, reliable performance, and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  1. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. PMID:26689324

  2. Inclined nanoimprinting lithography-based 3D nanofabrication

    NASA Astrophysics Data System (ADS)

    Liu, Zhan; Bucknall, David G.; Allen, Mark G.

    2011-06-01

    We report a 'top-down' 3D nanofabrication approach combining non-conventional inclined nanoimprint lithography (INIL) with reactive ion etching (RIE), contact molding and 3D metal nanotransfer printing (nTP). This integration of processes enables the production and conformal transfer of 3D polymer nanostructures of varying heights to a variety of other materials including a silicon-based substrate, a silicone stamp and a metal gold (Au) thin film. The process demonstrates the potential of reduced fabrication cost and complexity compared to existing methods. Various 3D nanostructures in technologically useful materials have been fabricated, including symmetric and asymmetric nanolines, nanocircles and nanosquares. Such 3D nanostructures have potential applications such as angle-resolved photonic crystals, plasmonic crystals and biomimicking anisotropic surfaces. This integrated INIL-based strategy shows great promise for 3D nanofabrication in the fields of photonics, plasmonics and surface tribology.

  3. Research on construction of Web 3D-GIS based on Skyline

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Gao, Zhiqiang; Ning, Jicai

    2014-10-01

    This paper studies the construction, publishing, and display of three-dimensional (3D) scenes and their implementation based on the Skyline family of software, combining remote sensing images and DEM data. The SketchUp software is used to build landscape models, and the JavaScript programming language is used to enable web browsing of the 3D scenes. The study provides a useful exploration of the establishment of Web 3D-GIS combining Web GIS technology with 3D visualization technology.

  4. Memory usage reduction and intensity modulation for 3D holographic display using non-uniformly sampled computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao; Liu, Juan; Jia, Jia; Li, Xin; Pan, Yijie; Han, Jian; Hu, Bin; Wang, Yongtian

    2013-12-01

    Real-time holographic display faces the heavy computational load of computer-generated holograms and the need for precise intensity modulation of 3D images reconstructed by phase-only holograms. In this study, we demonstrate a method for reducing memory usage and modulating intensity in 3D holographic display. The proposed method eliminates redundant information in the holograms by employing a non-uniform sampling technique. Combined with the novel look-up table method, a 70% reduction in storage can be reached, and the gray-scale modulation of 3D images reconstructed by phase-only holograms can be extended as well. We perform both numerical simulations and optical experiments to verify the practicability of this method, and the results match well with each other. It is believed that the proposed method can be used in dynamic 3D holographic display and in the design of diffractive phase elements.
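
    For context, the sketch below shows brute-force point-source computation of a phase-only hologram, the baseline that look-up tables and non-uniform sampling are designed to accelerate; the wavelength, pixel pitch, and point geometry are illustrative values, not parameters from the paper.

    ```python
    import numpy as np

    def phase_hologram(points, amplitudes, holo_px=(512, 512), pitch=8e-6,
                       wavelength=532e-9):
        """Compute a phase-only CGH from 3D object points (brute force).

        points     : (N, 3) object-point coordinates in metres, z > 0 measured
                     from the hologram plane
        amplitudes : (N,) per-point amplitudes
        """
        h, w = holo_px
        y, x = np.mgrid[0:h, 0:w]
        x = (x - w / 2) * pitch
        y = (y - h / 2) * pitch
        k = 2.0 * np.pi / wavelength

        field = np.zeros((h, w), dtype=complex)
        rng = np.random.default_rng(0)
        for (xs, ys, zs), a in zip(points, amplitudes):
            r = np.sqrt((x - xs) ** 2 + (y - ys) ** 2 + zs ** 2)
            # A random initial phase per point spreads energy across the hologram.
            field += a * np.exp(1j * (k * r + rng.uniform(0, 2 * np.pi)))
        return np.angle(field)          # phase-only hologram in [-pi, pi]

    # Example: three points about 0.2 m behind the hologram plane (illustrative).
    pts = np.array([[0.0, 0.0, 0.2], [1e-3, 0.0, 0.2], [0.0, 1e-3, 0.21]])
    holo = phase_hologram(pts, np.ones(len(pts)))
    ```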

  5. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

    LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust model based 3D LADAR ATR system which efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model based predictions and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.

  6. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  7. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar-based modeling, close-range photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. The literature shows that, to date, no complete comparative study of this kind is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of these four image-based techniques, and comments on what can and cannot be done with each software package. The study concludes that every software package has some advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  8. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  9. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
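
    As a rough illustration of the geometry/semantics split described above, the sketch below mimics a Map-Reduce style aggregation over hypothetical building geometry records (the map step emits per-district footprint areas, the reduce step sums them), while semantic attributes would be answered by ordinary database queries. The record layout, field names and the chosen statistic are invented for this example and are not taken from the paper.

    ```python
    from functools import reduce
    from collections import defaultdict

    # Hypothetical geometry records as they might be stored in a document database:
    # each record holds a building id, a district tag, and a footprint polygon.
    buildings = [
        {"id": "b1", "district": "north", "footprint": [(0, 0), (10, 0), (10, 20), (0, 20)]},
        {"id": "b2", "district": "north", "footprint": [(0, 0), (5, 0), (5, 5), (0, 5)]},
        {"id": "b3", "district": "south", "footprint": [(0, 0), (8, 0), (8, 8), (0, 8)]},
    ]

    def polygon_area(points):
        """Shoelace formula for a simple polygon given as (x, y) vertices."""
        n = len(points)
        s = sum(points[i][0] * points[(i + 1) % n][1] -
                points[(i + 1) % n][0] * points[i][1] for i in range(n))
        return abs(s) / 2.0

    # Map step: emit (district, footprint_area) pairs from the geometry data.
    mapped = [(b["district"], polygon_area(b["footprint"])) for b in buildings]

    # Reduce step: aggregate the areas per district.
    def reducer(acc, pair):
        key, value = pair
        acc[key] += value
        return acc

    area_per_district = reduce(reducer, mapped, defaultdict(float))
    print(dict(area_per_district))  # {'north': 225.0, 'south': 64.0}
    ```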

  10. Precise Animated 3-D Displays Of The Heart Constructed From X-Ray Scatter Fields

    NASA Astrophysics Data System (ADS)

    McInerney, J. J.; Herr, M. D.; Copenhaver, G. L.

    1986-01-01

    A technique, based upon the interrogation of x-ray scatter, has been used to construct precise animated displays of the three-dimensional surface of the heart throughout the cardiac cycle. With the selection of motion amplification, viewing orientation, beat rate, and repetitive playbacks of isolated segments of the cardiac cycle, these displays are used to directly visualize epicardial surface velocity and displacement patterns, to construct regional maps of old or new myocardial infarction, and to visualize diastolic stiffening of the ventricle associated with acute ischemia. The procedure is non-invasive. Cut-downs or injections are not required.

  11. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    On a time-division 3D display, the time difference between the left and right images makes a person perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images then do not match, making viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by relying on precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate this vertical parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
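
    The paper's exact velocity model is not reproduced here, but a first-order sketch of the underlying idea is as follows: on a time-sequential display the left and right sub-frames are separated by one refresh period, so an object moving vertically on the screen appears displaced between the two eyes by roughly (on-screen vertical velocity) x (sub-frame time offset), and the compensation shifts the later sub-frame back by that amount. All numeric values below are illustrative assumptions.

    ```python
    # First-order estimate (not the authors' exact model) of the motion-induced
    # vertical parallax on a time-division stereoscopic display, plus the
    # corresponding compensation shift.
    refresh_rate_hz = 120.0          # assumed panel refresh rate; L/R sub-frames alternate
    dt = 1.0 / refresh_rate_hz       # time offset between a left and the next right sub-frame

    v_screen_px_per_s = 480.0        # assumed vertical on-screen velocity of the object

    # Apparent vertical parallax introduced by the temporal offset:
    vertical_parallax_px = v_screen_px_per_s * dt       # 4.0 pixels

    # Motion compensation: shift the later sub-frame back along the motion direction
    # so both eyes see the object at the same vertical position.
    compensation_shift_px = -vertical_parallax_px

    print(f"vertical parallax: {vertical_parallax_px:.1f} px, "
          f"compensation: {compensation_shift_px:.1f} px")
    ```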

  12. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    NASA Astrophysics Data System (ADS)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. However, in pathless terrain, driving becomes challenging without stereoscopic perception. Stereoscopic imaging is a long-established technique with well-understood physical and physiological parameters. Recently, a commercial hype has been observed, especially in display techniques, and the commercial market is already flooded with systems based on goggle-aided 3D viewing. However, their use is limited for military applications since goggles are not accepted by military users for several reasons. The proposed uncooled stereoscopic thermal imaging camera with a geometrical resolution of 640x480 pixels fits well to the autostereoscopic display with 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and right eyes of the observer. This allows stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  13. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of the view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light propagation from the display panel pixels through the PB slits to the viewing zone is simulated numerically. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼ 0.7 offers maximized brightness of the view images, while a set corresponding to a Fresnel number of 0.4 ∼ 0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7. PMID:26907057
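
    For readers who want to relate the quoted Fresnel-number ranges to physical slit widths, the sketch below evaluates N_F = a^2/(lambda*L) for a few hypothetical slit apertures. Which propagation distance the authors use, whether a denotes the half- or full slit width, and all numeric values are assumptions of this sketch rather than parameters from the paper.

    ```python
    # Relating the quoted Fresnel-number ranges to hypothetical slit widths via
    # N_F = a^2 / (lambda * L).
    wavelength_m = 550e-9            # green light
    propagation_m = 3e-3             # assumed barrier-to-panel gap used as L

    def fresnel_number(slit_width_m, distance_m, wavelength_m):
        a = slit_width_m / 2.0       # assumed half-width convention
        return a ** 2 / (wavelength_m * distance_m)

    for slit_width_um in (50, 55, 60, 65, 70):
        n_f = fresnel_number(slit_width_um * 1e-6, propagation_m, wavelength_m)
        tag = "within 0.4-0.7" if 0.4 <= n_f <= 0.7 else "outside 0.4-0.7"
        print(f"slit {slit_width_um} um -> N_F = {n_f:.2f} ({tag})")
    ```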

  14. Development of a stereoscopic 3D display system to observe restored heritage

    NASA Astrophysics Data System (ADS)

    Morikawa, Hiroyuki; Kawaguchi, Mami; Kawai, Takashi; Ohya, Jun

    2004-05-01

    The authors have developed a binocular-type display system that allows digital archives of cultural assets to be viewed in their actual environment. The system is designed for installation in locations where such cultural assets were originally present. The viewer sees buildings and other heritage items as they existed historically by looking through the binoculars. Images of the cultural assets are reproduced by stereoscopic 3D CG in cyberspace, and the images are superimposed on actual images in real-time. This system consists of stereoscopic CCD cameras that capture a stereo view of the landscape and LCDs for presentation to the viewer. Virtual cameras, used to render CG images from digital archives, move in synchrony with the actual cameras, so the relative position of the CG images and the landscape on which they are superimposed is always fixed. The system has manual controls for digital zoom. Furthermore, the transparency of the CG images can be altered by the viewer. As a case study for the effectiveness of this system, the authors chose the Heijyoukyou ruins in Nara, Japan. The authors evaluate the sense of immersion, stereoscopic effect, and usability of the system.

  15. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT]

    NASA Astrophysics Data System (ADS)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is hampered by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high quality 3D visualization at PC price points. Combined optimizations in the display driver, panel timing firmware, backlight hardware, eyewear optical stack, and sync mechanism can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. The industry could greatly benefit from the following calls to action: 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100uS latency control (via BT Sig) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for Monitors and TVs to monetize via PC attach.

  16. Arctic Research Mapping Application 3D Geobrowser: Accessing and Displaying Arctic Information From the Desktop to the Web

    NASA Astrophysics Data System (ADS)

    Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.

    2009-12-01

    ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database, providing a very valuable visual tool for science management and logistical planning and for ascertaining who is doing what type of research and where. Users can “fly to” study sites, view receding glaciers in 3D and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place names and year and to display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) build 900, an update of the earlier build 500 application. AGX applications give users the ability to integrate their own spatial data with the various data layers provided by ArcOnline (http://resources.esri.com/arcgisonlineservices). Users can add many types of data including OGC web services without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that provides Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.

  17. Recognition technology research based on 3D fingerprint

    NASA Astrophysics Data System (ADS)

    Tian, Qianxiao; Huang, Shujun; Zhang, Zonghua

    2014-11-01

    Fingerprints have been widely studied and applied to personal recognition in both forensics and civilian applications. However, the fingerprints in widespread use today are identified from 2D (two-dimensional) fingerprint images, and the mapping from 3D (three-dimensional) to 2D loses one dimension of information, which leads to low accuracy and even incorrect recognition. This paper presents a 3D fingerprint recognition method based on the fringe projection technique. A series of fringe patterns generated by software are projected onto a finger surface through a projection system. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. The deformed fringe pattern images give the 3D shape data of the finger and the 3D fingerprint features. By converting the 3D fingerprints to 2D space, traditional 2D fingerprint recognition methods can be applied to 3D fingerprint recognition. Experimental results on measuring and recognizing some 3D fingerprints show the accuracy and availability of the developed 3D fingerprint system.

  18. 3-D target-based distributed smart camera network localization.

    PubMed

    Kassebaum, John; Bulusu, Nirupama; Feng, Wu-Chi

    2010-10-01

    For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution that requires successively showing a feature point-rich 3-D target to all cameras; using the known geometry of the 3-D target, the cameras then estimate and decompose projection matrices to compute their position and orientation relative to the coordinatization of the target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute translations and rotations relating different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable to battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and for passing transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1'' when the 3-D target's feature points fill only 2.9% of the frame area. PMID:20679031
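
    The core step the abstract describes, computing a camera's position and orientation by decomposing a projection matrix estimated from the known 3-D target geometry, can be sketched as follows. This is a generic RQ-based decomposition, not the authors' implementation; the synthetic self-check at the end only verifies the algebra.

    ```python
    import numpy as np

    def rq(M):
        """RQ decomposition of a 3x3 matrix via numpy's QR (standard flip trick)."""
        P = np.fliplr(np.eye(3))                 # row/column reversal matrix
        Q_hat, R_hat = np.linalg.qr((P @ M).T)
        return P @ R_hat.T @ P, P @ Q_hat.T      # (upper-triangular, orthogonal)

    def decompose_projection(Pmat):
        """Split a 3x4 projection matrix into intrinsics K, rotation R and camera
        center C, so that Pmat ~ K [R | -R C].  For real data one would also
        check that det(R) = +1 and handle noise in the estimated Pmat."""
        K, R = rq(Pmat[:, :3])
        D = np.diag(np.sign(np.diag(K)))         # force a positive diagonal on K
        K, R = K @ D, D @ R
        t = np.linalg.solve(K, Pmat[:, 3])
        C = -R.T @ t                             # camera position in target coordinates
        return K / K[2, 2], R, C

    # Synthetic self-check: build a projection matrix from a known camera and recover it.
    K_true = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
    R_true, C_true = np.eye(3), np.array([0.5, -0.2, -2.0])
    P_true = K_true @ np.hstack([R_true, (-R_true @ C_true)[:, None]])
    K_est, R_est, C_est = decompose_projection(P_true)
    print(np.allclose(C_est, C_true))            # True
    ```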

  19. 3D reconstruction based on CT image and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxun; Zhang, Mingmin

    2004-03-01

    Reconstructing a 3-D model of the liver and its internal piping system and simulating the liver surgical operation can increase the accuracy and safety of liver surgery, helping to minimize surgical trauma, shorten operation time, increase the success rate of the operation, reduce medical expenses and promote patient recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system and simulate the liver surgical operation from CT images. A direct volume rendering method establishes the 3D model of the liver. Under the OPENGL environment, a space point rendering method is adopted to display the liver's internal piping system and to simulate the liver surgical operation. Finally, the wavelet transform method is adopted to compress the medical image data.

  20. Software-based geometry operations for 3D computer graphics

    NASA Astrophysics Data System (ADS)

    Sima, Mihai; Iancu, Daniel; Glossner, John; Schulte, Michael; Mamidi, Suman

    2006-02-01

    In order to support a broad dynamic range and a high degree of precision, many of 3D rendering's fundamental algorithms have been traditionally performed in floating-point. However, fixed-point data representation is preferable over floating-point representation in graphics applications on embedded devices where performance is of paramount importance, while the dynamic range and precision requirements are limited due to the small display sizes (current PDA's are 640 × 480 (VGA), while cell-phones are even smaller). In this paper we analyze the efficiency of a CORDIC-augmented Sandbridge processor when implementing a vertex processor in software using fixed-point arithmetic. A CORDIC-based solution for vertex processing exhibits a number of advantages over classical Multiply-and-Accumulate solutions. First, since a single primitive is used to describe the computation, the code can easily be vectorized and multithreaded, and thus fits the major Sandbridge architectural features. Second, since a CORDIC iteration consists of only a shift operation followed by an addition, the computation may be deeply pipelined. Initially, we outline the Sandbridge architecture extension which encompasses a CORDIC functional unit and the associated instructions. Then, we consider rigid-body rotation, lighting, exponentiation, vector normalization, and perspective division (which are some of the most important data-intensive 3D graphics kernels) and propose a scheme to implement them on the CORDIC-augmented Sandbridge processor. Preliminary results indicate that the performance improvement within the extended instruction set ranges from 3× to 10× (with the exception of rigid body rotation).
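
    To make the shift-and-add structure concrete, here is a minimal fixed-point CORDIC rotation in Python (rotation mode). The Q16 format and 16 iterations are arbitrary choices for this sketch and do not reflect the Sandbridge implementation.

    ```python
    import math

    FRAC_BITS = 16
    ITERATIONS = 16
    ANGLES = [int(round(math.atan(2.0 ** -i) * (1 << FRAC_BITS)))
              for i in range(ITERATIONS)]

    # The iterations amplify the vector by ~1.6468, so pre-scale the input by
    # prod(cos(atan(2^-i))) ~= 0.6073 to compensate.
    K_SCALE = 1.0
    for i in range(ITERATIONS):
        K_SCALE *= math.cos(math.atan(2.0 ** -i))
    K_FIX = int(round(K_SCALE * (1 << FRAC_BITS)))

    def to_fixed(value):
        return int(round(value * (1 << FRAC_BITS)))

    def cordic_rotate(x, y, angle_rad):
        """Rotate the fixed-point vector (x, y) by angle_rad using only shifts and adds."""
        x = (x * K_FIX) >> FRAC_BITS
        y = (y * K_FIX) >> FRAC_BITS
        z = to_fixed(angle_rad)
        for i in range(ITERATIONS):
            if z >= 0:                               # micro-rotation towards the target angle
                x, y, z = x - (y >> i), y + (x >> i), z - ANGLES[i]
            else:
                x, y, z = x + (y >> i), y - (x >> i), z + ANGLES[i]
        return x, y

    x, y = cordic_rotate(to_fixed(1.0), to_fixed(0.0), math.pi / 6)
    print(x / (1 << FRAC_BITS), y / (1 << FRAC_BITS))    # ~0.866, ~0.5
    ```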

  1. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm based on conformal mapping of original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in the manifold, is suggested. Experimental results are presented using common 3D face databases that contain a significant amount of expression and pose variations.

  2. Powder-based 3D printing for bone tissue engineering.

    PubMed

    Brunello, G; Sivolella, S; Meneghello, R; Ferroni, L; Gardin, C; Piattelli, A; Zavan, B; Bressan, E

    2016-01-01

    Bone tissue engineered 3-D constructs customized to patient-specific needs are emerging as attractive biomimetic scaffolds to enhance bone cell and tissue growth and differentiation. The article outlines the features of the most common additive manufacturing technologies (3D printing, stereolithography, fused deposition modeling, and selective laser sintering) used to fabricate bone tissue engineering scaffolds. It concentrates, in particular, on the current state of knowledge concerning powder-based 3D printing, including a description of the properties of powders and binder solutions, the critical phases of scaffold manufacturing, and its applications in bone tissue engineering. Clinical aspects and future applications are also discussed. PMID:27086202

  3. A Primitive-Based 3D Object Recognition System

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge in matching the 3D object models to the image data through pre-defined primitives. The primitives we have selected to begin with are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based systems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  4. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes; as a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the increasing development of stereoscopic 3D (S3D) technology, binocular 3D devices have become widely used. Under this circumstance, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D webpages, so an extension of SVG is necessary. In this paper, we describe how to design and implement SVG shapes with a stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, which is an important process in this project, is described as well. The modification of WebKit, made to support the generation of both the left view and the right view at the same time, is also discussed. As shown in the results, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the feeling of depth and thickness, the shapes seem to be real 3D objects away from the screen, rather than simple curves and lines as before.

  5. 3D face recognition based on a modified ICP method

    NASA Astrophysics Data System (ADS)

    Zhao, Kankan; Xi, Jiangtao; Yu, Yanguang; Chicharo, Joe F.

    2011-11-01

    3D face recognition techniques have gained much attention recently and are widely used in security systems, identification systems, access control systems, etc. The core technique in 3D face recognition is to find the corresponding points in different 3D face images. The classic partial Iterative Closest Point (ICP) method iteratively aligns the two point sets, repeatedly taking the closest points as the corresponding points in each iteration. After several iterations, the corresponding points can be obtained accurately. However, if two 3D face images from the same person have different scales, the classic partial ICP does not work. In this paper we propose a modified partial ICP method in which the scaling effect is considered to achieve 3D face recognition. We design a 3x3 diagonal matrix as the scale matrix in each iteration of the classic partial ICP. The probing face image, multiplied by the scale matrix, keeps a similar scale to the reference face image. Therefore, we can accurately determine the corresponding points even when the scales of the probing image and the reference image are different. The 3D face images in our experiments are acquired by a 3D data acquisition system based on Digital Fringe Projection Profilometry (DFPP). A 3D database consists of 30 groups of images; each group contains three images with the same scale, acquired from the same person with different views, and the scale may differ between groups. The experimental results show that our proposed method can achieve 3D face recognition, especially when the scales of the probing image and the reference image are different.
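
    A rough sketch of the kind of scale-aware ICP iteration the abstract describes is given below: correspondences by nearest neighbor, a rigid Kabsch update, and a per-axis least-squares refit of a 3x3 diagonal scale matrix. This alternating scheme is an illustration under stated assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def scaled_icp(probe, reference, iterations=30):
        """ICP variant carrying a 3x3 diagonal scale matrix in each iteration:
        (1) closest-point correspondences, (2) rigid Kabsch alignment,
        (3) per-axis least-squares refit of the diagonal scale."""
        tree = cKDTree(reference)
        S = np.eye(3)                                      # diagonal scale matrix
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            p = (S @ probe.T).T                            # scaled probe points
            q = reference[tree.query((R @ p.T).T + t)[1]]  # closest reference points
            # Kabsch: best rigid transform mapping the scaled probe onto its matches.
            pc, qc = p - p.mean(0), q - q.mean(0)
            U, _, Vt = np.linalg.svd(pc.T @ qc)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = q.mean(0) - R @ p.mean(0)
            # Refit each diagonal scale entry by least squares in probe coordinates.
            r = (R.T @ (q - t).T).T                        # matches mapped back
            for k in range(3):
                denom = np.dot(probe[:, k], probe[:, k])
                if denom > 0:
                    S[k, k] = np.dot(probe[:, k], r[:, k]) / denom
        return S, R, t
    ```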

  6. Progress in off-plane computer-generated waveguide holography for near-to-eye 3D display

    NASA Astrophysics Data System (ADS)

    Jolly, Sundeep; Savidis, Nickolaos; Datta, Bianca; Bove, V. Michael; Smalley, Daniel

    2016-03-01

    Waveguide holography refers to the use of holographic techniques for the control of guided-wave light in integrated optical devices (e.g., off-plane grating couplers and in-plane distributed Bragg gratings for guided-wave optical filtering). Off-plane computer-generated waveguide holography (CGWH) has also been employed in the generation of simple field distributions for image display. We have previously depicted the design and fabrication of a binary-phase CGWH operating in the Raman-Nath regime for the purposes of near-to-eye 3-D display and as a precursor to a dynamic, transparent flat-panel guided-wave holographic video display. In this paper, we describe design algorithms and fabrication techniques for multilevel phase CGWHs for near-to-eye 3-D display.

  7. Steering knuckle diameter measurement based on optical 3D scanning

    NASA Astrophysics Data System (ADS)

    Song, Li-mei; Li, Da-peng; Chang, Yu-lan; Xi, Jiang-tao; Guo, Qing-hua

    2014-11-01

    To achieve accurate measurements, two methods are proposed in this paper: creating a fitting hole for internal diameter (CFHID) measurement and establishing a multi-sectional curve for external diameter (EMCED) measurement, both based on computer vision principles and three-dimensional (3D) reconstruction. The methods are able to highlight the 3D characteristics of the scanned object and to achieve accurate measurement of the 3D data, which creates favorable conditions for realizing the reverse design and 3D reconstruction of the scanned object. These methods can also be applied in dangerous work environments or in situations where traditional contact measurement cannot meet the demands, and they can improve the security of the measurement.

  8. Enhanced perception of terrain hazards in off-road path choice: stereoscopic 3D versus 2D displays

    NASA Astrophysics Data System (ADS)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Myles, Kimberly

    1997-06-01

    Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain. Some off-road driving accidents have been attributed to inadequate perception of terrain features due to using 2D displays (which do not provide binocular-parallax cues to depth perception). In this study, photographic images of terrain scenes were presented first in conventional 2D video, and then in stereoscopic 3D video. The percentage of possible correct answers for 2D and 3D were: 2D pretest equals 52%, 3D pretest equals 80%, 2D posttest equals 48%, 3D posttest equals 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operation, both on foot and in combat vehicles.

  9. A modular cross-platform GPU-based approach for flexible 3D video playback

    NASA Astrophysics Data System (ADS)

    Olsson, Roger; Andersson, Håkan; Sjöström, Mårten

    2011-03-01

    Different compression formats for stereo and multiview 3D video are being standardized, and software players capable of decoding and presenting these formats on different display types are a vital part of the commercialization and evolution of 3D video. However, the number of publicly available software video players capable of decoding and playing multiview 3D video is still quite limited. This paper describes the design and implementation of a GPU-based real-time 3D video playback solution, built on top of cross-platform, open source libraries for video decoding and hardware accelerated graphics. A software architecture is presented that efficiently processes and presents high definition 3D video in real time and in a flexible manner supports both current 3D video formats and emerging standards. Moreover, a set of bottlenecks in the processing of 3D video content in a GPU-based real-time 3D video playback solution is identified and discussed.

  10. Feature-Based Quality Evaluation of 3D Point Clouds - Study of the Performance of 3D Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Ridene, T.; Goulette, F.; Chendeb, S.

    2013-08-01

    The production of realistic 3D map databases is continuously growing. We studied an approach to producing 3D mapping databases based on the fusion of heterogeneous 3D data. In this context, a rigid registration process was performed. Before starting the modeling process, we need to validate the quality of the registration results, which is one of the most difficult and open research problems. In this paper, we suggest a new method for evaluating 3D point clouds based on feature extraction and comparison with a 2D reference model. This method is based on two metrics: binary and fuzzy.

  11. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    The construction of 3D geological visualization systems has attracted much concern in the GIS, computer modeling, simulation and visualization fields. Such systems not only can effectively support geological interpretation and analysis work, but can also help improve professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQLSERVER database on a web server. Second, the JDBC data access component was utilized to provide the capability to access the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Finally, borehole data acquired from a geological survey were used to test the system, and the test results have shown that the methods of this paper have certain application value.

  12. Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Park, Inkyu; Kim, Sung-Kyu

    2014-09-22

    We present methodologies for determining the optimum viewing distance (OVD) for a multi-view auto-stereoscopic 3D display system with a parallax barrier. The OVD can be efficiently determined as the viewing distance where statistical deviation of centers of quasi-linear distributions of illuminance at central viewing zones is minimized using local areas of a display panel. This method can offer reduced computation time because it does not use the entire area of the display panel during a simulation, but still secures considerable accuracy. The method is verified in experiments, showing its applicability for efficient optical characterization. PMID:25321731
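
    The selection criterion itself is simple to express in code. The sketch below assumes that, for each trial viewing distance, the centers of the central viewing zones have already been estimated from a handful of local panel areas (by simulation or measurement, which is outside this sketch); the OVD is then the distance with the smallest spread of those centers. The data and array shapes are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def optimum_viewing_distance(candidate_distances, zone_center_positions):
        """Pick the distance whose central-viewing-zone centers are most tightly
        clustered (minimum statistical deviation across local panel areas).

        candidate_distances   : (D,) array of trial viewing distances
        zone_center_positions : (D, A) array, center of the central viewing zone
                                estimated from each of A local panel areas at
                                each trial distance
        """
        deviations = np.std(zone_center_positions, axis=1)
        best = int(np.argmin(deviations))
        return candidate_distances[best], deviations[best]

    # Illustrative data only: three trial distances, five local panel areas.
    distances = np.array([0.55, 0.60, 0.65])            # meters
    centers = np.array([[0.0, 2.1, 4.3, 6.2, 8.5],      # centers drift apart
                        [0.1, 0.1, 0.0, -0.1, 0.1],     # tightly clustered
                        [0.0, -1.8, -3.9, -6.1, -8.2]])
    print(optimum_viewing_distance(distances, centers))  # picks 0.60
    ```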

  13. Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays.

    PubMed

    Takaki, Yasuhiro; Urano, Yohei; Nishio, Hiroyuki

    2012-11-19

    The discontinuity of motion parallax offered by multi-view displays was assessed by subjective evaluation. A super multi-view head-up display, which provides dense viewing points and has short-, medium-, and long-distance display ranges, was used. The results showed that discontinuity perception depended on the ratio of the image shift between adjacent parallax images to the pixel pitch of the three-dimensional (3D) images, and on the crosstalk between viewing points. When the ratio was less than 0.2 and the crosstalk was small, the discontinuity was not perceived. When the ratio was greater than 1 and the crosstalk was small, the discontinuity was perceived, and the resolution of the 3D images decreased by a factor of two. When the crosstalk was large, the discontinuity was not perceived even when the ratio was 1 or 2; however, the resolution decreased by a factor of two or more. PMID:23187574

  14. SIFT algorithm-based 3D pose estimation of femur.

    PubMed

    Zhang, Xuehe; Zhu, Yanhe; Li, Changle; Zhao, Jie; Li, Ge

    2014-01-01

    To address the lack of 3D space information in the digital radiography of a patient femur, a pose estimation method based on 2D-3D rigid registration is proposed in this study. The method uses two digital radiography images to realize the preoperative 3D visualization of a fractured femur. Compared with pure Digital Radiography or Computed Tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs in the frontal and lateral images of the patient femur and the universal femur are obtained by using the Scale Invariant Feature Transform method. Then, the 3D pose estimation registration parameters of the femur are calculated by using the Iterative Closest Point (ICP) algorithm. Finally, the deviation between the six-degree-of-freedom parameters calculated by the proposed method and the preset posture parameters is used to evaluate registration accuracy. After registration, the rotation error is less than 1.5°, and the translation error is less than 1.2 mm, which indicates that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgery planning. PMID:25226990

  15. Evaluating Biomaterial- and Microfluidic-Based 3D Tumor Models.

    PubMed

    Carvalho, Mariana R; Lima, Daniela; Reis, Rui L; Correlo, Vitor M; Oliveira, Joaquim M

    2015-11-01

    Cancer is a major cause of morbidity and mortality worldwide, with a disease burden estimated to increase over the coming decades. Disease heterogeneity and limited information on cancer biology and disease mechanisms are aspects that 2D cell cultures fail to address. Here, we review the current ‘state-of-the-art’ in 3D tissue-engineering (TE) models developed for, and used in, cancer research. We assess the potential for scaffold-based TE models and microfluidics to fill the gap between 2D models and clinical application. We also discuss recent advances in combining the principles of 3D TE models and microfluidics, with a special focus on biomaterials and the most promising chip-based 3D models. PMID:26603572

  16. A series of new lanthanide fumarates displaying three types of 3-D frameworks.

    PubMed

    Tan, Xiao-Feng; Zhou, Jian; Fu, Lianshe; Xiao, Hong-Ping; Zou, Hua-Hong; Tang, Qiuling

    2016-03-28

    A series of lanthanide fumarates [Sm2(fum)3(H2fum)(H2O)2] (1, H2fum = fumaric acid), [Ln2(fum)3-(H2O)4]·3H2O {Ln = Tb (2a), Dy (2b)} and [Ln2(fum)3(H2O)4] {Ln = Y (3a), Ho (3b), Er (3c), Tm (3d)} were prepared by the hydrothermal method and their structures were classified into three types. The 3-D framework of compound 1 contains a 1-D infinite [Sm-O-Sm]n chain built up from the connection of SmO8(H2O) polyhedra sharing edges via three -COO group bridges of fumarate ligands, which is further constructed into a 3-D network structure with three kinds of fumarate ligands. Compounds 2a-b are isostructural and consist of a 3-D porous framework with 0-D cavities for the accommodation of chair-like hexameric (H2O)6 clusters. Compounds 3a-d are isostructural and have a 3-D network structure remarkably different from those of 1 and 2a-b, due to the different coordination numbers for the Ln(3+) ions and distinct fumarate ligand bridging patterns. A systematic investigation of seven lanthanide fumarates and five reported compounds revealed that the well-known lanthanide contraction has a significant influence on the formation of lanthanide fumarates. The magnetic properties of compounds 1, 2b and 3b-3d were also investigated. PMID:26894939

  17. Server-based approach to web visualization of integrated 3-D medical image data.

    PubMed Central

    Poliakov, A. V.; Albright, E.; Corina, D.; Ojemann, G.; Martin, R. F.; Brinkley, J. F.

    2001-01-01

    Although computer processing power and network bandwidth are rapidly increasing, the average desktop is still not able to rapidly process large datasets such as 3-D medical image volumes. We have therefore developed a server-side approach to this problem, in which a high performance graphics server accepts commands from web clients to load, process and render 3-D image volumes and models. The renderings are saved as 2-D snapshots on the server, from which they are loaded and displayed on the client. User interactions with the graphic interface on the client side are translated into additional commands to manipulate the 3-D scene, after which the server re-renders the scene and sends a new image to the client. Example forms-based and Java-based clients are described for a brain mapping application, but the techniques should be applicable to multiple domains where 3-D medical image visualization is of interest. PMID:11825248

  18. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. This paper presents an algorithm that applies a watermarking technique to the non-ROI of the medical image while preserving the ROI, using a 3D watermark based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each of which possesses its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  19. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly due to the fact that the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity of the left and the right view on a flat screen instead of a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  20. Shape control in wafer-based aperiodic 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Jeong, Hyeon-Ho; Mark, Andrew G.; Gibbs, John G.; Reindl, Thomas; Waizmann, Ulrike; Weis, Jürgen; Fischer, Peer

    2014-06-01

    Controlled local fabrication of three-dimensional (3D) nanostructures is important to explore and enhance the function of single nanodevices, but is experimentally challenging. We present a scheme based on e-beam lithography (EBL) written seeds, and glancing angle deposition (GLAD) grown structures to create nanoscale objects with defined shapes but in aperiodic arrangements. By using a continuous sacrificial corral surrounding the features of interest we grow isolated 3D nanostructures that have complex cross-sections and sidewall morphology that are surrounded by zones of clean substrate.

  1. GPU-based 3D lower tree wavelet video encoder

    NASA Astrophysics Data System (ADS)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Drummond, Leroy Anthony; Migallón, Hector

    2013-12-01

    The 3D-DWT is a mathematical tool of increasing importance in applications that require efficient processing of huge amounts of volumetric information. Applications like professional video editing, video surveillance, multi-spectral satellite imaging and HQ video delivery would prefer 3D-DWT encoders able to reconstruct a frame as fast as possible. In this article, we introduce a fast GPU-based encoder which uses the 3D-DWT transform and lower trees. We also present an exhaustive analysis of the use of GPU memory. Our proposal shows a good trade-off between R/D performance, coding delay (as fast as MPEG-2 for high definition) and memory requirements (up to 6 times less memory than x264).
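
    For readers unfamiliar with the transform stage, the sketch below applies a separable 3-D DWT to a small synthetic clip using PyWavelets and performs a crude coefficient thresholding; the lower-tree significance coding and the GPU implementation discussed in the paper are not shown, and the clip, wavelet and threshold are illustrative choices.

    ```python
    import numpy as np
    import pywt   # PyWavelets

    # Treat a short grayscale clip as a 3D volume (time, height, width) and apply
    # a 3-level separable 3D DWT, the transform stage the encoder builds on.
    video = np.random.rand(16, 64, 64).astype(np.float32)

    coeffs = pywt.wavedecn(video, wavelet='haar', level=3)
    approx = coeffs[0]                       # coarsest LLL subband
    print(approx.shape)                      # (2, 8, 8)

    # Crude "compression": zero small detail coefficients, then reconstruct.
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < 0.05] = 0.0
    recon = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format='wavedecn'),
                          wavelet='haar')
    print(np.abs(recon - video).mean())      # reconstruction error after thresholding
    ```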

  2. Robust model-based 3D/3D fusion using sparse matching for minimally invasive surgery.

    PubMed

    Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan

    2013-01-01

    Classical surgery is being disrupted by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm CT and C-arm fluoroscopy are routinely used for intra-operative guidance. However, intra-operative modalities have limited image quality of the soft tissue and a reliable assessment of the cardiac anatomy can only be made by injecting contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a novel sparse matching approach for fusing high quality pre-operative CT and non-contrasted, non-gated intra-operative C-arm CT by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the pre-operative CT and mapped to the intra-operative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments demonstrate that our model-based fusion approach has an average execution time of 2.9 s, while the accuracy lies within expert user confidence intervals. PMID:24505663

  3. Perception-based shape retrieval for 3D building models

    NASA Astrophysics Data System (ADS)

    Zhang, Man; Zhang, Liqiang; Takis Mathiopoulos, P.; Ding, Yusi; Wang, Hao

    2013-01-01

    With the help of 3D search engines, a large number of 3D building models can be retrieved freely online. A serious disadvantage of most rotation-insensitive shape descriptors is their inability to distinguish between two 3D building models which are different at their main axes, but appear similar when one of them is rotated. To resolve this problem, we present a novel upright-based normalization method which not only correctly rotates such building models, but also greatly simplifies and accelerates the abstraction and the matching of building models' shape descriptors. Moreover, the abundance of architectural styles significantly hinders the effective shape retrieval of building models. Our research has shown that buildings with different designs are not well distinguished by the widely recognized shape descriptors for general 3D models. Motivated by this observation and to further improve the shape retrieval quality, a new building matching method is introduced and analyzed based on concepts found in the field of perception theory and the well-known Light Field descriptor. The resulting normalized building models are first classified using the qualitative shape descriptors of Shell and Unevenness, which outline integral geometrical and topological information. These models are then put in an orderly fashion with the help of an improved quantitative shape descriptor, which we term the Horizontal Light Field Descriptor since it assembles detailed shape characteristics. To accurately evaluate the proposed methodology, an enlarged building shape database which extends previous well-known shape benchmarks was implemented, as well as a model retrieval system supporting inputs from 2D sketches and 3D models. Various experimental performance evaluation results have shown that, as compared to previous methods, retrievals employing the proposed matching methodology are faster and more consistent with human recognition of spatial objects. In addition these performance

  4. Macroscopic Carbon Nanotube-based 3D Monoliths.

    PubMed

    Du, Ran; Zhao, Qiuchen; Zhang, Na; Zhang, Jin

    2015-07-15

    Carbon nanotubes (CNTs) are one of the most promising carbon allotropes with incredible diverse physicochemical properties, thereby enjoying continuous worldwide attention since their discovery about two decades ago. From the point of view of practical applications, assembling individual CNTs into macroscopic functional and high-performance materials is of paramount importance. For example, multiscaled CNT-based assemblies including 1D fibers, 2D films, and 3D monoliths have been developed. Among all of these, monolithic 3D CNT architectures with porous structures have attracted increasing interest in the last few years. In this form, theoretically all individual CNTs are well connected and fully expose their surfaces. These 3D architectures have huge specific surface areas, hierarchical pores, and interconnected conductive networks, resulting in enhanced mass/electron transport and countless accessible active sites for diverse applications (e.g. catalysis, capacitors, and sorption). More importantly, the monolithic form of 3D CNT assemblies can impart additional application potentials to materials, such as free-standing electrodes, sensors, and recyclable sorbents. However, scaling the properties of individual CNTs to 3D assemblies, improving use of the diverse, structure-dependent properties of CNTs, and increasing the performance-to-cost ratio are great unsolved challenges for their real commercialization. This review aims to provide a comprehensive introduction of this young and energetic field, i.e., CNT-based 3D monoliths, with a focus on the preparation principles, current synthetic methods, and typical applications. Opportunities and challenges in this field are also presented. PMID:25740457

  5. 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation.

    PubMed

    Yeom, Han-Ju; Kim, Hee-Jae; Kim, Seong-Bok; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Park, Jae-Hyeung

    2015-12-14

    We propose a bar-type three-dimensional holographic head mounted display using two holographic optical elements. Conventional stereoscopic head mounted displays may suffer from eye fatigue because the images presented to each eye are two-dimensional ones, which causes mismatch between the accommodation and vergence responses of the eye. The proposed holographic head mounted display delivers three-dimensional holographic images to each eye, removing the eye fatigue problem. In this paper, we discuss the configuration of the bar-type waveguide head mounted displays and analyze the aberration caused by the non-symmetric diffraction angle of the holographic optical elements which are used as input and output couplers. Pre-distortion of the hologram is also proposed in the paper to compensate the aberration. The experimental results show that proposed head mounted display can present three-dimensional see-through holographic images to each eye with correct focus cues. PMID:26698993

  6. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way to automatically recognize, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247
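
    The classification stage can be sketched as follows: the probe feature vector is coded as a sparse combination of gallery feature vectors and assigned to the class whose atoms reconstruct it with the smallest residual. The l1-regularized Lasso coder used here is a stand-in for the paper's l1-minimization solver, and all names and the alpha value are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(gallery_features, gallery_labels, probe, alpha=0.01):
        """Sparse-representation classification sketch: code the probe as a sparse
        combination of gallery atoms, then pick the class whose atoms reconstruct
        the probe with the smallest residual."""
        D = np.asarray(gallery_features, dtype=float).T       # columns = gallery atoms
        D /= np.linalg.norm(D, axis=0, keepdims=True)          # unit-norm atoms
        y = np.asarray(probe, dtype=float)

        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        coder.fit(D, y)                                         # sparse coding of the probe
        x = coder.coef_

        labels = np.asarray(gallery_labels)
        best_label, best_residual = None, np.inf
        for c in np.unique(labels):
            xc = np.where(labels == c, x, 0.0)                 # keep only class-c coefficients
            residual = np.linalg.norm(y - D @ xc)
            if residual < best_residual:
                best_label, best_residual = c, residual
        return best_label, best_residual
    ```

    Normalizing the gallery atoms keeps the residual comparison fair across classes of different feature magnitudes; this is a common SRC convention rather than something stated in the abstract.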

  7. Holographic display of real existing objects from their 3D Fourier spectrum

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Sando, Yusuke

    2005-02-01

    A method for synthesizing computer-generated holograms of real existing objects is described. A series of projection images are recorded both vertically and horizontally with an incoherent light source and a color CCD camera. According to the principle of computed tomography (CT), the 3-D Fourier spectrum is calculated from several projection images of the objects, and the Fresnel computer-generated hologram (CGH) is synthesized using part of the 3-D Fourier spectrum. This method has the following advantages. First, reconstructed images free of blur in any direction are obtained owing to the two-dimensional scanning used in recording. Second, since simple projection images of the objects rather than interference fringes are recorded, a coherent light source is not necessary for recording. The use of a color CCD in recording enables us to record and reconstruct colorful objects. Finally, we demonstrate color reconstruction of objects both numerically and optically.

  8. 3D web based learning of medical equipment employed in intensive care units.

    PubMed

    Cetin, Aydın

    2012-02-01

    In this paper, both synchronous and asynchronous web-based learning of 3D models of medical equipment used in a hospital intensive care unit is described, delivered through the Moodle course management system. The 3D medical equipment models were designed with 3ds Max 2008, converted to ASE format, and made interactive for display with Viewpoint-Enliven. The 3D models are embedded in an HTML web page with dynamic interactivity (rotating, panning and zooming by dragging the mouse over the images), and descriptive information is attached to each 3D model using XML. A 15-hour pilot test course was given to the technicians responsible for the intensive care unit at the Medical Devices Repairing and Maintenance Center (TABOM) of the Turkish High Specialized Hospital. PMID:20703738

  9. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical information. Secondly, a 3D-model library is built up using digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  10. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure built around the DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipelined processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system), which provides a preemptive kernel and a powerful configuration tool with which we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
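
    As a concrete, hedged illustration of the fringe-analysis core that such a pipeline implements, the sketch below shows the standard four-step phase-shifting formula of phase measurement profilometry followed by a simple row-wise phase unwrapping; the synthetic fringe images and the unwrapping strategy are illustrative, not the authors' DSP implementation.

```python
# Four-step phase-shifting profilometry on synthetic fringes (illustrative).
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Shifts of 0, pi/2, pi, 3*pi/2 give phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

x = np.linspace(0, 6 * np.pi, 256)
phi_true = np.tile(x, (64, 1))                               # known test phase
shots = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

phi_wrapped = wrapped_phase(*shots)
phi_unwrapped = np.unwrap(phi_wrapped, axis=1)               # simple row-wise unwrapping
print(np.allclose(phi_unwrapped, phi_true))                  # True for this smooth ramp
```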

  11. Flight tests of advanced 3D-PFD with commercial flat-panel avionics displays and EGPWS system

    NASA Astrophysics Data System (ADS)

    He, Gang; Feyereisen, Thea; Gannon, Aaron; Wilson, Blake; Schmitt, John; Wyatt, Sandy; Engels, Jary

    2005-05-01

    This paper describes flight trials of the Honeywell Advanced 3D Primary Flight Display System. The system employs a large-format flat-panel avionics display presently used in Honeywell PRIMUS EPIC flight-deck products and is coupled to an on-board EGPWS system. The heads-down primary flight display consists of dynamic primary-flight attitude information, flight-path and approach symbology similar to Honeywell HUD2020 heads-up displays, and a synthetic 3D perspective-view terrain environment generated from Honeywell's EGPWS terrain data. Numerous flights were conducted on board a Honeywell Citation V aircraft and a significant amount of pilot feedback was collected, a portion of which is summarized in this paper. The system development is aimed at leveraging several well-established avionics components (HUD, EGPWS, large-format displays) to produce an integrated system that significantly reduces pilot workload, increases overall situation awareness, and is more beneficial to flight operations than achievable with separate systems.

  12. Effective declutter of complex flight displays using stereoptic 3-D cueing

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Nold, Dean E.

    1994-01-01

    The application of stereo technology to new, integrated pictorial display formats has been effective in situational awareness enhancements, and stereo has been postulated to be effective for the declutter of complex informational displays. This paper reports a full-factorial workstation experiment performed to verify the potential benefits of stereo cueing for the declutter function in a simulated tracking task. The experimental symbology was designed similar to that of a conventional flight director, although the format was an intentionally confused presentation that resulted in a very cluttered dynamic display. The subject's task was to use a hand controller to keep a tracking symbol, an 'X', on top of a target symbol, another X, which was being randomly driven. In the basic tracking task, both the target symbol and the tracking symbol were presented as red X's. The presence of color coding was used to provide some declutter, thus making the task more reasonable to perform. For this condition, the target symbol was coded red, and the tracking symbol was coded blue. Noise conditions, or additional clutter, were provided by the inclusion of randomly moving, differently colored X symbols. Stereo depth, which was hypothesized to declutter the display, was utilized by placing any noise in a plane in front of the display monitor, the tracking symbol at screen depth, and the target symbol behind the screen. The results from analyzing the performances of eight subjects revealed that the stereo presentation effectively offsets the cluttering effects of both the noise and the absence of color coding. The potential of stereo cueing to declutter complex informational displays has therefore been verified; this ability to declutter is an additional benefit from the application of stereoptic cueing to pictorial flight displays.

  13. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  14. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  15. Vertically dispersive holographic screens and autostereoscopic displays in 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Magalhães, Lucas V. B.; Llovera, Juan J.; Li, Li M.

    2011-05-01

    In this work we describe a setup employed for the recording of vertically dispersive holographic screens that can be used for medical applications. We show how to obtain holographic screens with areas up to 1200 cm², a focal length of 25 ± 2 cm and a diffraction efficiency of 7.2%. We analyze the technique employed and the holographic screens obtained. Using this screen we describe a setup for the projection of magnetic resonance or tomographic images. We also describe and present the first results of an autostereoscopic system for 3D medical imaging.

  16. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the development of 3D technology. In this paper we propose several factors affecting human perception of depth as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene movement characteristics. They play important roles in the viewer's visual perception: if there are many objects moving at a certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (Mean Square Error) of different blocks is taken into consideration within a frame and between frames of the 3D stereoscopic video. The depth frame is divided into a number of blocks, with overlapping, shared pixels (half a block) in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of all these data is then characterized by kurtosis with regard to the regions the human eye mainly gazes at, and weight values are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when it is applied between the current and previous frames, the temporal variation and scene movement variation are obtained. The three factors are linearly combined to yield the objective assessment value of a 3D video directly, and the coefficients of the three factors are estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
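
    The block-wise MSE and kurtosis weighting can be read as follows; the sketch is one plausible interpretation of the abstract, with the block size, overlap and the final normalisation chosen for illustration rather than taken from the paper.

```python
# One plausible reading of the block-MSE / kurtosis weighting (illustrative).
import numpy as np
from scipy.stats import kurtosis

def block_mse(depth_cur, depth_prev, block=16):
    """Per-block MSE between two depth frames, blocks overlapping by half a block."""
    step = block // 2
    h, w = depth_cur.shape
    mses = []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            d = depth_cur[r:r + block, c:c + block] - depth_prev[r:r + block, c:c + block]
            mses.append(np.mean(d ** 2))
    return np.asarray(mses)

def kurtosis_weight(mses, scale=10.0):
    """Normalise the excess kurtosis of the block-MSE distribution into (0, 1)."""
    k = kurtosis(mses, fisher=True)
    return 1.0 / (1.0 + np.exp(-k / scale))   # illustrative normalisation choice

rng = np.random.default_rng(2)
prev = rng.random((128, 128))
cur = prev + 0.05 * rng.standard_normal((128, 128))
print(kurtosis_weight(block_mse(cur, prev)))  # temporal-variation weight for this pair
```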

  17. Structural description and combined 3D display for superior analysis of cerebral vascularity from MRA

    NASA Astrophysics Data System (ADS)

    Szekely, Gabor; Koller, Thomas; Kikinis, Ron; Gerig, Guido

    1994-09-01

    Medical image analysis has to support the clinician's ability to identify, manipulate and quantify anatomical structures. On scalar 2D image data, a human observer is often superior to computer-assisted analysis, but the interpretation of vector-valued data or data combined from different modalities, especially in 3D, can benefit from computer assistance. The problem of how to convey the complex information to the clinician is often tackled by providing colored multimodality renderings. We propose to go a step beyond by supplying a suitable modelling of anatomical and functional structures encoding important shape features and physical properties. The multiple attributes regarding geometry, topology and function are carried by the symbolic description and can be interactively queried and edited. Integrated 3D rendering of object surfaces and symbolic representation acts as a visual interface to allow interactive communication between the observer and the complex data, providing new possibilities for quantification and therapy planning. The discussion is guided by the prototypical example of investigating the cerebral vasculature in MRA volume data. Geometric, topological and flow-related information can be assessed by interactive analysis on a computer workstation, providing otherwise hidden qualitative and quantitative information. Several case studies demonstrate the potential usage for structure identification, definition of landmarks, assessment of topology for catheterization, and local simulation of blood flow.

  18. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights. PMID:25099967

  19. Description of a 3D display with motion parallax and direct interaction

    NASA Astrophysics Data System (ADS)

    Tu, J.; Flynn, M. F.

    2014-03-01

    We present a description of a time sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time.

  20. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.

  1. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated using the image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  2. Parameters of the human 3D gaze while observing portable autostereoscopic display: a model and measurement results

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hanhela, Marianne; Gotchev, Atanas; Utirainen, Timo; Jumisko-Pyykkö, Satu; Hannuksela, Miska

    2012-02-01

    We present an approach to measure and model the parameters of human point-of-gaze (PoG) in 3D space. Our model considers the following three parameters: position of the gaze in 3D space, volume encompassed by the gaze and time for the gaze to arrive on the desired target. Extracting the 3D gaze position from binocular gaze data is hindered by three problems. The first problem is the lack of convergence - due to micro saccadic movements the optical lines of both eyes rarely intersect at a point in space. The second problem is resolution - the combination of short observation distance and limited comfort disparity zone typical for a mobile 3D display does not allow the depth of the gaze position to be reliably extracted. The third problem is measurement noise - due to the limited display size, the noise range is close to the range of properly measured data. We have developed a methodology which allows us to suppress most of the measurement noise. This allows us to estimate the typical time which is needed for the point-of-gaze to travel in x, y or z direction. We identify three temporal properties of the binocular PoG. The first is reaction time, which is the minimum time that the vision reacts to a stimulus position change, and is measured as the time between the event and the time the PoG leaves the proximity of the old stimulus position. The second is the travel time of the PoG between the old and new stimulus position. The third is the time-to-arrive, which is the time combining the reaction time, travel time, and the time required for the PoG to settle in the new position. We present the method for filtering the PoG outliers, for deriving the PoG center from binocular eye-tracking data and for calculating the gaze volume as a function of the distance between PoG and the observer. As an outcome from our experiments we present binocular heat maps aggregated over all observers who participated in a viewing test. We also show the mean values for all temporal

  3. Gesture Interaction Browser-Based 3D Molecular Viewer.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for the instruction of users in less IT-oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is done with a Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics-related problems in both biomedical research and education. PMID:27350455

  4. 3D GIS spatial operation based on extended Euler operators

    NASA Astrophysics Data System (ADS)

    Xu, Hongbo; Lu, Guonian; Sheng, Yehua; Zhou, Liangchen; Guo, Fei; Shang, Zuoyan; Wang, Jing

    2008-10-01

    At present, implementations of 3D spatial operations based on specific data structures lack universality and cannot handle non-manifold cases. The ISO/DIS 19107 standard only presents the definition of Boolean operators and set operators for topological relationship queries, and OGC GeoXACML gives formal definitions for several set functions without implementation detail. To address these problems, this paper builds on the mathematical foundation of cell complex theory, supported by a non-manifold data structure and drawing on related research in non-manifold geometric modeling. Firstly, according to the non-manifold Euler-Poincaré formula, six extended Euler operators and their inverse operators are constructed to create, update and delete 3D spatial elements, together with several pairs of supplementary Euler operators that make it convenient to implement advanced functions. Secondly, the topological element operation sequences of the Boolean and set operations, as well as the set functions defined in GeoXACML, are transformed into combinations of extended Euler operators, which separates the upper-level functions from the lower-level data structure. Lastly, an underground 3D GIS prototype system is developed, in which the practicability and credibility of the extended Euler operators for 3D GIS presented in this paper are validated.
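
    For readers unfamiliar with Euler operators, the toy sketch below shows the bookkeeping idea in the plain manifold case: each operator adjusts the vertex, edge and face counts so that the Euler characteristic stays consistent. It is only an illustration; the paper's six extended operators work with the non-manifold Euler-Poincaré formula and a richer set of elements, which are not reproduced here.

```python
# Toy bookkeeping for manifold Euler operators (illustrative only).
from dataclasses import dataclass

@dataclass
class Counts:
    V: int = 0  # vertices
    E: int = 0  # edges
    F: int = 0  # faces

    def chi(self) -> int:
        """Euler characteristic V - E + F."""
        return self.V - self.E + self.F

def mev(c: Counts) -> None:
    """Make-Edge-Vertex: adds one vertex and one edge (chi unchanged)."""
    c.V += 1
    c.E += 1

def mef(c: Counts) -> None:
    """Make-Edge-Face: adds one edge and splits a face in two (chi unchanged)."""
    c.E += 1
    c.F += 1

cube = Counts(V=8, E=12, F=6)  # closed genus-0 shell, chi == 2
print(cube.chi())              # 2
mev(cube)
mef(cube)
print(cube.chi())              # still 2
```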

  5. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  6. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera (Kinect 360) and an implementation of Active Appearance Models (AAM). We also proposed a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system to diagnose facial paralysis.

  7. The Use Of Computerized Tomographic (CT) Scans For 3-D Display And Prosthesis Construction

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; Woodruff, Tracey J.; Beumer, John

    1985-06-01

    The construction of preformed cranial prostheses for large cranial bony defects is both error prone and time consuming. We discuss a method used for the creation of cranial prostheses from automatically extracted bone contours taken from Computerized Tomographic (CT) scans. Previous methods of prosthesis construction have relied on the making of a mold directly from the region of cranial defect. The use of image processing, bone contour extraction, and three-dimensional display allowed us to create a better fitting prosthesis while reducing patient surgery time. This procedure involves direct bone margin extraction from the digital CT images followed by head model construction from serial plots of the bone margin. Three-dimensional data display is used to verify the integrity of the skull data set prior to model construction. Once created, the model is used to fabricate a custom fitting prosthesis which is then surgically implanted. This procedure is being used with patients in the Maxillofacial Prosthetic Clinic at UCLA and this paper details the technique.

  8. High-power, red-emitting DBR-TPL for possible 3d holographic or volumetric displays

    NASA Astrophysics Data System (ADS)

    Feise, D.; Fiebig, C.; Blume, G.; Pohl, J.; Eppich, B.; Paschke, K.

    2013-03-01

    To create holographic or volumetric displays, it is highly desirable to move from conventional imaging projection displays, where the light is filtered from a constant source, towards flying-spot approaches, where the correct amount of light is generated for every pixel. The only light sources available for such an approach, which requires visible, high output power with a spatial resolution beyond conventional lamps, are lasers. When adding the market demands for high electro-optical conversion efficiency, direct electrical modulation capability, compactness, reliability and mass-production compliance, this leaves only semiconductor diode lasers. We present red-emitting tapered diode lasers (TPL) emitting a powerful, visible, nearly diffraction-limited beam (M²(1/e²) < 1.5) with a single longitudinal mode, which are well suited for 3d holographic and volumetric imaging. The TPLs achieved an optical output power in excess of 500 mW in the wavelength range between 633 nm and 638 nm. The simultaneous inclusion of a distributed Bragg reflector (DBR) surface grating provides wavelength selectivity and hence a spectral purity with a width Δλ < 5 pm. These properties allow dense spectral multiplexing to achieve output powers of several watts, which would be required for 3d volumetric display applications.

  9. A 3D Sensor Based on a Profilometrical Approach

    PubMed Central

    Pedraza-Ortega, Jesús Carlos; Gorrostieta-Hurtado, Efren; Delgado-Rosas, Manuel; Canchola-Magdaleno, Sandra L.; Ramos-Arreguin, Juan Manuel; Aceves Fernandez, Marco A.; Sotomayor-Olmedo, Artemio

    2009-01-01

    An improved method is presented that uses Fourier- and wavelet-transform-based analysis to infer and extract 3D information from an object onto which a fringe pattern is projected. The method requires a single image containing a sinusoidal white-light fringe pattern projected onto the object; the known spatial frequency of this pattern is used to avoid discontinuities in high-frequency fringes. Several computer simulations and experiments have been carried out to verify the analysis. The comparison between numerical simulations and experiments has proved the validity of the proposed method. PMID:22303176
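
    A hedged numpy sketch of the single-image Fourier fringe-analysis idea is given below: one spectral side-lobe around the known carrier frequency is isolated, and the object-induced phase is recovered after removing the carrier. The synthetic phase bump, the filter width and the row-wise processing are illustrative choices, not the authors' exact pipeline.

```python
# Single-image Fourier fringe analysis on a synthetic pattern (illustrative).
import numpy as np

n, f0 = 256, 16                                   # image size, carrier cycles per image
x = np.arange(n)
xx, yy = np.meshgrid(x, x)
phase = 2.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / 2500.0)  # "object" phase bump
fringe = 128 + 100 * np.cos(2 * np.pi * f0 * xx / n + phase)

spec = np.fft.fft(fringe, axis=1)                 # row-wise spectrum
keep = np.zeros_like(spec)
keep[:, f0 - 6 : f0 + 7] = spec[:, f0 - 6 : f0 + 7]   # isolate one side-lobe at the carrier
analytic = np.fft.ifft(keep, axis=1)
wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * xx / n))  # remove the carrier
recovered = np.unwrap(wrapped, axis=1)

print(round(float(np.abs(recovered - phase).max()), 3))  # residual, small vs the 2 rad bump
```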

  10. 3D Chemical Similarity Networks for Structure-Based Target Prediction and Scaffold Hopping.

    PubMed

    Lo, Yu-Chen; Senese, Silvia; Damoiseaux, Robert; Torres, Jorge Z

    2016-08-19

    Target identification remains a major challenge for modern drug discovery programs aimed at understanding the molecular mechanisms of drugs. Computational target prediction approaches like 2D chemical similarity searches have been widely used but are limited to structures sharing high chemical similarity. Here, we present a new computational approach called chemical similarity network analysis pull-down 3D (CSNAP3D) that combines 3D chemical similarity metrics and network algorithms for structure-based drug target profiling, ligand deorphanization, and automated identification of scaffold hopping compounds. In conjunction with 2D chemical similarity fingerprints, CSNAP3D achieved a >95% success rate in correctly predicting the drug targets of 206 known drugs. Significant improvement in target prediction was observed for HIV reverse transcriptase (HIVRT) compounds, which consist of diverse scaffold hopping compounds targeting the nucleotidyltransferase binding site. CSNAP3D was further applied to a set of antimitotic compounds identified in a cell-based chemical screen and identified novel small molecules that share a pharmacophore with Taxol and display a Taxol-like mechanism of action, which were validated experimentally using in vitro microtubule polymerization assays and cell-based assays. PMID:27285961
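
    For orientation, the snippet below illustrates only the 2D fingerprint-similarity ingredient that the abstract says is used in conjunction with CSNAP3D: two compounds are linked in a chemical-similarity network when their Tanimoto similarity exceeds a threshold. It assumes RDKit is available; the SMILES strings and the threshold are arbitrary examples, and the 3D similarity metrics and the network pull-down step of CSNAP3D itself are not reproduced.

```python
# 2D Tanimoto similarity with RDKit (assumed installed); SMILES are arbitrary examples.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str) -> float:
    """Morgan-fingerprint Tanimoto similarity between two molecules."""
    fps = [
        AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
        for s in (smiles_a, smiles_b)
    ]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

# Compounds would be connected in a similarity network if this value
# exceeds a chosen threshold (e.g. 0.7).
print(round(tanimoto("CC(=O)Oc1ccccc1C(=O)O", "OC(=O)c1ccccc1O"), 3))  # aspirin vs salicylic acid
```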

  11. Fast vision-based catheter 3D reconstruction.

    PubMed

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D

    2016-07-21

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length and bending and orientation angles for known circular and elliptical catheter shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm, 3.64 deg for the added noise) of the proposed high speed algorithms. PMID:27352011
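
    The geometric core of any two-camera reconstruction is triangulation. The sketch below shows plain linear (DLT) triangulation of a single 3D point from two arbitrarily positioned cameras with known projection matrices; it is only the underlying building block, not the paper's closed-form reconstruction of whole quadratic curves or its high-speed segmentation.

```python
# Linear (DLT) triangulation of one point from two known cameras (illustrative).
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel coordinates in two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                     # dehomogenise

# Toy check with two synthetic cameras observing a known point.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])   # translated camera
X_true = np.array([0.05, -0.03, 1.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(np.allclose(triangulate(P1, P2, uv1, uv2), X_true[:3]))        # True
```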

  12. Rule-based automatic segmentation for 3-D coronary arteriography

    NASA Astrophysics Data System (ADS)

    Sarwal, Alok; Truitt, Paul; Ozguner, Fusun; Zhang, Qian; Parker, Dennis L.

    1992-03-01

    Coronary arteriography is a technique used for evaluating the state of coronary arteries and assessing the need for bypass surgery and angioplasty. The present clinical application of this technology is based on the use of a contrast medium for manual radiographic visualization. This method is inaccurate due to varying interpretation of the visual results. Coronary-arteriography-based quantitation is impractical in a clinical setting without the use of automatic techniques applied to the 3-D reconstruction of the arterial tree. Such a system will provide an easily reproducible method for following the temporal changes in coronary morphology. Labeling the arteries and establishing the correspondence between multiple views are necessary for all subsequent processing required for 3-D reconstruction. This work presents a rule-based expert system used for automatic labeling and segmentation of the arterial branches across multiple views. X-ray data of two and three views of human subjects and a pig arterial cast have been used for this research.

  13. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length and bending and orientation angles for known circular and elliptical catheter shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm, 3.64 deg for the added noise) of the proposed high speed algorithms.

  14. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables the capture of a full HD depth image with depth accuracy at the mm scale, which is the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a concurrent color/depth sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
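
    The depth computation behind such a TOF system reduces to a phase-to-distance conversion. The sketch below gives the textbook relations at the 20 MHz modulation frequency quoted above; it is a back-of-the-envelope illustration, not the prototype's processing chain.

```python
# Textbook time-of-flight relations at a 20 MHz modulation frequency.
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency, Hz

def unambiguous_range(f_mod=F_MOD):
    """Largest depth measurable without phase wrapping: c / (2 f)."""
    return C / (2.0 * f_mod)

def depth_from_phase(phase_rad, f_mod=F_MOD):
    """Depth for a measured phase delay in [0, 2*pi): c * phi / (4 * pi * f)."""
    return C * phase_rad / (4.0 * np.pi * f_mod)

print(unambiguous_range())          # about 7.5 m at 20 MHz
print(depth_from_phase(np.pi))      # half the unambiguous range
```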

  15. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  16. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  17. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  18. Tri-color composite volume H-PDLC grating and its application to 3D color autostereoscopic display.

    PubMed

    Wang, Kangni; Zheng, Jihong; Gao, Hui; Lu, Feiyue; Sun, Lijia; Yin, Stuart; Zhuang, Songlin

    2015-11-30

    A tri-color composite volume holographic polymer dispersed liquid crystal (H-PDLC) grating and its application to 3-dimensional (3D) color autostereoscopic display are reported in this paper. The composite volume H-PDLC grating consists of three different period volume H-PDLC sub-gratings. The longer period diffracts red light, the medium period diffracts the green light, and the shorter period diffracts the blue light. To record three different period gratings simultaneously, two photoinitiators are employed. The first initiator consists of methylene blue and p-toluenesulfonic acid and the second initiator is composed of Rose Bengal and N-phenyglycine. In this case, the holographic recording medium is sensitive to entire visible wavelengths, including red, green, and blue so that the tri-color composite grating can be written simultaneously by harnessing three different color laser beams. In the experiment, the red beam comes from a He-Ne laser with an output wavelength of 632.8 nm, the green beam comes from a Verdi solid state laser with an output wavelength of 532 nm, and the blue beam comes from a He-Cd laser with an output wavelength of 441.6 nm. The experimental results show that diffraction efficiencies corresponding to red, green, and blue colors are 57%, 75% and 33%, respectively. Although this diffraction efficiency is not perfect, it is high enough to demonstrate the effect of 3D color autostereoscopic display. PMID:26698768

  19. Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis

    NASA Astrophysics Data System (ADS)

    Mah, J.; Claire, S.; Steve, M.

    2009-05-01

    Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to the lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above ground field trial was conducted at a blocky road cut with well defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario) where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m away from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is by manual measurement using a compass. In order to be accepted as a substitute for this method, the LCS should be capable of performing at least to the capabilities of manual measurements. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. Underground, two main joint
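
    Converting a patch of scanned points into a fracture orientation that can be compared with compass readings is essentially a plane fit plus an angle conversion. The sketch below is one hedged way to do it (SVD plane fit, then dip and dip direction from the normal), assuming an x = East, y = North, z = Up frame; strike conventions vary, so only dip and dip direction are returned.

```python
# Plane fit and dip / dip-direction from scanned points; x = East, y = North, z = Up assumed.
import numpy as np

def fracture_orientation(points):
    """points: (n, 3) array on one joint surface; returns (dip, dip direction) in degrees."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[2]                                   # unit normal of the best-fit plane
    if n[2] < 0:                                # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_direction

# Toy check: a plane dipping 30 degrees towards the East (dip direction 090).
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
zs = -np.tan(np.radians(30.0)) * xs             # elevation drops towards +x (East)
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
print(fracture_orientation(pts))                # approximately (30.0, 90.0)
```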

  20. 3D reconstruction of rotational video microscope based on patches

    NASA Astrophysics Data System (ADS)

    Ma, Shijie; Qu, Yufu

    2015-11-01

    Due to its small field of view and shallow depth of field, a microscope can only capture 2D images of an object. In order to observe the three-dimensional structure of a micro object, a microscopy image reconstruction algorithm based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two aspects: first, it increases the number of propagation directions; second, during expansion, different expansion radii and numbers of expansions are set according to the angle between the normal vector of the seed patch and the direction vector of the line passing through the seed patch center and the camera center. Compared with PMVS, the new algorithm produces three times as many 3D points, and the holes on the vertical sides are also eliminated.

  1. Experimental observation of moiré angles in parallax barrier 3D displays.

    PubMed

    Saveljev, Vladimir; Kim, Sung-Kyu

    2014-07-14

    Angles of visible moiré patterns are observed experimentally. Experiments were made across the angular range 0-90° over a wide range of parameters. Two kinds of clusterization were observed, ray and discrete. In rational cells (LCD pixels), the moiré patterns appear at a few fixed discrete angles. A list of preferable moiré-less angles is presented based on the experimental data, and preferable areas in the parameter space are found. The problem of minimizing the moiré effect is formulated as a Diophantine inequality with complex coefficients. The classification of moiré angles based on the probability of the moiré effect can be practically useful. PMID:25090529
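
    The geometry behind these observations can be summarised with the usual frequency-vector (beat) model of moiré between two line gratings, sketched below; it is an illustration of why near-parallel, near-equal-pitch gratings produce large visible fringes, not the paper's Diophantine formulation.

```python
# Frequency-vector (beat) model of moire between two line gratings (illustrative).
import numpy as np

def moire(period1, period2, angle_deg):
    """Return the moire period and the direction of the moire frequency vector (degrees)."""
    a = np.radians(angle_deg)
    k1 = (1.0 / period1) * np.array([1.0, 0.0])               # grating 1 frequency vector
    k2 = (1.0 / period2) * np.array([np.cos(a), np.sin(a)])   # grating 2, rotated by angle_deg
    km = k1 - k2                                              # beat (moire) frequency vector
    period_m = 1.0 / np.linalg.norm(km)
    direction = np.degrees(np.arctan2(km[1], km[0]))          # fringes run perpendicular to this
    return period_m, direction

# Nearly equal pitches at a small mutual angle give a large, clearly visible moire period.
print(moire(0.10, 0.10, 2.0))   # period of roughly 2.9 mm for 0.1 mm pitch gratings
```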

  2. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
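
    The mean-subtraction step itself is simple to state in code. The sketch below, with a random array standing in for a spatially low-pass subband of the 3D wavelet decomposition, removes the mean of each spatial plane before encoding and adds it back on reconstruction; the actual wavelet transform and entropy coder are not reproduced.

```python
# Mean subtraction on the spatial planes of a (stand-in) spatially low-pass subband.
import numpy as np

rng = np.random.default_rng(3)
lowpass_subband = 50.0 + rng.standard_normal((16, 32, 32))   # (spectral, y, x)

plane_means = lowpass_subband.mean(axis=(1, 2))              # one mean per spectral plane
zero_mean = lowpass_subband - plane_means[:, None, None]     # data handed to the encoder

# The means travel as side information; the decoder simply adds them back.
reconstructed = zero_mean + plane_means[:, None, None]
print(np.allclose(reconstructed, lowpass_subband))           # True
print(np.abs(zero_mean.mean(axis=(1, 2))).max())             # ~0: planes are now zero-mean
```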

  3. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both Flight-Guidance SVS and other aeronautical applications such as SMGCS (Surface Movement and Guidance Systems) and flight simulation. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large numbers of airport objects with high spatial resolution and accuracy in a much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  4. 3D Healpix-based Skymaps Visualization using Java

    NASA Astrophysics Data System (ADS)

    Joliet, E.; O'Mullane, W.; Górski, K. M.; Banday, A. J.; Hivon, E.; Carr, R.

    2008-08-01

    HEALPix {http://healpix.jpl.nasa.gov/} is useful for data analysis and visualization. Gaia is the ESA space astrometry cornerstone mission, the main objective of which is to astrometrically and spectro-photometrically map 10^9 celestial objects (mostly in our galaxy) with unprecedented accuracy. The data will be organized and stored in a central database at ESAC (Spain). The data treatment needs data analysis and visualization tools to accomplish a successful mission. The 3D HEALPix-based skymaps are used as part of the interactive diagnostic tools as well as within the core processing. We present the HEALPix Java library and give some examples of its use within Gaia and Planck processing.

  5. Texture-Based Correspondence Display

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael

    2004-01-01

    Texture-based correspondence display is a methodology to display corresponding data elements in visual representations of complex multidimensional, multivariate data. Texture is utilized as a persistent medium to contain a visual representation model and as a means to create multiple renditions of data where color is used to identify correspondence. Corresponding data elements are displayed over a variety of visual metaphors in a normal rendering process without adding extraneous linking metadata creation and maintenance. The effectiveness of visual representation for understanding data is extended to the expression of the visual representation model in texture.

  6. Evaluation of regression-based 3-D shoulder rhythms.

    PubMed

    Xu, Xu; Dickerson, Clark R; Lin, Jia-Hua; McGorry, Raymond W

    2016-08-01

    The movements of the humerus, the clavicle, and the scapula are not completely independent. The coupled pattern of movement of these bones is called the shoulder rhythm. To date, multiple studies have focused on providing regression-based 3-D shoulder rhythms, in which the orientations of the clavicle and the scapula are estimated by the orientation of the humerus. In this study, six existing regression-based shoulder rhythms were evaluated by an independent dataset in terms of their predictability. The datasets include the measured orientations of the humerus, the clavicle, and the scapula of 14 participants over 118 different upper arm postures. The predicted orientations of the clavicle and the scapula were derived from applying those regression-based shoulder rhythms to the humerus orientation. The results indicated that none of those regression-based shoulder rhythms provides consistently more accurate results than the others. For all the joint angles and all the shoulder rhythms, the RMSE are all greater than 5°. Among those shoulder rhythms, the scapula lateral/medial rotation has the strongest correlation between the predicted and the measured angles, while the other thoracoclavicular and thoracoscapular bone orientation angles only showed a weak to moderate correlation. Since the regression-based shoulder rhythm has been adopted for shoulder biomechanical models to estimate shoulder muscle activities and structure loads, there needs to be further investigation on how the predicted error from the shoulder rhythm affects the output of the biomechanical model. PMID:26253991

  7. Hydrogel-based reinforcement of 3D bioprinted constructs.

    PubMed

    Melchels, Ferry P W; Blokzijl, Maarten M; Levato, Riccardo; Peiffer, Quentin C; Ruijter, Mylène de; Hennink, Wim E; Vermonden, Tina; Malda, Jos

    2016-01-01

    Progress within the field of biofabrication is hindered by a lack of suitable hydrogel formulations. Here, we present a novel approach based on a hybrid printing technique to create cellularized 3D printed constructs. The hybrid bioprinting strategy combines a reinforcing gel for mechanical support with a bioink to provide a cytocompatible environment. In comparison with thermoplastics such as ε-polycaprolactone, the hydrogel-based reinforcing gel platform enables printing at cell-friendly temperatures, targets the bioprinting of softer tissues and allows for improved control over degradation kinetics. We prepared amphiphilic macromonomers based on poloxamer that form hydrolysable, covalently cross-linked polymer networks. Dissolved at a concentration of 28.6% w/w in water, it functions as the reinforcing gel, while a 5% w/w gelatin-methacryloyl based gel is utilized as the bioink. This strategy allows for the creation of complex structures, where the bioink provides a cytocompatible environment for encapsulated cells. Cell viability of equine chondrocytes encapsulated within printed constructs remained largely unaffected by the printing process. The versatility of the system is further demonstrated by the ability to tune the stiffness of printed constructs between 138 and 263 kPa, as well as to tailor the degradation kinetics of the reinforcing gel from several weeks up to more than a year. PMID:27431861

  8. Generation and use of measurement-based 3-D dose distributions for 3-D dose calculation verification.

    PubMed

    Stern, R L; Fraass, B A; Gerhardsson, A; McShan, D L; Lam, K L

    1992-01-01

    A 3-D radiation therapy treatment planning system calculates dose to an entire volume of points and therefore requires a 3-D distribution of measured dose values for quality assurance and dose calculation verification. To measure such a volumetric distribution with a scanning ion chamber is prohibitively time consuming. A method is presented for the generation of a 3-D grid of dose values based on beam's-eye-view (BEV) film dosimetry. For each field configuration of interest, a set of BEV films at different depths is obtained and digitized, and the optical densities are converted to dose. To reduce inaccuracies associated with film measurement of megavoltage photon depth doses, doses on the different planes are normalized using an ion-chamber measurement of the depth dose. A 3-D grid of dose values is created by interpolation between BEV planes along divergent beam rays. This matrix of measurement-based dose values can then be compared to calculations over the entire volume of interest. This method is demonstrated for three different field configurations. Accuracy of the film-measured dose values is determined by 1-D and 2-D comparisons with ion chamber measurements. Film and ion chamber measurements agree within 2% in the central field regions and within 2.0 mm in the penumbral regions. PMID:1620042
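
    The interpolation step lends itself to a short sketch. The following is a minimal illustration of sampling a dose value between two BEV film planes along a divergent ray from a point source; the function names, the uniform pixel spacing and the use of source-to-plane distances are assumptions made for the example, not the authors' implementation.

        import numpy as np

        def bilinear_sample(plane, x_mm, y_mm, spacing):
            """Bilinearly sample a 2D dose plane at (x_mm, y_mm); origin at the array centre."""
            ny, nx = plane.shape
            col = np.clip(x_mm / spacing + (nx - 1) / 2.0, 0, nx - 1)
            row = np.clip(y_mm / spacing + (ny - 1) / 2.0, 0, ny - 1)
            c0, r0 = min(int(col), nx - 2), min(int(row), ny - 2)
            fc, fr = col - c0, row - r0
            top = plane[r0, c0] * (1 - fc) + plane[r0, c0 + 1] * fc
            bot = plane[r0 + 1, c0] * (1 - fc) + plane[r0 + 1, c0 + 1] * fc
            return top * (1 - fr) + bot * fr

        def dose_at_point(planes, plane_depths, x, y, depth, spacing=1.0):
            """Dose at lateral (x, y) mm and source-to-point distance `depth` (mm).

            `planes` are BEV film dose arrays measured at source-to-plane distances
            `plane_depths` (sorted ascending). The point is projected onto the two
            bracketing planes along the divergent ray from the point source, sampled
            there, and linearly interpolated in depth.
            """
            plane_depths = np.asarray(plane_depths, float)
            i = int(np.clip(np.searchsorted(plane_depths, depth), 1, len(plane_depths) - 1))
            d0, d1 = plane_depths[i - 1], plane_depths[i]
            dose0 = bilinear_sample(planes[i - 1], x * d0 / depth, y * d0 / depth, spacing)
            dose1 = bilinear_sample(planes[i], x * d1 / depth, y * d1 / depth, spacing)
            w = (depth - d0) / (d1 - d0)
            return (1 - w) * dose0 + w * dose1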

  9. Energy harvesting “3-D knitted spacer” based piezoelectric textiles

    NASA Astrophysics Data System (ADS)

    Anand, S.; Soin, N.; Shah, T. H.; Siores, E.

    2016-07-01

    The piezoelectric effect in poly(vinylidene fluoride), PVDF, was discovered over four decades ago and since then, significant work has been carried out aiming at the production of high β-phase fibres and their integration into fabric structures for energy harvesting. However, little work has been done in the area of production of “true piezoelectric fabric structures” based on flexible polymeric materials such as PVDF. In this work, we demonstrate “3-D knitted spacer” technology based all-fibre piezoelectric fabrics as power generators and energy harvesters. The knitted single-structure piezoelectric generator consists of high β-phase (~80%) piezoelectric PVDF monofilaments as the spacer yarn interconnected between silver (Ag) coated polyamide multifilament yarn layers acting as the top and bottom electrodes. The novel and unique textile structure provides an output power density in the range of 1.10-5.10 μW cm-2 at applied impact pressures in the range of 0.02-0.10 MPa, thus providing significantly higher power outputs and efficiencies over the existing 2-D woven and nonwoven piezoelectric structures. The high energy efficiency, mechanical durability and comfort of the soft, flexible and all-fibre based power generator is highly attractive for a variety of potential applications such as wearable electronic systems and energy harvesters charged from the ambient environment or by human movement.

  10. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, the 3D model is the navigation environment preferred by users according to the survey. Therefore, a 3D model based indoor navigation system is developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, which are built to support the localization, navigation and visualization functions of the system. There are three main strengths of this system: it stores all the data needed in one database and processes most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, has high performance when visualizing the 3D model on a mobile display.

  11. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for the semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the matched 3D model to the object in the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases in order to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  12. Image-Based 3d Reconstruction and Analysis for Orthodontia

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on the measurement of tooth parameters and on designing the ideal arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are placed on the teeth, and a wire of given shape, which is clamped by these brackets to produce the forces necessary to move every tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying a standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurements of the tooth parameters needed for adequate planning, the design of correct tooth positions and the monitoring of the treatment process. The developed technique applies photogrammetric means for teeth arch 3D model generation, bracket position determination and tooth shifting analysis.

  13. Gis-Based Smart Cartography Using 3d Modeling

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography in current use is often not suitable for GIS because it is not geometrically and topologically correctly structured. The research aim is to structure and organize a numeric cartography in 3D for GIS and turn it into CityGML standardized features. The work is framed around a first phase of methodological analysis aimed at underlining which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifications, the translation into formal content was investigated using a proprietary interchange software package (SketchUp) to support guideline implementations for generating a 3D GIS structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality against the above standards and edited where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The 3D GIS structure has been managed in a GIS environment to run further spatial analyses and energy performance estimates that are not achievable in a 2D environment. In particular, geometrical building parameters (footprint, volume, etc.) are computed and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out to deal with asbestos and home renovation charges and to show how the built 3D city model can support municipal managers with risk diagnosis of the present situation and the development of strategies for sustainable redevelopment.

  14. Magnetically controllable 3D microtissues based on magnetic microcryogels.

    PubMed

    Liu, Wei; Li, Yaqian; Feng, Siyu; Ning, Jia; Wang, Jingyu; Gou, Maling; Chen, Huijun; Xu, Feng; Du, Yanan

    2014-08-01

    Microtissues on the scale of several hundred microns are a promising cell culture configuration resembling the functional tissue units in vivo. In contrast to conventional cell culture, handling of microtissues poses new challenges such as medium exchange, purification and maintenance of the microtissue integrity. Here, we developed magnetic microcryogels to assist microtissue formation with enhanced controllability and robustness. The magnetic microcryogels were fabricated on-chip by cryogelation and micro-molding which could endure extensive external forces such as fluidic shear stress during pipetting and syringe injection. The magnetically controllable microtissues were applied to constitute a novel separable 3D co-culture system realizing functional enhancement of the hepatic microtissues co-cultured with the stromal microtissues and easy purification of the hepatic microtissues for downstream drug testing. The magnetically controllable microtissues with pre-defined shapes were also applied as building blocks to accelerate the tissue assembly process under magnetic force for bottom-up tissue engineering. Finally, the magnetic microcryogels could be injected in vivo as cell delivery vehicles and tracked by MRI. The injectable magnetic microtissues maintained viability at the injection site indicating good retention and potential applications for cell therapy. The magnetic microcryogels are expected to significantly promote the microtissues as a promising cellular configuration for cell-based applications such as in drug testing, tissue engineering and regenerative therapy. PMID:24736804

  15. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
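
    The volume-based similarity measures reported above are straightforward to compute from binary masks; a minimal sketch follows, with a toy pair of overlapping spheres standing in for the automated and gold-standard segmentations.

        import numpy as np

        def overlap_measures(auto_mask, gold_mask):
            """Volume-overlap measures between an automated and a hand-segmented mask.

            Both inputs are boolean 3D arrays of the same shape.
            """
            auto = np.asarray(auto_mask, bool)
            gold = np.asarray(gold_mask, bool)
            inter = np.logical_and(auto, gold).sum()
            dice = 2.0 * inter / (auto.sum() + gold.sum())
            jaccard = inter / np.logical_or(auto, gold).sum()
            tpvf = inter / gold.sum()                      # true positive volume fraction
            rvd = (auto.sum() - gold.sum()) / gold.sum()   # relative volume difference
            return dice, jaccard, tpvf, rvd

        # toy example: two overlapping spheres on a small grid
        z, y, x = np.ogrid[:64, :64, :64]
        a = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
        b = (x - 35) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
        print([round(float(v), 3) for v in overlap_measures(a, b)])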

  16. 3D affine registration using teaching-learning based optimization

    NASA Astrophysics Data System (ADS)

    Jani, Ashish; Savsani, Vimal; Pandya, Abhijit

    2013-09-01

    3D image registration is an emerging research field in the study of computer vision. In this paper, two effective global optimization methods are considered for the 3D registration of point clouds. Experiments were conducted by applying each algorithm, and their performance was evaluated with respect to rigid, similarity and affine transformations. The algorithms were compared on their average ability to find the global solution that minimizes the error, expressed as the distance between the model cloud and the data cloud. The parameters of the transformation matrix were considered as the design variables. Further comparisons of the considered methods were made in terms of computational effort, computational time and convergence of the algorithm. The results reveal that the use of teaching-learning based optimization (TLBO) is outstanding for image processing applications involving 3D registration.
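
    The registration objective described here, the distance between the transformed model cloud and the data cloud with the transformation parameters as design variables, can be sketched as follows for the affine case. This is a generic cost function that any population-based optimizer such as TLBO could minimize, not the authors' code.

        import numpy as np
        from scipy.spatial import cKDTree

        def affine_cost(params, model, data_tree):
            """Mean distance between affinely transformed model points and the data cloud.

            params    : 12 affine design variables (3x3 matrix A, row-major, plus translation t)
            model     : (N, 3) model point cloud
            data_tree : cKDTree built on the (M, 3) data cloud
            """
            A = params[:9].reshape(3, 3)
            t = params[9:]
            transformed = model @ A.T + t
            dists, _ = data_tree.query(transformed)   # nearest-neighbour distances
            return dists.mean()

        # illustrative use with a random cloud and an identity starting transform
        rng = np.random.default_rng(1)
        data = rng.random((500, 3))
        model = data + rng.normal(0, 0.01, data.shape)
        tree = cKDTree(data)
        x0 = np.r_[np.eye(3).ravel(), np.zeros(3)]
        print("cost at identity:", round(float(affine_cost(x0, model, tree)), 4))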

  17. Collaboration on Scene Graph Based 3D Data

    NASA Astrophysics Data System (ADS)

    Ammon, Lorenz; Bieri, Hanspeter

    Professional 3D digital content creation tools, like Alias Maya or discreet 3ds max, offer only limited support for a team of artists to work on a 3D model collaboratively. We present a scene graph repository system that enables fine-grained collaboration on scenes built using standard 3D DCC tools by applying the concept of collaborative versions to a general attributed scene graph. Artists can work on the same scene in parallel without locking each other out. The artists' changes to a scene are regularly merged to ensure that all artists can see each other's progress and collaborate on current data. We introduce the concepts of indirect changes and indirect conflicts to systematically inspect the effects that collaborative changes have on a scene. Inspecting indirect conflicts helps maintain scene consistency by systematically looking for inconsistencies in the right places.

  18. Axisymmetric Implementation for 3D-Based DSMC Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.

    2011-01-01

    The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities, including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions, but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used, such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles, but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D, but instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.
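
    The key step, rotating a molecule back to the plane of symmetry after its 3D move, can be sketched as below, assuming the x-axis is the axis of symmetry and the half-plane z = 0, y >= 0 is the symmetry plane. This is an illustration of Bird's idea, not the DAC implementation.

        import numpy as np

        def rotate_to_symmetry_plane(pos, vel):
            """Rotate a molecule back to the symmetry plane after a 3D move step.

            pos, vel : length-3 arrays (x, y, z). The x-axis is the axis of symmetry;
            the molecule is rotated about it so that z -> 0 and y -> radial distance,
            and the velocity is rotated by the same angle so that its radial and
            azimuthal components are preserved.
            """
            x, y, z = pos
            r = np.hypot(y, z)
            if r == 0.0:
                return np.array([x, 0.0, 0.0]), np.array(vel, float)
            c, s = y / r, z / r          # cosine/sine of the rotation angle
            vy =  c * vel[1] + s * vel[2]
            vz = -s * vel[1] + c * vel[2]
            return np.array([x, r, 0.0]), np.array([vel[0], vy, vz])

        pos, vel = np.array([1.0, 0.3, 0.4]), np.array([100.0, 20.0, -5.0])
        print(rotate_to_symmetry_plane(pos, vel))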

  19. Microseismic network design assessment based on 3D ray tracing

    NASA Astrophysics Data System (ADS)

    Näsholm, Sven Peter; Wuestefeld, Andreas; Lubrano-Lavadera, Paul; Lang, Dominik; Kaschwich, Tina; Oye, Volker

    2016-04-01

    There is increasing demand on the versatility of microseismic monitoring networks. In early projects, being able to locate any triggers was considered a success. These early successes led to a better understanding of how to extract value from microseismic results. Today operators, regulators, and service providers work closely together in order to find the optimum network design to meet various requirements. In the current study we demonstrate an integrated and streamlined network capability assessment approach. It is intended for use during the microseismic network design process prior to installation. The assessments are derived from 3D ray tracing between a grid of event points and the sensors. Three aspects are discussed: 1) magnitude of completeness or detection limit; 2) event location accuracy; and 3) ground-motion hazard. The network capability parameters 1) and 2) are estimated at all hypothetical event locations and are presented in the form of maps for a given seismic sensor coordinate scenario. In addition, the ray tracing traveltimes permit estimation of the point-spread functions (PSFs) at the event grid points. PSFs are useful in assessing the resolution and focusing capability of the network for stacking-based event location and imaging methods. We estimate the performance for a hypothetical network case with 11 sensors. We consider the well-documented region around the San Andreas Fault Observatory at Depth (SAFOD) located north of Parkfield, California. The ray tracing is done through a detailed velocity model which covers a 26.2 by 21.2 km wide area around the SAFOD drill site with a resolution of 200 m for both the P- and S-wave velocities. Systematic network capability assessment for different sensor site scenarios prior to installation facilitates finding a final design which meets the survey objectives.

  20. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences for vertical overestimation between display media are influenced more by the perceived distal object size rather than by the dimensionality of the display.

  1. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    NASA Astrophysics Data System (ADS)

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We successfully solve the problem in DFD displays that the maximum depth difference between the front and rear planes is limited, beyond which the front and rear images can no longer be fused into one 3-D image. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes was increased. When the distance was large enough, the perceived depth was near the front plane at 0-40% of rear luminance and near the rear plane at 60-100% of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The change of the perceived-depth dependence was evaluated when the high-frequency components of the front and rear images were cut off using a Fourier transform, at front-to-rear plane distances of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components were not cut off enough at a distance of 5 cm, the perceived depth separated into values near the front plane and near the rear plane. However, when the images were blurred enough by cutting the high-frequency components, the perceived depth had a linear dependency on the luminance ratio. When the images were not blurred at a distance of 10 cm, the perceived depth separated into values near the front plane at 0-30% of rear luminance, near the rear plane at 80-100%, and near the midpoint at 40-70%. However, when the images were blurred enough, the perceived depth again had a linear dependency on the luminance ratio.
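
    The linear dependency on the luminance ratio reported for sufficiently blurred images corresponds to the usual DFD depth-weighting relation; the following small calculation is a generic sketch of that relation, not taken verbatim from the paper.

        def dfd_perceived_depth(z_front, z_rear, rear_luminance_fraction):
            """Perceived depth of a DFD stimulus under the linear luminance-ratio model.

            rear_luminance_fraction : L_rear / (L_front + L_rear), in [0, 1].
            With 0 the target appears at the front plane, with 1 at the rear plane.
            """
            return z_front + rear_luminance_fraction * (z_rear - z_front)

        # front plane at 0 cm, rear plane 10 cm behind it
        for f in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(f, "->", dfd_perceived_depth(0.0, 10.0, f), "cm")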

  2. Calibration Methods for a 3D Triangulation Based Camera

    NASA Astrophysics Data System (ADS)

    Schulz, Ulrike; Böhnke, Kay

    A camera sensor captures a gray-level image (1536 x 512 pixels) of the light reflected by a reference body, which is illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors. The calibration factors serve to determine the size of an unknown reference body.

  3. Three-dimensional display based on refreshable volume holograms in photochromic diarylethene polymer

    NASA Astrophysics Data System (ADS)

    Cao, Liangcai; Wang, Zheng; Li, Chengmingyue; Li, Cunpu; Zhang, Fushi; Jin, Guofan

    2015-03-01

    Holographic display is a promising technique for three-dimensional (3D) display because it has the ability to reconstruct both the intensity and the wavefront of a 3D object. Real-time holographic display has been demonstrated in photorefractive polymers. Dynamic 3D display is expected to be carried out by recording holograms into a volume holographic polymer because of its high-density storage capacity and good multiplexing properties. In this work an updatable 3D display based on a volume holographic polymer of photochromic diarylethene is proposed. The photochromic diarylethene polymer is a promising rewritable recording material for holograms, with high resolution, fatigue resistance and a fast erasure response. Computer-generated holograms carrying the wavefronts of 3D objects are written to the diarylethene polymer, and the recorded holograms in the polymer can be easily erased when exposed to ultraviolet light. The 3D scenes can be reconstructed over repeated write/erase cycles.

  4. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display to float virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in a circumferential direction without the use of high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives as if the animated figures were present. PMID:27410336

  5. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  6. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, while their copyright protection problems attract more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks, such as rotation, cropping, smoothing and adding noise, the projection of the model's point set is chosen as the carrier of the watermark in the presented algorithm, where the watermark contains the copyright information as logos, text, and so on. The projections of the model's point set onto the x, y and z planes are calculated respectively. Before the watermark embedding process, the original watermark is scrambled by a key. Each projection is singular value decomposed, and the scrambled watermark is embedded into the SVD (singular value decomposition) domain of the x, y and z projections respectively. After that we use the watermarked x, y and z projections to recover the vertices of the model, and the watermarked model is obtained. Only the legal user can remove the watermark from the watermarked models using the private key. Experiments are presented in the paper to show that the proposed algorithm has good performance against various malicious attacks.
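
    The SVD-domain embedding step can be sketched as below; the additive embedding rule and the strength parameter alpha are illustrative simplifications (the actual algorithm also scrambles the watermark with a private key before embedding).

        import numpy as np

        def embed_svd_watermark(projection, watermark, alpha=0.05):
            """Embed a watermark into the singular values of a 2D projection matrix.

            projection : 2D array (e.g., the point set projected onto the x-y plane)
            watermark  : 1D array, length <= min(projection.shape)
            alpha      : embedding strength
            Returns the watermarked projection.
            """
            U, S, Vt = np.linalg.svd(projection, full_matrices=False)
            S_marked = S.copy()
            S_marked[:len(watermark)] += alpha * watermark   # additive embedding
            return U @ np.diag(S_marked) @ Vt

        rng = np.random.default_rng(2)
        proj = rng.random((64, 64))
        wm = rng.integers(0, 2, 32).astype(float)            # binary watermark bits
        marked = embed_svd_watermark(proj, wm)
        print("mean absolute change:", np.abs(marked - proj).mean())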

  7. 3D nanotube-based composites produced by laser irradiation

    SciTech Connect

    Ageeva, S A; Bobrinetskii, I I; Nevolin, Vladimir K; Podgaetskii, Vitalii M; Selishchev, S V; Simunin, M M; Konov, Vitalii I; Savranskii, V V; Ponomareva, O V

    2009-04-30

    3D nanocomposites have been fabricated through self-assembly under near-IR cw laser irradiation, using four types of multiwalled and single-walled carbon nanotubes produced by chemical vapour deposition, disproportionation on Fe clusters and cathode sputtering in an inert gas. The composites were prepared by laser irradiation of aqueous solutions of bovine serum albumin until the solvent was evaporated off and a homogeneous black material was obtained: modified albumin reinforced with nanotubes. The consistency of the composites ranged from paste-like to glass-like. Atomic force microscopy was used to study the surface morphology of the nanomaterials. The nanocomposites had a 3D quasi-periodic structure formed by almost spherical or toroidal particles 200-500 nm in diameter and 30-40 nm in visible height. Their inner, quasi-periodic structure was occasionally seen through surface microfractures. The density and hardness of the nanocomposites exceed those of microcrystalline albumin powder by 20% and by a factor of 3-5, respectively.

  8. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's confidence before the flight mission is carried out and, accordingly, improve flight safety. This system is also useful for validating visual flight procedure designs, and it supports flight procedure design.

  9. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments are carried out by acquiring several 3D fingerprint datasets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
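
    Fringe-projection systems of this kind typically recover a wrapped phase map from phase-shifted sinusoidal patterns before unwrapping and converting phase to height; the following generic N-step sketch illustrates that step. It is not the specific optimum three-fringe-number algorithm used in the paper.

        import numpy as np

        def wrapped_phase(images):
            """Wrapped phase from N equally phase-shifted sinusoidal fringe images.

            images : array of shape (N, H, W); the n-th image has phase shift 2*pi*n/N.
            Returns the wrapped phase in (-pi, pi], which would later be unwrapped and
            converted to height via the system calibration.
            """
            imgs = np.asarray(images, dtype=float)
            n = imgs.shape[0]
            shifts = 2 * np.pi * np.arange(n) / n
            num = np.tensordot(np.sin(shifts), imgs, axes=1)
            den = np.tensordot(np.cos(shifts), imgs, axes=1)
            return -np.arctan2(num, den)

        # synthetic test: 4-step patterns of a known phase map
        H = W = 64
        true_phase = np.linspace(0, 6 * np.pi, W)[None, :].repeat(H, axis=0)
        imgs = [128 + 100 * np.cos(true_phase + 2 * np.pi * k / 4) for k in range(4)]
        phi = wrapped_phase(np.stack(imgs))
        print("max wrap error:", np.abs(np.angle(np.exp(1j * (phi - true_phase)))).max())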

  10. Hough-based recognition of complex 3-D road scenes

    NASA Astrophysics Data System (ADS)

    Foresti, Gian L.; Regazzoni, Carlo S.

    1992-02-01

    In this paper, we address the problem of object recognition in a complex 3-D scene by detecting the 2-D object projection on the image plane for autonomous vehicle driving; in particular, the problems of road detection and obstacle avoidance in natural road scenes are investigated. A new implementation of the Hough Transform (HT), called the Labeled Hough Transform (LHT), to extract and group symbolic features is presented here; the novelty of this method, with respect to the traditional approach, consists in the capability of splitting a maximum in the parameter space into noncontiguous segments while performing voting. Results are presented on a road image containing obstacles, which show the efficiency, good quality, and time performance of the algorithm.
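
    For reference, the standard straight-line Hough transform on which the LHT builds accumulates votes in (rho, theta) space for every edge pixel; the minimal sketch below shows that baseline (the labeled splitting of maxima that constitutes the paper's novelty is not shown).

        import numpy as np

        def hough_lines(edge_mask, n_theta=180, n_rho=200):
            """Accumulate votes in (rho, theta) space for a binary edge image."""
            h, w = edge_mask.shape
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            rho_max = np.hypot(h, w)
            rhos = np.linspace(-rho_max, rho_max, n_rho)
            acc = np.zeros((n_rho, n_theta), dtype=np.int32)
            ys, xs = np.nonzero(edge_mask)
            cos_t, sin_t = np.cos(thetas), np.sin(thetas)
            for x, y in zip(xs, ys):
                rho = x * cos_t + y * sin_t                     # rho for every theta
                idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
                acc[idx, np.arange(n_theta)] += 1               # one vote per theta
            return acc, rhos, thetas

        # toy image with a diagonal line
        img = np.zeros((100, 100), bool)
        for i in range(100):
            img[i, i] = True
        acc, rhos, thetas = hough_lines(img)
        r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
        print("strongest line: rho=%.1f, theta=%.2f rad" % (rhos[r_i], thetas[t_i]))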

  11. Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene

    NASA Technical Reports Server (NTRS)

    Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing

    2011-01-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases both agreed well. Meanwhile, at the tree and forest levels, the results are also good.

  12. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with the virtual objects by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input that can be simultaneously captured from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  13. Coniferous canopy BRF simulation based on 3-D realistic scene.

    PubMed

    Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing

    2011-09-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases both agreed well. Meanwhile, at the tree and forest levels, the results are also good. PMID:22097856

  14. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    PubMed

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  15. Research on urban rapid 3D modeling and application based on CGA rule

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Jiang, Jian-wu; Zhou, Song; Yin, Shou-qiang

    2015-12-01

    Using CityEngine as the 3D modeling platform, we study rapid urban 3D modeling technology based on CGA (Computer Generated Architecture) rules, which solves the problem of rapidly creating urban 3D models of large scenes. We also study building texture processing and 3D model optimization techniques based on CGA rules, using a component modeling method, which solves the problems of texture distortion and model redundancy found in traditional fast-modeling approaches. Finally, a three-dimensional viewing and analysis system is developed based on ArcGIS Engine, realizing 3D model query, distance measurement, specific path flight, 3D marking, scene export, etc.

  16. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  17. INFORMATION DISPLAY: CONSIDERATIONS FOR DESIGNING COMPUTER-BASED DISPLAY SYSTEMS.

    SciTech Connect

    O'HARA,J.M.; PIRUS,D.; BELTRATCCHI,L.

    2004-09-19

    This paper discusses the presentation of information in computer-based control rooms. Issues associated with the typical displays currently in use are discussed. It is concluded that these displays should be augmented with new displays designed to better meet the information needs of plant personnel and to minimize the need for interface management tasks (the activities personnel must perform to access and organize the information they need). Several approaches to information design are discussed, specifically addressing: (1) monitoring, detection, and situation assessment; (2) routine task performance; and (3) teamwork, crew coordination, and collaborative work.

  18. A multimedia Anatomy Browser incorporating a knowledge base and 3D images.

    PubMed Central

    Eno, K.; Sundsten, J. W.; Brinkley, J. F.

    1991-01-01

    We describe a multimedia program for teaching anatomy. The program, called the Anatomy Browser, displays cross-sectional and topographical images, with outlines around structures and regions of interest. The user may point to these structures and retrieve text descriptions, view symbolic relationships between structures, or view spatial relationships by accessing 3-D graphics animations from videodiscs produced specifically for this program. The software also helps students exercise what they have learned by asking them to identify structures by name and location. The program is implemented in a client-server architecture, with the user interface residing on a Macintosh, while images, data, and a growing symbolic knowledge base of anatomy are stored on a fileserver. This architecture allows us to develop practical tutorial modules that are in current use, while at the same time developing the knowledge base that will lead to more intelligent tutorial systems. PMID:1807699

  19. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode and has been optimized to allow high-quality rendering.

  20. An Overview of 3d Topology for Ladm-Based Objects

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. A.; Rahman, A. A.; van Oosterom, P.

    2015-10-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological models are compared on several main aspects (e.g. space or plane partition, used primitives, constructive rules, orientation and explicit or implicit relationships). The most suitable 3D topological model depends on the type of application it is used for. There is no single 3D topology model best suited for all types of applications. Therefore, it is very important to define the requirements of the 3D topology model. The context of this paper is a 3D topology for LADM-based objects.

  1. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is currently widely used. Much effort has been put into studying three-dimensional imaging methods and systems in order to meet the requirements of speed and high accuracy. In this article, we realize a fast and high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras share the same spatial resolution, letting us use the depth maps taken by the TOF camera to determine an initial disparity. With the depth map constraining the stereo pairs during stereo matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up the process of stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
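
    The TOF-guided narrowing of the disparity search range can be sketched as follows, using the usual relation d = f*B/Z between depth and disparity; the parameter names and the margin are illustrative assumptions, not values from the paper.

        import numpy as np

        def disparity_search_bounds(tof_depth, focal_px, baseline_m, margin_px=4):
            """Per-pixel disparity search range derived from a TOF depth map.

            tof_depth  : (H, W) depth map in metres (aligned to the left stereo image)
            focal_px   : stereo camera focal length in pixels
            baseline_m : stereo baseline in metres
            The expected disparity follows d = f * B / Z; the search window is d +/- margin.
            """
            with np.errstate(divide="ignore"):
                d_expected = focal_px * baseline_m / tof_depth
            d_expected = np.nan_to_num(d_expected, posinf=0.0)
            lo = np.clip(d_expected - margin_px, 0, None)
            hi = d_expected + margin_px
            return lo, hi

        depth = np.full((4, 4), 2.0)          # a flat scene 2 m away (illustrative)
        lo, hi = disparity_search_bounds(depth, focal_px=700.0, baseline_m=0.1)
        print(lo[0, 0], hi[0, 0])             # search roughly 31-39 px instead of 0-max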

  2. New 3-D microarray platform based on macroporous polymer monoliths.

    PubMed

    Rober, M; Walter, J; Vlakh, E; Stahl, F; Kasper, C; Tennikova, T

    2009-06-30

    Polymer macroporous monoliths are widely used as efficient sorbents in different, mostly dynamic, interphase processes. In this paper, monolithic materials strongly bound to an inert glass surface are suggested as operative matrices for the development of three-dimensional (3-D) microarrays. For this purpose, several rigid macroporous copolymers differing in reactivity and hydrophobic-hydrophilic properties were synthesized and tested: (1) glycidyl methacrylate-co-ethylene dimethacrylate (poly(GMA-co-EDMA)), (2) glycidyl methacrylate-co-glycerol dimethacrylate (poly(GMA-co-GDMA)), (3) N-hydroxyphthalimide ester of acrylic acid-co-glycidyl methacrylate-co-ethylene dimethacrylate (poly(HPIEAA-co-GMA-co-EDMA)), (4) 2-cyanoethyl methacrylate-co-ethylene dimethacrylate (poly(CEMA-co-EDMA)), and (5) 2-cyanoethyl methacrylate-co-2-hydroxyethyl methacrylate-co-ethylene dimethacrylate (poly(CEMA-co-HEMA-co-EDMA)). The constructed devices were used as platforms for protein microarray construction, and the model mouse IgG-goat anti-mouse IgG affinity pair was used to demonstrate the potential of the developed test systems, as well as to optimize the microanalytical conditions. The offered microarray platforms were applied to detect the bone tissue marker osteopontin directly in cell culture medium. PMID:19463569

  3. Discovery of pyrido[2,3-d]pyrimidine-based inhibitors of HCV NS5A.

    PubMed

    DeGoey, David A; Betebenner, David A; Grampovnik, David J; Liu, Dachun; Pratt, John K; Tufano, Michael D; He, Wenping; Krishnan, Preethi; Pilot-Matias, Tami J; Marsh, Kennan C; Molla, Akhteruzzaman; Kempf, Dale J; Maring, Clarence J

    2013-06-15

    Efforts to improve the genotype 1a potency and pharmacokinetics of earlier naphthyridine-based HCV NS5A inhibitors resulted in the discovery of a novel series of pyrido[2,3-d]pyrimidine compounds, which displayed potent inhibition of HCV genotypes 1a and 1b in the replicon assay. SAR in this system revealed that the introduction of amides bearing an additional 'E' ring provided compounds with improved potency and pharmacokinetics. Introduction of a chiral center on the amide portion resulted in the observation of a stereochemical dependence for replicon potency and provided a site for the attachment of functional groups useful for improving the solubility of the series. Compound 21 was selected for administration in an HCV-infected chimpanzee. Observation of a robust viral load decline provided positive proof of concept for inhibition of HCV replication in vivo for the compound series. PMID:23642966

  4. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  5. Surface-Area-Based Attribute Filtering in 3D

    NASA Astrophysics Data System (ADS)

    Kiwanuka, Fred N.; Ouzounis, Georgios K.; Wilkinson, Michael H. F.

    In this paper we describe a rotation-invariant attribute filter based on estimating the sphericity or roundness of objects by efficiently computing surface area and volume of connected components. The method is based on an efficient algorithm to compute all iso-surfaces of all nodes in a Max-Tree. With similar properties to moment-based attributes like sparseness, non-compactness, and elongation, our sphericity attribute can supplement these in finding blood-vessels in time-of-flight MR angiograms. We compare the method to a discrete surface area method based on adjacency, which has been used for urinary stone detection. Though the latter is faster, it is less accurate, and lacks rotation invariance.
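
    One common way to define such a sphericity attribute compares the component's volume V and surface area A with those of a sphere, Psi = pi^(1/3) * (6V)^(2/3) / A, which equals 1 for a perfect sphere; the sketch below uses that formulation as an assumption (the exact attribute definition in the paper may differ), and a filter would then keep Max-Tree nodes whose sphericity exceeds a threshold.

        import numpy as np

        def sphericity(volume, surface_area):
            """Sphericity attribute: 1.0 for a perfect sphere, smaller for elongated shapes."""
            return np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

        # sanity check with an analytic sphere and a thin box
        r = 5.0
        print(sphericity(4.0 / 3.0 * np.pi * r ** 3, 4.0 * np.pi * r ** 2))  # ~1.0
        print(sphericity(10 * 10 * 1, 2 * (10 * 10 + 10 * 1 + 10 * 1)))      # well below 1.0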

  6. Perceptual quality measurement of 3D images based on binocular vision.

    PubMed

    Zhou, Wujie; Yu, Lu

    2015-07-20

    Three-dimensional (3D) technology has become immensely popular in recent years and widely adopted in various applications. Hence, perceptual quality measurement of symmetrically and asymmetrically distorted 3D images has become an important, fundamental, and challenging issue in 3D imaging research. In this paper, we propose a binocular-vision-based 3D image-quality measurement (IQM) metric. Consideration of the 3D perceptual properties of the primary visual cortex (V1) and the higher visual areas (V2) for 3D-IQM is the major technical contribution to this research. To be more specific, first, the metric simulates the receptive fields of complex cells (V1) using binocular energy response and binocular rivalry response and the higher visual areas (V2) using local binary patterns features. Then, three similarity scores of 3D perceptual properties between the reference and distorted 3D images are measured. Finally, by using support vector regression, three similarity scores are integrated into an overall 3D quality score. Experimental results for two public benchmark databases demonstrate that, in comparison with most current 2D and 3D metrics, the proposed metric achieves significantly higher consistency in alignment with subjective fidelity ratings. PMID:26367842
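
    The final pooling step, mapping the three similarity scores to an overall quality score with support vector regression, might look like the following sketch; scikit-learn's SVR is assumed to be available, and the feature values and "subjective" scores are purely illustrative.

        import numpy as np
        from sklearn.svm import SVR

        # illustrative training data: per-image [energy, rivalry, LBP] similarity scores
        rng = np.random.default_rng(3)
        features = rng.uniform(0.0, 1.0, size=(200, 3))
        # made-up "subjective" scores correlated with the features, plus noise
        mos = features @ np.array([0.5, 0.3, 0.2]) * 100 + rng.normal(0, 3, 200)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
        model.fit(features, mos)

        test_scores = np.array([[0.9, 0.8, 0.85]])     # similarity scores of a test pair
        print("predicted 3D quality score:", model.predict(test_scores)[0])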

  7. Image-based indoor localization system based on 3D SfM model

    NASA Astrophysics Data System (ADS)

    Lu, Guoyu; Kambhamettu, Chandra

    2013-12-01

    Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments because of the easy availability of the necessary equipment. After capturing an image and sending it to an image database, the best matching image is returned with the navigation information. By allowing further camera pose estimation, an image-based localization system that uses a Structure-from-Motion reconstruction model can achieve higher accuracy than methods that search through a 2D image database. However, this emerging technique has so far been applied only to outdoor environments. In this paper, we introduce the 3D SfM model based image-based localization system to the indoor localization task. We capture images of the indoor environment and reconstruct the 3D model. For the localization task, we simply use the images captured by a mobile device and match them to the 3D reconstructed model to localize the image. In this process, we use visual words and approximate nearest neighbor methods to accelerate the process of finding the query features' correspondences. Within the visual words, we conduct a linear search to detect the correspondences. From the experiments, we find that the image-based localization method based on the 3D SfM model gives good localization results in terms of both accuracy and speed.

  8. Single view-based 3D face reconstruction robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie

    2012-12-01

    State-of-the-art 3D morphable model (3DMM) is used widely for 3D face reconstruction based on a single image. However, this method has a high computational cost, and hence, a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape, and therefore, it incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works, in the following three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and our proposed method delivered a noticeable improvement in the 3D face reconstruction performance compared with previous methods.

  9. Geofencing-Based Localization for 3d Data Acquisition Navigation

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Kamio, T.; Yasojima, H.; Kobayashi, T.

    2016-06-01

    Users require navigation for many location-based applications using moving sensors, such as autonomous robot control, mapping route navigation and mobile infrastructure inspection. In indoor environments, indoor positioning systems using GNSSs can provide seamless indoor-outdoor positioning and navigation services. However, instabilities in sensor position data acquisition remain, because the indoor environment is more complex than the outdoor environment. On the other hand, simultaneous localization and mapping processing is better than indoor positioning for measurement accuracy and sensor cost. However, it is not easy to estimate position data from a single viewpoint directly. Based on these technical issues, we focus on geofencing techniques to improve position data acquisition. In this research, we propose a methodology to estimate more stable position or location data using unstable position data based on geofencing in indoor environments. We verify our methodology through experiments in indoor environments.
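
    At its core, geofencing tests whether an estimated position falls inside a predefined region; a minimal ray-casting point-in-polygon check such as the sketch below could be used to reject position fixes outside the expected indoor zone (the coordinates are illustrative, not from the paper).

        def point_in_polygon(x, y, polygon):
            """Ray-casting test: True if (x, y) lies inside the polygon (list of vertices)."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                crosses = (y1 > y) != (y2 > y)
                if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    inside = not inside
            return inside

        # a rectangular geofence around one room (local metric coordinates, illustrative)
        room = [(0.0, 0.0), (8.0, 0.0), (8.0, 5.0), (0.0, 5.0)]
        for fix in [(4.0, 2.5), (9.5, 2.5)]:
            print(fix, "accepted" if point_in_polygon(*fix, room) else "rejected")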

  10. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the large area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was transferred into a walk-through model or into movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  11. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  12. Glasses-free large size high-resolution three-dimensional display based on the projector array

    NASA Astrophysics Data System (ADS)

    Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong

    2014-11-01

    Normally, a huge amount of spatial information is required to increase the number of views and to provide smooth motion parallax for a natural three-dimensional (3D) display similar to real life. To realize natural 3D video display without eyewear, however, the minimum amount of 3D information needed by the eyes should be used, in order to reduce the requirements on display devices and processing time. For a 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil size of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, rear projection and front projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuse screens with sizes above 1.8 m × 1.2 m. The displayed clear depth is larger than 1.5 m. The flexibility of digitized recording and reconstruction based on the 3D diffuse screen relieves the limitations of conventional 3D display technologies and can realize fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.
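
    The design rule stated above (the virtual viewing slit must stay narrower than the eye pupil at the largest viewing distance) lends itself to a back-of-the-envelope check. The numbers below are illustrative assumptions, not the parameters of the prototypes described in the record.

```python
# Back-of-the-envelope check of the stated design rule: the virtual viewing slit
# at the largest viewing distance must be narrower than the eye pupil.
# All numbers are illustrative assumptions, not the prototypes' parameters.
import math

viewing_zone_width = 1.0      # m, lateral extent of the viewing zone at max distance
pupil_diameter     = 0.004    # m, ~4 mm human pupil
num_views          = 300      # hypothetical number of projected views

slit_width = viewing_zone_width / num_views
print(f"viewing slit width: {slit_width * 1e3:.2f} mm "
      f"({'OK' if slit_width < pupil_diameter else 'too wide'})")

# Equivalently, the minimum number of views needed over that zone:
print("minimum views:", math.ceil(viewing_zone_width / pupil_diameter))
```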

  13. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications. PMID:17706656

  14. The Martian Water Cycle Based on 3-D Modeling

    NASA Technical Reports Server (NTRS)

    Houben, H.; Haberle, R. M.; Joshi, M. M.

    1999-01-01

    Understanding the distribution of Martian water is a major goal of the Mars Surveyor program. However, until the bulk of the data from the nominal missions of TES, PMIRR, GRS, MVACS, and the DS2 probes are available, we are bound to be in a state where much of our knowledge of the seasonal behavior of water is based on theoretical modeling. We therefore summarize the results of this modeling at the present time. The most complete calculations come from a somewhat simplified treatment of the Martian climate system which is capable of simulating many decades of weather. More elaborate meteorological models are now being applied to the study of the problem. The results show a high degree of consistency with observations of aspects of the Martian water cycle made by Viking MAWD, a large number of ground-based measurements of atmospheric column water vapor, studies of Martian frosts, and the widespread occurrence of water ice clouds. Additional information is contained in the original extended abstract.

  15. 3D gravity inversion based on SL0 norm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Xu, Xuechun; Zheng, Changqing

    2015-04-01

    The inversion of three-dimensional geophysical properties (density, magnetic susceptibility, electrical resistivity) occupies a very important position in geophysical interpretation: combined with the corresponding geological data, it can provide good solutions to the related geological problems, especially in separating anomalous bodies such as ore bodies. Three kinds of mainstream inversion methods are currently in use: (1) the minimum model method, (2) the flattest model method, and (3) the smoothest model method. All three obtain an optimal solution by solving a mixed system of equations for the corresponding inverse problem; their main difference lies in the form of the weighting function, and in essence each seeks the best solution based on the regularization principle until convergence is reached. Another family of methods is based on minimum volume, such as compression inversion and focusing inversion, which can recover clearer and sharper boundaries. The inversion method chosen here is based on the minimum-volume theory, using the smoothed L0 (SL0) norm. A suitable choice of weighting function can effectively reduce the number of inversion iterations and accelerate the inversion, which meets the requirements of current large-scale airborne gravity surveys: the inversion is accelerated without reducing its quality. The inversion yields the sharp boundary, spatial location, and density attributes of the anomalous body. Because the method is demanding on computer performance and on the quality of the geophysical data, random noise should be reduced as far as possible. Many model tests prove that the chosen weighting function gives very good inversion results. In the inversion

  16. Literary and Historical 3D Digital Game-Based Learning: Design Guidelines

    ERIC Educational Resources Information Center

    Neville, David O.; Shelton, Brett E.

    2010-01-01

    As 3D digital game-based learning (3D-DGBL) for the teaching of literature and history gradually gains acceptance, important questions will need to be asked regarding its method of design, development, and deployment. This article offers a synthesis of contemporary pedagogical, instructional design, new media, and literary-historical theories to…

  17. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objective of this study was to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet; to achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years for understanding target objects and the related physical phenomena. Geoscience data have a complex data model that combines large extents with rich small-scale visual details, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show results in which geoscience data can be viewed as 3D anaglyphs in any web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as red-cyan anaglyph 3D stereo images generated from pairs of air photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved with suitable red-cyan 3D glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom the anaglyph image in and out in the Web browser. Anaglyph 3D stereo imaging is an important and easy way to understand the underground geologic system and active tectonic geomorphology. Integrating strata with fine three-dimensional topography and geologic map data can help to characterize mineral potential areas and anomalous active tectonic characteristics. To conclude, anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic
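
    The red-cyan composition step itself is simple enough to show in a few lines: take the red channel from the left view and the green/blue channels from the right view. The sketch below uses NumPy and Pillow on a hypothetical rectified stereo pair; it is only the core color-mixing step, not the WebGL viewer described in the record, and the file names are placeholders.

```python
# Minimal red-cyan anaglyph composition from a rectified stereo pair.
# File names are placeholders; this is not the WebGL-based viewer itself.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left_view.png").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right_view.png").convert("RGB"), dtype=np.uint8)

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]      # red channel from the left image
anaglyph[..., 1] = right[..., 1]     # green channel from the right image
anaglyph[..., 2] = right[..., 2]     # blue channel from the right image

Image.fromarray(anaglyph).save("anaglyph_red_cyan.png")
```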

  18. 3D, wideband vibro-impacting-based piezoelectric energy harvester

    SciTech Connect

    Yu, Qiangmo; Yang, Jin; Yue, Xihai; Yang, Aichao; Zhao, Jiangxin; Zhao, Nian; Wen, Yumei; Li, Ping

    2015-04-15

    An impacting-based piezoelectric energy harvester was developed to address the limitations of the existing approaches in single-dimensional operation as well as a narrow working bandwidth. In the harvester, a spiral cylindrical spring rather than the conventional thin cantilever beam was utilized to extract the external vibration with arbitrary directions, which has the capability to impact the surrounding piezoelectric beams to generate electricity. And the introduced vibro-impacting between the spiral cylindrical spring and multi-piezoelectric-beams resulted in not only a three-dimensional response to external vibration, but also a bandwidth-broadening behavior. The experimental results showed that each piezoelectric beam exhibited a maximum bandwidth of 8 Hz and power of 41 μW with acceleration of 1 g (with g = 9.8 m s⁻²) along the z-axis, and corresponding average values of 5 Hz and 45 μW with acceleration of 0.6 g in the x-y plane.

  19. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  20. Indoor 3D Route Modeling Based On Estate Spatial Data

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Wen, Y.; Jiang, J.; Huang, W.

    2014-04-01

    An indoor three-dimensional route model is essential for intelligent indoor navigation and emergency evacuation. This paper is motivated by the need to construct indoor route models as automatically as possible. By comparing existing building data sources, the paper first explains why estate spatial management data is chosen as the data source. Then, an applicable method for constructing a three-dimensional route model of a building is introduced by establishing the mapping relationship between geographic entities and their topological expression. The data model is a weighted graph consisting of "nodes" and "paths" that expresses the spatial relationships and topological structure of the building components. The whole process of modelling the internal space of a building is addressed in two key steps: (1) each single-floor route model is constructed, including path extraction for corridors using a Delaunay triangulation algorithm with constrained edges and fusion of room nodes into the path; (2) the single-floor route models are connected via stairs and elevators, and the multi-floor route model is eventually generated; a toy version of the resulting weighted graph is sketched below. In order to validate the method, a shopping mall called "Longjiang New City Plaza" in Nanjing is chosen as a case study, and the whole building space is constructed according to the modelling method above. By integrating an existing path-finding algorithm, the usability of this modelling method is verified, which shows that the indoor three-dimensional route modelling method based on estate spatial data presented in this paper can support indoor route planning and evacuation route design very well.
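
    The node/path weighted-graph idea maps naturally onto a standard graph library. The sketch below builds a tiny two-floor graph (rooms, corridor nodes, a staircase, an exit) and runs a shortest-path query with NetworkX; all node names and edge weights are invented for illustration and are not taken from the paper.

```python
# Toy version of the weighted "node"/"path" route model: rooms and corridor
# nodes on two floors connected by a staircase, queried with Dijkstra's
# algorithm via NetworkX. Node names and edge weights are illustrative only.
import networkx as nx

G = nx.Graph()
# (node_a, node_b, walking distance in metres)
edges = [
    ("room_101", "corridor_1A", 4.0),
    ("corridor_1A", "corridor_1B", 12.0),
    ("corridor_1B", "stairs_1", 3.0),
    ("stairs_1", "stairs_2", 6.0),          # vertical connection between floors
    ("stairs_2", "corridor_2A", 3.0),
    ("corridor_2A", "room_205", 5.0),
    ("corridor_1B", "exit_ground", 8.0),
]
G.add_weighted_edges_from(edges)

route = nx.shortest_path(G, "room_205", "exit_ground", weight="weight")
length = nx.shortest_path_length(G, "room_205", "exit_ground", weight="weight")
print(" -> ".join(route), f"({length:.1f} m)")
```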

  1. Electrochemical signal amplification for immunosensor based on 3D interdigitated array electrodes.

    PubMed

    Han, Donghoon; Kim, Yang-Rae; Kang, Chung Mu; Chung, Taek Dong

    2014-06-17

    We devised an electrochemical redox cycling scheme based on three-dimensional interdigitated array (3D IDA) electrodes for signal amplification to enhance the sensitivity of chip-based immunosensors. The 3D IDA consists of two closely spaced parallel indium tin oxide (ITO) electrodes that are positioned not only on the bottom but also on the ceiling, facing each other along a microfluidic channel. We investigated the signal intensities of various geometric configurations, Open-2D IDA, Closed-2D IDA, and 3D IDA, through electrochemical experiments and finite-element simulations. Among these systems, the 3D IDA exhibited the greatest signal amplification, resulting from efficient redox cycling of electroactive species confined in the microchannel, so that the faradaic current was augmented by a factor of ∼100. We exploited the enhanced sensitivity of the 3D IDA to build a chronocoulometric immunosensing platform based on the sandwich enzyme-linked immunosorbent assay (ELISA) protocol. Mouse IgG on the 3D IDA showed much lower detection limits than on the Closed-2D IDA: the detection limit for mouse IgG measured using the 3D IDA was ∼10 fg/mL, while it was ∼100 fg/mL for the Closed-2D IDA. Moreover, the proposed immunosensor system with the 3D IDA worked successfully for clinical analysis, as shown by the sensitive detection of cardiac troponin I in human serum down to 100 fg/mL. PMID:24842332

  2. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis

    PubMed Central

    Huang, Yan Xin; Bao, Yong Li; Guo, Shu Yan; Wang, Yan; Zhou, Chun Guang; Li, Yu Xin

    2008-01-01

    Background The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope-analysis-based methods can predict not only linear but also conformational epitopes, and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. Results In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search in predicting epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used. The predictive performance of Pep-3D-Search was demonstrated to be superior to that of other
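
    For readers unfamiliar with the main metric reported above, the Matthews correlation coefficient is computed from confusion-matrix counts as shown in the small sketch below; the counts in the example are made up, not taken from the paper.

```python
# Matthews correlation coefficient (MCC) from confusion-matrix counts.
# The example counts are invented for illustration.
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(mcc(tp=12, tn=150, fp=9, fn=20), 4))
```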

  3. Characterization of linearity and uniformity of fiber-based endoscopes for 3D combustion measurements.

    PubMed

    Kang, MinWook; Lei, Qingchun; Ma, Lin

    2014-09-10

    This work reports the application of fiber-based endoscopes (FBEs) for instantaneous three-dimensional (3D) flow and combustion measurements, with an emphasis on characterizing the linearity and uniformity of the FBEs and exploring their potential for obtaining quantitative measurements. Controlled experiments were performed using a uniform illuminator to characterize the linearity and uniformity of the FBEs. Based on such characterization, 3D instantaneous measurements of flames were demonstrated by a combined use of FBEs and tomography: 3D tomographic reconstructions were made from multiple projections of the target flames collected from various orientations by the FBEs. The results illustrate the potential of FBEs to obtain quantitative 3D flow and combustion measurements and also the advantages FBEs offer, including overcoming optical access restrictions and reducing equipment cost. PMID:25321676

  4. Displaying Geographically-Based Domestic Statistics

    NASA Technical Reports Server (NTRS)

    Quann, J.; Dalton, J.; Banks, M.; Helfer, D.; Szczur, M.; Winkert, G.; Billingsley, J.; Borgstede, R.; Chen, J.; Chen, L.; Fuh, J.; Cyprych, E.

    1982-01-01

    Decision Information Display System (DIDS) is rapid-response information-retrieval and color-graphics display system. DIDS transforms tables of geographically-based domestic statistics (such as population or unemployment by county, energy usage by county, or air-quality figures) into high-resolution, color-coded maps on television display screen.

  5. Comparison Between Two Generic 3d Building Reconstruction Approaches - Point Cloud Based VS. Image Processing Based

    NASA Astrophysics Data System (ADS)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

    This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in the 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more 3D points analyzed, the point cloud based approach is an order of magnitude more accurate for the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image processing based approach. For real world data the difference in accuracy between both approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent and more differentiated semantic annotation through the exploitation of texture information.

  6. Midsagittal plane extraction from brain images based on 3D SIFT

    NASA Astrophysics Data System (ADS)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-01

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
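
    One ingredient of the pipeline above is easy to isolate, shown here under the assumption that the fissure plane is fitted to the midpoints of matched symmetric keypoint pairs: those midpoints should all lie on the midsagittal plane, so a least-squares plane fit recovers it. The sketch uses a plain SVD fit on synthetic points; the paper's iterative least-median-of-squares regression would add robust outlier rejection around this step.

```python
# Sketch of one plausible ingredient: fit a plane to the midpoints of matched
# symmetric keypoint pairs with a plain SVD least-squares fit (no robust
# least-median-of-squares loop as used in the paper).
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through an Nx3 point set: returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                  # direction of least variance
    return normal / np.linalg.norm(normal), centroid

# Synthetic matched pairs, mirrored about the plane x = 0 with a little noise.
rng = np.random.default_rng(0)
left = rng.normal(size=(200, 3)) + np.array([15.0, 0.0, 0.0])
right = left * np.array([-1.0, 1.0, 1.0]) + 0.05 * rng.normal(size=(200, 3))
midpoints = 0.5 * (left + right)                     # should lie on the MSP

normal, point_on_plane = fit_plane(midpoints)
print("estimated plane normal:", np.round(normal, 3))   # ~ (±1, 0, 0)
```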

  7. 3D real holographic image movies are projected into a volumetric display using dynamic digital micromirror device (DMD) holograms.

    NASA Astrophysics Data System (ADS)

    Huebschman, Michael L.; Hunt, Jeremy; Garner, Harold R.

    2006-04-01

    The Texas Instruments Digital Micromirror Device (DMD) is being used as the recording medium for display of pre-calculated digital holograms. The high intensity throughput of the reflected laser light from DMD holograms enables volumetric display of projected real images as well as virtual images. A single DMD and single laser projector system has been designed to reconstruct projected images in a 6''x 6''x 4.5'' volumetric display. The volumetric display is composed of twenty-four, 6''-square, PSCT liquid crystal plates which are each cycled on and off to reduce unnecessary scatter in the volume. The DMD is an XGA format array, 1024x768, with 13.6 micron pitch mirrors. This holographic projection system has been used in the assessment of hologram image resolution, maximum image size, optical focusing of the real image, image look-around, and physiological depth cues. Dynamic movement images are projected by transferring the appropriately sequenced holograms to the DMD at movie frame rates.

  8. The National 3-D Geospatial Information Web-Based Service of Korea

    NASA Astrophysics Data System (ADS)

    Lee, D. T.; Kim, C. W.; Kang, I. G.

    2013-09-01

    3D geospatial information systems should provide efficient spatial analysis tools, be able to use all capabilities of the third dimension, and offer visualization. Currently, many human activities are moving toward the third dimension, such as land use, urban and landscape planning, cadastre, environmental monitoring, transportation monitoring, the real estate market, and military applications. To reflect this trend, the Korean government has started to construct 3D geospatial data and a service platform. Since geospatial information was introduced in Korea, the construction of geospatial information (3D geospatial information, digital maps, aerial photographs, ortho photographs, etc.) has been led by the central government. The purpose of this study is to introduce the Korean government-led 3D geospatial information web-based service for people interested in this industry; we introduce not only the present state of the constructed 3D geospatial data but also the methodologies and applications of 3D geospatial information. About 15% (about 3,278.74 km²) of the total urban area's 3D geospatial data has been constructed by the National Geographic Information Institute (NGII) of Korea from 2005 to 2012. In particular, for six metropolitan cities and Dokdo (an island belonging to Korea), level of detail (LOD) 4 data, i.e., photo-realistic textured 3D models including corresponding ortho photographs, were constructed in 2012. In this paper, we present the composition and infrastructure of the web-based 3D map service system, and a comparison of V-world with the Google Earth service is presented. We also present Open API based service cases and discuss the protection of location privacy when constructing 3D indoor building models: in order to prevent invasion of privacy, we processed image blurring, elimination and camouflage. The importance of public-private cooperation and an advanced geospatial information policy is emphasized in Korea. Thus, the progress of

  9. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

    With the continuous development of 3D vision technology, digital watermark technology, as the best choice for copyright protection, has gradually fused with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform and loads it into the Vega real-time visual simulation system. Firstly, the 3D model is put through an affine transform, and the distances from the center of gravity to the vertices of the 3D object are taken in order to generate a one-dimensional discrete signal; then this signal is wavelet-transformed, its frequency coefficients are changed to embed the watermark, and finally the watermarked 3D motion model is generated. In a fixed affine space, the scheme achieves robustness to translation, rotation and scaling transforms. The results show that this approach performs well not only in robustness, but also in watermark invisibility.
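
    A greatly simplified, hedged sketch of the embedding idea follows: form the 1-D signal of vertex-to-centroid distances, take a discrete wavelet transform with PyWavelets, nudge one detail band according to the watermark bits, and reconstruct. Bit mapping, key management, synchronization, and the Vega integration are omitted, and the model vertices and embedding strength are invented for the example.

```python
# Greatly simplified sketch of the embedding idea only (not the paper's scheme):
# build the 1-D signal of vertex-to-centroid distances, perturb one DWT detail
# band with the watermark bits, reconstruct, and push the distances back onto
# the vertices by radial scaling.
import numpy as np
import pywt

rng = np.random.default_rng(1)
vertices = rng.normal(size=(1024, 3))                 # stand-in 3D model vertices
centroid = vertices.mean(axis=0)
signal = np.linalg.norm(vertices - centroid, axis=1)  # 1-D discrete signal

watermark_bits = rng.integers(0, 2, size=64)          # hypothetical payload

coeffs = pywt.wavedec(signal, wavelet="db4", level=3)
detail = coeffs[1]                                    # one mid-frequency band
strength = 0.01 * np.abs(detail).mean()               # small, invisibility-oriented step
detail[: len(watermark_bits)] += strength * (2 * watermark_bits - 1)
coeffs[1] = detail

watermarked_signal = pywt.waverec(coeffs, wavelet="db4")[: len(signal)]
scale = watermarked_signal / signal
watermarked_vertices = centroid + (vertices - centroid) * scale[:, None]
```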

  10. 3D graphene-based hybrid materials: synthesis and applications in energy storage and conversion.

    PubMed

    Shi, Qiurong; Cha, Younghwan; Song, Yang; Lee, Jung-In; Zhu, Chengzhou; Li, Xiaoyu; Song, Min-Kyu; Du, Dan; Lin, Yuehe

    2016-08-25

    Porous 3D graphene-based hybrid materials (3D GBHMs) are currently attractive nanomaterials employed in the field of energy. Heteroatom-doped 3D graphene and metal, metal oxide, and polymer-decorated 3D graphene with modified electronic and atomic structures provide promising performance as electrode materials in energy storage and conversion. Numerous synthesis methods such as self-assembly, templating, electrochemical deposition, and supercritical CO2, pave the way to mass production of 3D GBHMs in the commercialization of energy devices. This review summarizes recent advances in the fabrication of 3D GBHMs with well-defined architectures such as finely controlled pore sizes, heteroatom doping types and levels. Moreover, current progress toward applications in fuel cells, supercapacitors and batteries employing 3D GBHMs is also highlighted, along with the detailed mechanisms of the enhanced electrochemical performance. Furthermore, current critical issues, challenges and future prospects with respect to applications of 3D GBHMs in practical devices are discussed at the end of this review. PMID:27531643

  11. Sensor based 3D conformal cueing for safe and reliable HC operation specifically for landing in DVE

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Kress, Martin; Klasen, Stephanus

    2013-05-01

    The paper describes the approach of a sensor based landing aid for helicopters in degraded visual conditions. The system concept presented employs a long range high resolution ladar sensor allowing for identifying obstacles in the flight and in the approach path as well as measuring landing site conditions like slope, roughness and precise position relative to the helicopter during long final approach. All these measurements are visualized to the pilot. Cueing is done by 3D conformal symbology displayed in a head-tracked HMD enhanced by 2D symbols for data which is perceived easier by 2D symbols than by 3D cueing. All 3D conformal symbology is placed on the measured landing site surface which is further visualized by a grid structure for displaying landing site slope, roughness and small obstacles. Due to the limited resolution of the employed HMD a specific scheme of blending in the information during the approach is employed. The interplay between in flight and in approach obstacle warning and CFIT warning symbology with this landing aid symbology is also investigated and exemplarily evaluated for the NH90 helicopter which has already today implemented a long range high resolution ladar sensor based obstacle warning and CFIT symbology. The paper further describes the results of simulator and flight tests performed with this system employing a ladar sensor and a head-tracked head mounted display system. In the simulator trials a full model of the ladar sensor producing 3D measurement points was used working with the same algorithms used in flight tests.

  12. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    NASA Astrophysics Data System (ADS)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they prove to be useful instruments for enhancing one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems lay in the rich-media implementation techniques: each and every system had to be created individually, since reapplying the media, be it only a part or the whole content, was not directly possible and everything had to be applied mechanically, i.e. by hand. This makes E-Learning systems exceedingly expensive to generate, in terms of both time and money. Media-3D or M3D is a new platform-independent programming language, developed at the Fraunhofer Institute for Media Communication, to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language which is capable of distinguishing the 3D models from the 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios in which M3D is applied to create virtual-reality E-Learning content for the training of technical personnel.

  13. Characterization of the 3D resolution of topometric sensors based on fringe and speckle pattern projection by a 3D transfer function

    NASA Astrophysics Data System (ADS)

    Berssenbrügge, Philipp; Dekiff, Markus; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2012-03-01

    The increasing importance of optical 3D measurement techniques and the growing number of available methods and systems require a fast and simple method to characterize the measurement accuracy. However, the conventional approach of comparing measured coordinates to known reference coordinates of a test target faces two major challenges: the precise fabrication of the target and, in the case of pattern-projecting systems, finding the position of the reference points in the obtained point cloud. The modulation transfer function (MTF), on the other hand, is an established instrument to describe the resolution characteristics of 2D imaging systems. Here, the MTF concept is applied to two different topometric systems based on fringe and speckle pattern projection to obtain a 3D transfer function. We demonstrate that in the present case fringe projection provides typically 3.5 times the 3D resolution achieved with speckle pattern projection. By combining measurements of the 3D transfer function with 2D MTF measurements, the dependency between 2D and 3D resolution is characterized. We show that the method allows for a simple comparison of the 3D resolution of two 3D sensors using a low cost test target, which is easy to manufacture.

  14. Binding affinity prediction of novel estrogen receptor ligands using receptor-based 3-D QSAR methods.

    PubMed

    Sippl, Wolfgang

    2002-12-01

    We have recently reported the development of a 3-D QSAR model for estrogen receptor ligands showing a significant correlation between calculated molecular interaction fields and experimentally measured binding affinity. The ligand alignment obtained from docking simulations was taken as the basis for a comparative field analysis applying the GRID/GOLPE program. Using the interaction field derived with a water probe and applying the smart region definition (SRD) variable selection procedure, a significant and robust model was obtained (q²(LOO) = 0.921, SDEP = 0.345). To further analyze the robustness and the predictivity of the established model, several recently developed estrogen receptor ligands were selected as an external test set. An excellent agreement between predicted and experimental binding data was obtained, indicated by an external SDEP of 0.531. Two other traditionally used prediction techniques were applied in order to check the performance of the receptor-based 3-D QSAR procedure. The interaction energies calculated on the basis of receptor-ligand complexes were correlated with experimentally observed affinities. Also, ligand-based 3-D QSAR models were generated using the program FlexS. The interaction-energy-based model, as well as the ligand-based 3-D QSAR models, yielded lower predictivity. The comparison with the interaction-energy-based model and with the ligand-based 3-D QSAR models, respectively, indicates that the combination of receptor-based and 3-D QSAR methods is able to improve the quality of prediction. PMID:12413831
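
    The reported statistics q²(LOO) and SDEP can be reproduced for any regression model from leave-one-out predictions. The sketch below shows the computation with scikit-learn on synthetic descriptors, using a plain linear model merely as a stand-in for the GRID/GOLPE interaction-field PLS model used in the paper.

```python
# Definition of the reported statistics: leave-one-out cross-validated q2 and
# SDEP. Synthetic data and an ordinary linear model stand in for the
# GRID/GOLPE interaction fields and the PLS model of the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 10))                               # 40 ligands, 10 descriptors
y = X @ rng.normal(size=10) + 0.3 * rng.normal(size=40)     # affinity-like values

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())

press = np.sum((y - y_loo) ** 2)             # predictive residual sum of squares
ss_total = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / ss_total
sdep = np.sqrt(press / len(y))
print(f"q2(LOO) = {q2:.3f}, SDEP = {sdep:.3f}")
```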

  15. Web-Based 3D and Haptic Interactive Environments for e-Learning, Simulation, and Training

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix G.; Sopin, Ivan

    Knowledge creation occurs in the process of social interaction. As our service-based society is evolving into a knowledge-based society, there is an acute need for more effective collaboration and knowledge-sharing systems to be used by geographically scattered people. We present the use of 3D components and standards, such as Web3D, in combination with the haptic paradigm, for e-Learning and simulation.

  16. Large bulk-yard 3D measurement based on videogrammetry and projected contour aiding

    NASA Astrophysics Data System (ADS)

    Ou, Jianliang; Zhang, Xiaohu; Yuan, Yun; Zhu, Xianwei

    2011-07-01

    Fast and accurate 3D measurement of a large stack-yard is an important job in bulk loading-and-unloading and logistics management. A stack-yard has special characteristics: a complex and irregular shape, uniform surface texture and low material reflectivity, so its 3D measurement is quite difficult to realize with traditional non-contact methods such as LiDAR (LIght Detecting And Ranging) and photogrammetry. The light-section method works well for measuring small bulk flows but is not yet suitable for large-scale bulk-yards. In this paper, an improved method based on stereo cameras and a laser-line projector is proposed. The theoretical model consists of three key steps: matching corresponding points on the projected contour edges in the stereo imagery based on gradient and epipolar-line constraints; calculating the 3D point set of the projected contour edges with least-squares adjustment and forward intersection; and reconstructing the projected 3D contour by RANSAC (RANdom SAmpling Consensus) and contour spatial features from the 3D point set of a single contour edge. In this way, the stack-yard surface can easily be scanned by the laser-line projector, and the 3D shape of a given region can be reconstructed automatically by the stereo cameras from one observing position. Experiments proved that the proposed method is effective for fast, automatic, reliable and accurate bulk-yard 3D measurement.

  17. Recent improvements in SPE3D: a VR-based surgery planning environment

    NASA Astrophysics Data System (ADS)

    Witkowski, Marcin; Sitnik, Robert; Verdonschot, Nico

    2014-02-01

    SPE3D is a surgery planning environment developed within TLEMsafe project [1] (funded by the European Commission FP7). It enables the operator to plan a surgical procedure on the customized musculoskeletal (MS) model of the patient's lower limbs, send the modified model to the biomechanical analysis module, and export the scenario's parameters to the surgical navigation system. The personalized patient-specific three-dimensional (3-D) MS model is registered with 3-D MRI dataset of lower limbs and the two modalities may be visualized simultaneously. Apart from main planes, any arbitrary MRI cross-section can be rendered on the 3-D MS model in real time. The interface provides tools for: bone cutting, manipulating and removal, repositioning muscle insertion points, modifying muscle force, removing muscles and placing implants stored in the implant library. SPE3D supports stereoscopic viewing as well as natural inspection/manipulation with use of haptic devices. Alternatively, it may be controlled with use of a standard computer keyboard, mouse and 2D display or a touch screen (e.g. in an operating room). The interface may be utilized in two main fields. Experienced surgeons may use it to simulate their operative plans and prepare input data for a surgical navigation system while student or novice surgeons can use it for training.

  18. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.
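
    A toy version of the single-parameter idea can be written in a few lines: decompose the volume with a 3-D wavelet transform, keep only the detail band whose scale matches the chosen feature size, and reconstruct. This is an illustration of the principle using PyWavelets, not the authors' filter, and the volume and band choice are stand-ins.

```python
# Toy illustration of the single-parameter idea (not the authors' filter):
# keep only the 3-D wavelet detail band whose scale matches the feature size.
import numpy as np
import pywt

def scale_select_filter(volume: np.ndarray, keep_band: int,
                        wavelet: str = "haar", levels: int = 4) -> np.ndarray:
    """Keep one detail band of a 3-D DWT (keep_band=1 is the coarsest detail)."""
    coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=levels)
    out = [np.zeros_like(coeffs[0])]                 # discard the coarse approximation
    for i, band in enumerate(coeffs[1:], start=1):   # i = 1 is the coarsest detail band
        out.append({k: (v if i == keep_band else np.zeros_like(v))
                    for k, v in band.items()})
    recon = pywt.waverecn(out, wavelet=wavelet)
    return recon[tuple(slice(s) for s in volume.shape)]

noisy_volume = np.random.rand(64, 64, 64)          # stand-in for a noisy 3D data set
features = scale_select_filter(noisy_volume, keep_band=2)   # ~8-voxel-scale structures
```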

  19. Fish body surface data measurement based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Qian, Chen; Yang, Wenkai

    2016-01-01

    To film moving fish in a glass tank, one must account for light being bent at the air-glass and glass-water interfaces. Based on binocular stereo vision and the refraction principle, we establish a mathematical model of 3D image correlation to reconstruct the 3D coordinates of samples in the water. By marking speckle on the fish surface, a series of real-time speckle images of the swimming fish is obtained by two high-speed cameras, and the instantaneous 3D shape, strain, displacement, etc. of the fish are reconstructed.

  20. Toner display based on particle control technologies

    NASA Astrophysics Data System (ADS)

    Kitamura, Takashi

    2011-03-01

    Toner Display is based on the electrical movement of charged particles. Black toner and white particles, charged with opposite electric polarities, are enclosed between two electrodes. The particle movement is controlled by an external electric field applied between two transparent electrodes. The toner is collected to one electrode by an electrostatic force across the insulating layer to display a black image. The toner can be moved back to the counter electrode by applying a reverse electric field, and a solid white image is displayed. We have studied the independent movement of three kinds of colored particles to display color images in the Toner Display. Two positively charged color particles with different charge-to-mass ratios and negatively charged white particles were enclosed in the toner display cell. Yellow, cyan and white images were displayed by the application of voltage.

  1. A 3D-Video-Based Computerized Analysis of Social and Sexual Interactions in Rats

    PubMed Central

    Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior. PMID:24205238

  2. A 3D-video-based computerized analysis of social and sexual interactions in rats.

    PubMed

    Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior. PMID:24205238

  3. Quantitative Analysis and Modeling of 3-D TSV-Based Power Delivery Architectures

    NASA Astrophysics Data System (ADS)

    He, Huanyu

    As 3-D technology enters the commercial production stage, it is critical to understand different 3-D power delivery architectures on the stacked ICs and packages with through-silicon vias (TSVs). Appropriate design, modeling, analysis, and optimization approaches of the 3-D power delivery system are of foremost significance and great practical interest to the semiconductor industry in general. Based on fundamental physics of 3-D integration components, the objective of this thesis work is to quantitatively analyze the power delivery for 3D-IC systems, develop appropriate physics-based models and simulation approaches, understand the key issues, and provide potential solutions for design of 3D-IC power delivery architectures. In this work, a hybrid simulation approach is adopted as the major approach along with analytical method to examine 3-D power networks. Combining electromagnetic (EM) tools and circuit simulators, the hybrid approach is able to analyze and model micrometer-scale components as well as centimeter-scale power delivery system with high accuracy and efficiency. The parasitic elements of the components on the power delivery can be precisely modeled by full-wave EM solvers. Stack-up circuit models for the 3-D power delivery networks (PDNs) are constructed through a partition and assembly method. With the efficiency advantage of the SPICE circuit simulation, the overall 3-D system power performance can be analyzed and the 3-D power delivery architectures can be evaluated in a short computing time. The major power delivery issues are the voltage drop (IR drop) and voltage noise. With a baseline of 3-D power delivery architecture, the on-chip PDNs of TSV-based chip stacks are modeled and analyzed for the IR drop and AC noise. The basic design factors are evaluated using the hybrid approach, such as the number of stacked chips, the number of TSVs, and the TSV arrangement. Analytical formulas are also developed to evaluate the IR drop in 3-D chip stack in

  4. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    SciTech Connect

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  5. Fast and Precise 3D Fluorophore Localization based on Gradient Fitting

    NASA Astrophysics Data System (ADS)

    Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang

    2015-09-01

    Astigmatism imaging approach has been widely used to encode the fluorophore’s 3D position in single-particle tracking and super-resolution localization microscopy. Here, we present a new high-speed localization algorithm based on gradient fitting to precisely decode the 3D subpixel position of the fluorophore. This algebraic algorithm determines the center of the fluorescent emitter by finding the position with the best-fit gradient direction distribution to the measured point spread function (PSF), and can retrieve the 3D subpixel position of the fluorophore in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields localization precision comparable to the traditional iterative Gaussian function fitting (GF) based method, while exhibiting over two orders of magnitude faster execution speed. Our algorithm is a promising high-speed analysis method for 3D particle tracking and super-resolution localization microscopy.
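
    The published algorithm is specified analytically by the authors; the sketch below only illustrates the closely related, generic idea in 2-D: for a radially symmetric spot, every image gradient points through the centre, so the centre is the least-squares intersection of all gradient lines, obtained with a single linear solve and no iteration. The synthetic spot, mask threshold, and weighting are assumptions, not the paper's recipe.

```python
# Generic 2-D sketch of the non-iterative gradient-convergence idea (related to,
# but not identical with, the authors' gradient-fitting algorithm).
import numpy as np

def gradient_centre(img: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(img.astype(float))         # gradients along rows (y) and cols (x)
    mag = np.hypot(gx, gy)
    mask = mag > 0.1 * mag.max()                    # ignore weak background gradients
    ys, xs = np.nonzero(mask)
    nxs, nys = gx[mask] / mag[mask], gy[mask] / mag[mask]

    A = np.zeros((2, 2))
    b = np.zeros(2)
    for x, y, nx, ny, w in zip(xs, ys, nxs, nys, mag[mask]):
        # Projector onto the direction perpendicular to this gradient line.
        P = np.eye(2) - np.outer([nx, ny], [nx, ny])
        A += w * P
        b += w * (P @ np.array([x, y], dtype=float))
    return np.linalg.solve(A, b)                    # (x_centre, y_centre)

# Synthetic Gaussian spot with sub-pixel centre at (17.3, 14.6).
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((xx - 17.3) ** 2 + (yy - 14.6) ** 2) / (2 * 2.0 ** 2))
print(np.round(gradient_centre(spot), 2))           # ~ [17.3, 14.6]
```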

  6. Fast and Precise 3D Fluorophore Localization based on Gradient Fitting

    PubMed Central

    Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang

    2015-01-01

    Astigmatism imaging approach has been widely used to encode the fluorophore’s 3D position in single-particle tracking and super-resolution localization microscopy. Here, we present a new high-speed localization algorithm based on gradient fitting to precisely decode the 3D subpixel position of the fluorophore. This algebraic algorithm determines the center of the fluorescent emitter by finding the position with the best-fit gradient direction distribution to the measured point spread function (PSF), and can retrieve the 3D subpixel position of the fluorophore in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields localization precision comparable to the traditional iterative Gaussian function fitting (GF) based method, while exhibiting over two orders of magnitude faster execution speed. Our algorithm is a promising high-speed analysis method for 3D particle tracking and super-resolution localization microscopy. PMID:26390959

  7. 3D microporous base-functionalized covalent organic frameworks for size-selective catalysis.

    PubMed

    Fang, Qianrong; Gu, Shuang; Zheng, Jie; Zhuang, Zhongbin; Qiu, Shilun; Yan, Yushan

    2014-03-10

    The design and synthesis of 3D covalent organic frameworks (COFs) have been considered a challenge, and the demonstrated applications of 3D COFs have so far been limited to gas adsorption. Herein we describe the design and synthesis of two new 3D microporous base-functionalized COFs, termed BF-COF-1 and BF-COF-2, by the use of a tetrahedral alkyl amine, 1,3,5,7-tetraaminoadamantane (TAA), combined with 1,3,5-triformylbenzene (TFB) or triformylphloroglucinol (TFP). As catalysts, both BF-COFs showed remarkable conversion (96% for BF-COF-1 and 98% for BF-COF-2), high size selectivity, and good recyclability in base-catalyzed Knoevenagel condensation reactions. This study suggests that porous functionalized 3D COFs could be a promising new class of shape-selective catalysts. PMID:24604810

  8. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  9. 3D face reconstruction from limited images based on differential evolution

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-09-01

    3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for the 3D face synthesis and reconstruction by using a simple and efficient global optimizer. A 3D-2D matching algorithm which employs the integration of the 3D morphable model (3DMM) and the differential evolution (DE) algorithm is addressed. In 3DMM, the estimation process of fitting shape and texture information into 2D images is considered as the problem of searching for the global minimum in a high dimensional feature space, in which optimization is apt to have local convergence. Unlike the traditional scheme used in 3DMM, DE appears to be robust against stagnation in local minima and sensitiveness to initial values in face reconstruction. Benefitting from DE's successful performance, 3D face models can be created based on a single 2D image with respect to various illuminating and pose contexts. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance. The validation process shows that there is only an insignificant difference between the input image and the 2D face image projected by the 3D model.
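
    The optimization component can be illustrated in isolation: SciPy's differential evolution searching a low-dimensional parameter space that minimizes a 2-D reprojection error. The toy objective below (fitting the translation and scale of a projected point set) stands in for the far higher-dimensional 3DMM shape/texture fitting described in the record; all data and bounds are invented for the example.

```python
# Isolated illustration of the optimiser only: differential evolution minimising
# a 2-D reprojection error over a small parameter vector. The toy objective
# stands in for the much higher-dimensional 3DMM fitting problem.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
model_pts = rng.normal(size=(50, 2))                      # "projected model" points
true_params = np.array([1.4, 0.3, -0.2])                  # scale, tx, ty
observed = (true_params[0] * model_pts + true_params[1:]
            + 0.01 * rng.normal(size=(50, 2)))            # noisy "image" observations

def reprojection_error(params: np.ndarray) -> float:
    scale, tx, ty = params
    projected = scale * model_pts + np.array([tx, ty])
    return float(np.mean(np.sum((projected - observed) ** 2, axis=1)))

result = differential_evolution(reprojection_error,
                                bounds=[(0.1, 3.0), (-1.0, 1.0), (-1.0, 1.0)],
                                seed=0, tol=1e-8)
print(np.round(result.x, 3))                              # ~ [1.4, 0.3, -0.2]
```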

  10. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  11. Survey of projection-based immersive displays

    NASA Astrophysics Data System (ADS)

    Wright, Dan

    2000-05-01

    Projection-based immersive displays are rapidly becoming the visualization system of choice for applications requiring the comprehension of complex datasets and the collaborative sharing of insights. The wide variety of display configurations can be grouped into five categories: benches, flat-screen walls, curved-screen theaters, concave-screen domes and spatially-immersive rooms. Each have their strengths and weaknesses with the appropriateness of each dependent on one's application and budget. The paper outlines the components common to all projection-based displays and describes the characteristics of each particular category. Key image metrics, implementation considerations and immersive display trends are also considered.

  12. 3-D statistical cancer atlas-based targeting of prostate biopsy using ultrasound image guidance

    NASA Astrophysics Data System (ADS)

    Narayanan, Ramkrishnan; Shen, Dinggang; Davatzikos, Christos A.; Crawford, E. David; Barqawi, Albaha; Werahera, Priya; Kumar, Dinesh; Suri, Jasjit S.

    2008-03-01

    Prostate cancer is a multifocal disease and lesions are not distributed uniformly within the gland. Several biopsy protocols concerning spatially specific targeting have been reported in the urology literature. Recently a statistical cancer atlas of the prostate was constructed, providing voxelwise probabilities of cancer in the prostate. Additionally, an optimized set of biopsy sites was computed, with 94-96% detection accuracy reported using only 6-7 needles. Here we discuss the warping of this atlas to segmented side-fire ultrasound images of the patient's prostate. A shape model was used to speed up registration. The model was trained off-line from over 38 expert-segmented subjects. This training yielded as few as 15-20 degrees of freedom that were optimized to warp the atlas surface to the patient's ultrasound image, followed by elastic interpolation of the 3-D atlas. As a result the atlas is completely mapped to the patient's prostate anatomy along with optimal predetermined needle locations for biopsy. These do not preclude the use of additional biopsies if desired. A color overlay of the atlas is also displayed on the ultrasound image showing high-cancer zones within the prostate. Finally, current biopsy locations are saved in the atlas space and may be used to update the atlas based on the pathology report. In addition to the optimal atlas plan, previous biopsy locations and alternate plans can also be stored in the atlas space and warped to the patient with no additional time overhead.

  13. Application to monitoring of tailings dam based on 3D laser scanning technology

    NASA Astrophysics Data System (ADS)

    Ren, Fang; Zhang, Aiwu

    2011-06-01

    This paper presents a new method for monitoring tailings dams based on 3D laser scanning technology and gives the workflow for acquiring and processing tailings dam data. Taking measured data as an example, the authors analyze the dam deformation by generating the TIN, DEM and curvature graphs, and show that global monitoring of a tailings dam using 3D laser scanning technology is feasible in both theory and method.

  14. Graphene Oxide-Based Electrode Inks for 3D-Printed Lithium-Ion Batteries.

    PubMed

    Fu, Kun; Wang, Yibo; Yan, Chaoyi; Yao, Yonggang; Chen, Yanan; Dai, Jiaqi; Lacey, Steven; Wang, Yanbin; Wan, Jiayu; Li, Tian; Wang, Zhengyang; Xu, Yue; Hu, Liangbing

    2016-04-01

    All-component 3D-printed lithium-ion batteries are fabricated by printing graphene-oxide-based composite inks and solid-state gel polymer electrolyte. An entirely 3D-printed full cell features a high electrode mass loading of 18 mg cm(-2) , which is normalized to the overall area of the battery. This all-component printing can be extended to the fabrication of multidimensional/multiscale complex-structures of more energy-storage devices. PMID:26833897

  15. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasing importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.
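
    The core of the 2-D/3-D registration step above is finding the 3-D motion that best reprojects the reconstructed catheter model onto the detected catheter in both fluoroscopic views. The sketch below illustrates this with a translation-only toy problem; the projection matrices, catheter model points and observations are synthetic placeholders, not data or code from the paper.

      import numpy as np
      from scipy.optimize import least_squares

      def project(P, X):
          """Project 3-D points X (N, 3) with a 3x4 projection matrix P -> (N, 2)."""
          Xh = np.hstack([X, np.ones((len(X), 1))])
          x = (P @ Xh.T).T
          return x[:, :2] / x[:, 2:3]

      def residuals(t, model, P_a, P_b, obs_a, obs_b):
          """Reprojection residuals of the translated model in both views."""
          X = model + t
          return np.concatenate([(project(P_a, X) - obs_a).ravel(),
                                 (project(P_b, X) - obs_b).ravel()])

      # Synthetic bi-plane setup: two calibrated views and a small catheter model.
      model = np.array([[0., 0., 0.], [5., 0., 0.], [5., 5., 0.], [0., 5., 0.]])
      K = np.array([[1000., 0., 256.], [0., 1000., 256.], [0., 0., 1.]])
      P_a = K @ np.hstack([np.eye(3), [[0.], [0.], [500.]]])
      R_b = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])   # 90 deg about y
      P_b = K @ np.hstack([R_b, [[0.], [0.], [500.]]])
      true_t = np.array([2.0, -1.5, 3.0])                           # "respiratory" shift
      obs_a, obs_b = project(P_a, model + true_t), project(P_b, model + true_t)

      fit = least_squares(residuals, x0=np.zeros(3), args=(model, P_a, P_b, obs_a, obs_b))
      print(fit.x)                                                  # recovers true_t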

  16. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on the respiratory phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time during tracking is 5.4 ms.

  17. Current status of 3D EPID-based in vivo dosimetry in The Netherlands Cancer Institute

    NASA Astrophysics Data System (ADS)

    Mijnheer, B.; Olaciregui-Ruiz, I.; Rozendaal, R.; Spreeuw, H.; van Herk, M.; Mans, A.

    2015-01-01

    3D in vivo dose verification using a-Si EPIDs is performed routinely in our institution for almost all RT treatments. The EPID-based 3D dose distribution is reconstructed using a back-projection algorithm and compared with the planned dose distribution using 3D gamma evaluation. Dose-reconstruction and gamma-evaluation software runs automatically, and deviations outside the alert criteria are immediately available and investigated, in combination with inspection of cone-beam CT scans. The implementation of our 3D EPID-based in vivo dosimetry approach was able to replace pre-treatment verification for more than 90% of the patient treatments. Clinically relevant deviations could be detected for approximately 1 out of 300 patient treatments (IMRT and VMAT). Most of these errors were patient-related anatomical changes or deviations from the routine clinical procedure, and would not have been detected by pre-treatment verification. Moreover, 3D EPID-based in vivo dose verification is a fast and accurate tool to assure the safe delivery of RT treatments. It provides clinically more useful information and is less time consuming than pre-treatment verification measurements. Automated 3D in vivo dosimetry is therefore a prerequisite for large-scale implementation of patient-specific quality assurance of RT treatments.
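
    The 3D gamma evaluation mentioned above scores each point of the reconstructed dose against the planned dose by combining a dose-difference criterion and a distance-to-agreement criterion. The brute-force sketch below computes a global gamma index on small toy grids; the grid size, spacing and 3 mm / 3% criteria are illustrative defaults, and a clinical implementation would use a much more efficient search.

      import numpy as np

      def gamma_index(dose_eval, dose_ref, spacing, dta=3.0, dd=0.03):
          """Brute-force global gamma for small 3-D dose grids on the same voxel grid.

          spacing : isotropic voxel size in mm
          dta     : distance-to-agreement criterion in mm
          dd      : dose-difference criterion as a fraction of the maximum reference dose
          """
          axes = [np.arange(n) * spacing for n in dose_ref.shape]
          pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
          d_eval = dose_eval.ravel()
          norm = dd * dose_ref.max()
          gamma = np.empty(dose_ref.size)
          for i, (p, d_r) in enumerate(zip(pts, dose_ref.ravel())):
              dist2 = np.sum((pts - p) ** 2, axis=1) / dta ** 2
              dose2 = (d_eval - d_r) ** 2 / norm ** 2
              gamma[i] = np.sqrt(np.min(dist2 + dose2))
          return gamma.reshape(dose_ref.shape)

      # Sanity check: identical dose distributions give gamma == 0 everywhere.
      ref = np.random.default_rng(0).random((8, 8, 8))
      print((gamma_index(ref.copy(), ref, spacing=2.5) == 0).all())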

  18. Photo-crosslinkable hydrogel-based 3D microfluidic culture device.

    PubMed

    Lee, Youlee; Lee, Jong Min; Bae, Pan-Kee; Chung, Il Yup; Chung, Bong Hyun; Chung, Bong Geun

    2015-04-01

    We developed a photo-crosslinkable hydrogel-based 3D microfluidic device to culture neural stem cells (NSCs) and tumors. The photo-crosslinkable gelatin methacrylate (GelMA) polymer was used as a physical barrier in the microfluidic device and collagen type I gel was employed to culture NSCs in a 3D manner. We demonstrated that the pore size was inversely proportional to the concentration of the GelMA hydrogels: the pore sizes of 5 and 25 w/v% GelMA hydrogels were 34 and 4 μm, respectively. We also observed that the pores in 5 w/v% GelMA hydrogels were elliptical, whereas those in 25 w/v% GelMA hydrogels were circular. To culture NSCs and tumors in the 3D microfluidic device, we investigated the molecular diffusion properties across the GelMA hydrogels, showing that 25 w/v% GelMA hydrogels inhibited molecular diffusion for 6 days in the 3D microfluidic device, whereas chemicals diffused through 5 w/v% GelMA hydrogels. Finally, we cultured NSCs and tumors in the hydrogel-based 3D microfluidic device, showing that 53-75% of NSCs differentiated into neurons while tumors were cultured in the collagen gels. Therefore, this photo-crosslinkable hydrogel-based 3D microfluidic culture device could be a potentially powerful tool for regenerative tissue engineering applications. PMID:25641332

  19. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We exploit the scale, translation and rotation invariance properties of these intrinsic 3D features to estimate a transform between the underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that were acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
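
    Once feature correspondences between the two images have been established and outliers removed, the rigid transform can be estimated in closed form from the matched feature centres. The sketch below shows the standard Kabsch/Procrustes solution on synthetic correspondences; it illustrates only this step, not the paper's full hybrid feature/intensity registration framework.

      import numpy as np

      def rigid_fit(src, dst):
          """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                               # proper rotation (det = +1)
          return R, c_dst - R @ c_src

      # Synthetic corresponding salient-region centres from two modalities.
      rng = np.random.default_rng(1)
      src = rng.normal(size=(12, 3))
      R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
      t_true = np.array([10.0, -4.0, 2.5])
      dst = src @ R_true.T + t_true
      R, t = rigid_fit(src, dst)
      print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True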

  20. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    NASA Astrophysics Data System (ADS)

    Olsen, K. B.

    2003-12-01

    Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require hundreds of Mbytes or Gbytes of storage capacity. For the same reason, getting access to a researcher's simulation output, for example for an earthquake engineer to perform site analysis or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based "community model" (websim3D) for the generation, storage, and dissemination of ground motion simulation results. Websim3D allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data are available. Then, by clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3D relies on PHP scripts for the dynamic plots of synthetic seismograms and soil profiles. Although not limited to a specific area, we illustrate the community model with simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  1. 3D Coronal Magnetic Field Reconstruction Based on Infrared Polarimetric Observations

    NASA Astrophysics Data System (ADS)

    Kramar, M.; Lin, H.; Tomczyk, S.

    2014-12-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal phenomena at all scales. Significant progress has recently been achieved here with the deployment of the Coronal Multichannel Polarimeter (CoMP) of the High Altitude Observatory (HAO). The instrument provides polarization measurements of the Fe XIII 10747 Å forbidden line emission. The observed polarization is the result of a line-of-sight (LOS) integration through a nonuniform temperature, density and magnetic field distribution. In order to resolve the LOS problem and utilize this type of data, a vector tomography method has been developed for 3D reconstruction of the coronal magnetic field. The 3D electron density and temperature, needed as additional input, have been reconstructed by a tomography method based on STEREO/EUVI data. We will present the 3D coronal magnetic field and the associated 3D curl B, density, and temperature resulting from these inversions.

  2. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing the detailed 3D structure of internal organs such as the gastric wall from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition each video sequence into groups, where each group consists of two successive frames (an image pair), and each pair contains an overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed using structure from motion (SFM). Second, a scheme based on SIFT features registers and stitches the obtained 3D point clouds by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Third, we select the most robust SIFT feature points as seed points and then obtain a dense point cloud from the sparse point cloud via a depth-testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.

  3. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  4. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  5. Robot navigation in cluttered 3-D environments using preference-based fuzzy behaviors.

    PubMed

    Shi, Dongqing; Collins, Emmanuel G; Dunlap, Damion

    2007-12-01

    Autonomous navigation systems for mobile robots have been successfully deployed for a wide range of planar ground-based tasks. However, very few counterparts of previous planar navigation systems were developed for 3-D motion, which is needed for both unmanned aerial and underwater vehicles. A novel fuzzy behavioral scheme for navigating an unmanned helicopter in cluttered 3-D spaces is developed. The 3-D navigation problem is decomposed into several identical 2-D navigation subproblems, each of which is solved by using preference-based fuzzy behaviors. Due to the shortcomings of vector summation during the fusion of the 2-D subproblems, instead of directly outputting steering subdirections by their own defuzzification processes, the intermediate preferences of the subproblems are fused to create a 3-D solution region, representing degrees of preference for the robot movement. A new defuzzification algorithm that steers the robot by finding the centroid of a 3-D convex region of maximum volume in the 3-D solution region is developed. A fuzzy speed-control system is also developed to ensure efficient and safe navigation. Substantial simulations have been carried out to demonstrate that the proposed algorithm can smoothly and effectively guide an unmanned helicopter through unknown and cluttered urban and forest environments. PMID:18179068

  6. Volumetric CT-based segmentation of NSCLC using 3D-Slicer

    PubMed Central

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H.; van Baardwijk, Angela; Fennessy, Fiona M.; Lewis, John H.; De Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J. W. L.

    2013-01-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using a competitive region-growing based algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared the 3D-Slicer volumes segmented by three independent observers, who each segmented the primary tumour of 20 NSCLC patients twice, with manual slice-by-slice delineations by five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered the “gold standard”. The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95%CI, 0.81–0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck. PMID:24346241

  7. Volumetric CT-based segmentation of NSCLC using 3D-Slicer.

    PubMed

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H; van Baardwijk, Angela; Fennessy, Fiona M; Lewis, John H; De Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J W L

    2013-01-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using a competitive region-growing based algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared the 3D-Slicer volumes segmented by three independent observers, who each segmented the primary tumour of 20 NSCLC patients twice, with manual slice-by-slice delineations by five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered the "gold standard". The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95%CI, 0.81-0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck. PMID:24346241

  8. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slice registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D-to-slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549

  9. Sketch on dynamic gesture tracking and analysis exploiting vision-based 3D interface

    NASA Astrophysics Data System (ADS)

    Woo, Woontack; Kim, Namgyu; Wong, Karen; Tadenuma, Makoto

    2000-12-01

    In this paper, we propose a vision-based 3D interface exploiting invisible 3D boxes, arranged in the personal space (i.e. the space reachable by the body without traveling), which allows robust yet simple dynamic gesture tracking and analysis without relying on complicated sensor-based motion tracking systems. Vision-based gesture tracking and analysis is still a challenging problem, even though we have witnessed rapid advances in computer vision over the last few decades. The proposed framework consists of three main parts: (1) object segmentation without a bluescreen and 3D box initialization with depth information, (2) movement tracking by observing how the body passes through the 3D boxes in the personal space, as sketched below, and (3) movement feature extraction based on Laban's Effort theory and movement analysis by mapping features to meaningful symbols using time-delay neural networks. Exploiting depth information from multiview images improves the performance of gesture analysis by reducing the errors introduced by simple 2D interfaces. In addition, the proposed box-based 3D interface lessens the difficulties both in tracking movement in 3D space and in extracting low-level features of the movement. Furthermore, the time-delay neural networks lessen the difficulties in movement analysis through training. Due to its simplicity and robustness, the framework will provide interactive systems, such as the ATR I-cubed Tangible Music System or the ATR Interactive Dance system, with an improved 3D interface. The proposed simple framework can also be extended to other applications requiring dynamic gesture tracking and analysis on the fly.
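
    A minimal sketch of the box-based tracking idea follows, assuming the personal space is divided into a fixed grid of invisible boxes and that a 3-D hand trajectory is already available from the vision front end; the grid origin, box size and trajectory are hypothetical stand-ins.

      import numpy as np

      origin = np.array([-0.6, -0.6, 0.2])      # metres, lower corner of the grid
      box_size = 0.3                            # edge length of each invisible box
      grid_shape = (4, 4, 4)                    # 4 x 4 x 4 boxes in the personal space

      def boxes_visited(trajectory):
          """Ordered list of box indices that a 3-D trajectory passes through."""
          idx = np.floor((trajectory - origin) / box_size).astype(int)
          inside = np.all((idx >= 0) & (idx < grid_shape), axis=1)
          visited, last = [], None
          for cell in (tuple(map(int, row)) for row in idx[inside]):
              if cell != last:                  # keep only transitions between boxes
                  visited.append(cell)
                  last = cell
          return visited

      # Hypothetical hand trajectory sweeping diagonally through the personal space.
      t = np.linspace(0, 1, 50)[:, None]
      trajectory = origin + t * np.array([1.2, 1.2, 1.2])
      print(boxes_visited(trajectory))          # [(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)]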

  10. Next generation 3-D OFDM based optical access networks using FEC under various system impairments

    NASA Astrophysics Data System (ADS)

    Kumar, Pravindra; Srivastava, Anand

    2013-12-01

    A passive optical network based on orthogonal frequency division multiplexing (OFDM-PON) exhibits excellent performance in optical access networks due to its greater resistance to fiber dispersion, high spectral efficiency and flexibility in both multiple services and dynamic bandwidth allocation. The major elements of a conventional OFDM communication system are a two-dimensional (2-D) signal mapper and a one-dimensional (1-D) inverse fast Fourier transform (IFFT). Three-dimensional (3-D) OFDM uses the concept of a 3-D signal mapper and a 2-D IFFT. With 3-D OFDM, the minimum Euclidean distance (MED), on which the bit error rate (BER) depends, is increased by 15.46% compared to 2-D OFDM, which improves BER performance. Forward error correction (FEC) coding is a technique in which redundancy is added to the original bit sequence to increase the reliability of the communication system. In this paper, we propose and analytically analyze a new PON architecture based on 3-D OFDM with convolutional coding and Viterbi decoding, and compare it with conventional 2-D OFDM under various system impairments for coherent optical orthogonal frequency division multiplexing (CO-OFDM) without any optical dispersion compensation. Analytical results show that at a BER of 10^-9, there are signal-to-noise ratio (SNR) gains of 2.7 dB, 3.8 dB and 9.3 dB with 3-D OFDM, 3-D OFDM combined with convolutional coding and Viterbi hard-decision decoding (CC-HDD), and 3-D OFDM combined with convolutional coding and Viterbi soft-decision decoding (CC-SDD), respectively, compared to 2-D OFDM-PON. At a BER of 10^-9, 3-D OFDM-PON with CC-HDD gives a 2.8 dB improvement in optical budget for both the upstream and downstream paths, and 3-D OFDM-PON combined with CC-SDD gives a 5.7 dB improvement, compared to the conventional OFDM-PON system.
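
    As a minimal illustration of the 2-D IFFT step described above, the sketch below maps bits onto a toy 3-D constellation (the eight vertices of a cube, i.e. 3 bits per symbol), lays each of the three coordinates out on a subcarrier grid and applies a 2-D IFFT; the constellation, grid size and the absence of any channel, FEC or optical front end are all illustrative simplifications of the proposed scheme.

      import numpy as np

      # Toy 3-D constellation: eight cube vertices, normalised to unit energy.
      cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                      dtype=float) / np.sqrt(3.0)

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, size=(16 * 16, 3))              # 256 symbols x 3 bits
      symbols = cube[bits[:, 0] * 4 + bits[:, 1] * 2 + bits[:, 2]]

      # Each constellation coordinate is laid out on a 16 x 16 grid and transformed
      # with a 2-D IFFT (the 2-D counterpart of the usual 1-D OFDM IFFT).
      grids = symbols.reshape(16, 16, 3)
      time_domain = np.fft.ifft2(grids, axes=(0, 1))

      # The receiver inverts the transform and recovers the constellation points.
      recovered = np.fft.fft2(time_domain, axes=(0, 1)).reshape(-1, 3)
      print(np.allclose(recovered.real, symbols))               # True in the noiseless case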

  11. High efficient methods of content-based 3D model retrieval

    NASA Astrophysics Data System (ADS)

    Wu, Yuanhao; Tian, Ling; Li, Chenggang

    2013-03-01

    Content-based 3D model retrieval is of great help in facilitating the reuse of existing designs and inspiring designers during conceptual design. However, a gap remains before it can be applied in industry because of its low time efficiency. This paper presents two new methods with high efficiency for building a content-based 3D model retrieval system. First, an improvement is made to the "Shape Distribution (D2)" algorithm, and a new algorithm named "Quick D2" is proposed. Four sample 3D mechanical models are used in an experiment to compare the time cost of the two algorithms. The result indicates that the time cost of Quick D2 is much lower than that of D2, while the descriptors extracted by the two algorithms are almost the same. Second, an expandable 3D model repository index method with high performance, namely the RBK index, is presented. On the basis of the RBK index, the search space is pruned effectively during the search process, leading to a speed-up of the whole system. The factors that influence the values of the key parameters of the RBK index are discussed and an experimental method to find the optimal values of the key parameters is given. Finally, "3D Searcher", a content-based 3D model retrieval system, is developed. By using the proposed methods, the time cost for the system to respond to one query online is reduced by 75% on average. The system has been implemented in a manufacturing enterprise, and practical query examples from a case of automobile rear axle design are also shown. The research method presented offers a new research perspective and can effectively improve content-based 3D model retrieval efficiency.
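
    For reference, the sketch below shows a simplified version of the D2 shape distribution that the Quick D2 algorithm above improves upon: the descriptor is a normalised histogram of distances between randomly sampled point pairs. Sampling from the vertex set instead of uniformly from the surface is the main simplification, and the models and parameters are toy assumptions.

      import numpy as np

      def d2_descriptor(vertices, n_pairs=10000, n_bins=64, rng=None):
          """Histogram of pairwise distances between randomly sampled points."""
          if rng is None:
              rng = np.random.default_rng(0)
          i = rng.integers(0, len(vertices), size=n_pairs)
          j = rng.integers(0, len(vertices), size=n_pairs)
          d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
          hist, _ = np.histogram(d, bins=n_bins, range=(0, d.max()), density=True)
          return hist / np.linalg.norm(hist)    # normalise so descriptors are comparable

      # Two descriptors can then be compared with, e.g., an L2 distance.
      model_a = np.random.default_rng(1).random((500, 3))
      model_b = np.random.default_rng(2).random((500, 3)) * 2.0
      print(np.linalg.norm(d2_descriptor(model_a) - d2_descriptor(model_b)))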

  12. 2D face database diversification based on 3D face modeling

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-05-01

    Pose and illumination are identified as major problems in 2D face recognition (FR). It has been theoretically proven that the more diversified the instances in the training phase, the more accurate and adaptable the FR system. Based on this common awareness, researchers have developed a large number of photographic face databases to meet the demand for training data. In this paper, we propose a novel scheme for 2D face database diversification based on 3D face modeling and computer graphics techniques, which supplies augmented variations in pose and illumination. Based on existing samples of the same individuals in the database, a synthesized 3D face model is employed to create composited 2D scenarios with additional lighting and pose variations. The new model is based on a 3D Morphable Model (3DMM) and a genetic-type optimization algorithm. The experimental results show that the complementary instances clearly increase the diversity of the existing database.

  13. 3D printed microfluidic circuitry via multijet-based additive manufacturing

    PubMed Central

    Sochol, R. D.; Sweet, E.; Glick, C. C.; Venkatesh, S.; Avetisyan, A.; Ekman, K. F.; Raulinaitis, A.; Tsai, A.; Wienkers, A.; Korner, K.; Hanson, K.; Long, A.; Hightower, B. J.; Slatton, G.; Burnett, D. C.; Massey, T. L.; Iwai, K.; Lee, L. P.; Pister, K. S. J.; Lin, L.

    2016-01-01

    The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or “three-dimensional (3D) printing” technologies – predominantly stereolithography – as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) – a layer-by-layer, multi-material inkjetting process – for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community. PMID:26725379

  14. 3D printed microfluidic circuitry via multijet-based additive manufacturing.

    PubMed

    Sochol, R D; Sweet, E; Glick, C C; Venkatesh, S; Avetisyan, A; Ekman, K F; Raulinaitis, A; Tsai, A; Wienkers, A; Korner, K; Hanson, K; Long, A; Hightower, B J; Slatton, G; Burnett, D C; Massey, T L; Iwai, K; Lee, L P; Pister, K S J; Lin, L

    2016-02-21

    The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or "three-dimensional (3D) printing" technologies - predominantly stereolithography - as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) - a layer-by-layer, multi-material inkjetting process - for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community. PMID:26725379

  15. 3D finite element analysis of porous Ti-based alloy prostheses.

    PubMed

    Mircheski, Ile; Gradišar, Marko

    2016-11-01

    In this paper, novel designs of porous acetabular cups are created and tested with 3D finite element analysis (FEA). The aim is to develop a porous acetabular cup with a low effective radial stiffness of the structure, close to the architectural and mechanical behavior of natural bone. For this research, 3D-scanner technology was used to obtain a 3D CAD model of the pelvic bone, 3D CAD software to create a porous acetabular cup, and 3D FEA software for virtual testing of the novel porous acetabular cup design. The results reveal that a porous acetabular cup made from Ti-based alloys with 60 ± 5% porosity has mechanical behavior and an effective radial stiffness (Young's modulus in the radial direction) that meet and exceed the required properties of natural bone. Virtual testing with 3D FEA of a novel porous design during the very early stages of the design and development of orthopedic implants enables a new or improved biomedical implant to be obtained in a relatively short time and at reduced cost. PMID:27015664

  16. Error analysis of a 3D imaging system based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Dai, Jie

    2013-12-01

    In the past few years, optical metrology has found numerous applications in scientific and commercial fields owing to its non-contact nature. One of the most popular methods is the measurement of 3D surfaces based on fringe projection techniques, because of the advantages of non-contact operation, full-field and fast acquisition, and automatic data processing. In surface profilometry using a digital light processing (DLP) projector, many factors affect the accuracy of the 3D measurement, yet no previous research provides a complete error analysis of such a 3D imaging system. This paper analyzes some possible error sources of a 3D imaging system, for example, the nonlinear response of the CCD camera and DLP projector, the sampling error of the sinusoidal fringe pattern, variation of ambient light, and marker extraction during calibration. These error sources are simulated in a software environment to demonstrate their effects on measurement, and possible compensation methods are proposed to obtain highly accurate shape data. Experiments were conducted to evaluate the effects of these error sources on 3D shape measurement. Experimental results and performance evaluation show that these errors have a significant effect on the measured 3D shape and that compensating for them is necessary for accurate measurement.
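
    As a small worked example of one error source listed above, the sketch below shows how a nonlinear (gamma-like) projector/camera response distorts four-step phase-shifted fringes and produces a systematic phase error; the gamma value and fringe parameters are illustrative assumptions.

      import numpy as np

      x = np.linspace(0, 4 * np.pi, 1000)           # true phase along one scan line
      A, B, gamma = 0.5, 0.4, 2.2                   # fringe offset, modulation, nonlinearity

      ideal = [A + B * np.cos(x + k * np.pi / 2) for k in range(4)]
      measured = [I ** gamma for I in ideal]        # nonlinear intensity response

      def four_step_phase(I):
          """Standard four-step phase retrieval: phi = atan2(I3 - I1, I0 - I2)."""
          return np.arctan2(I[3] - I[1], I[0] - I[2])

      # Wrapped difference between the retrieved and the true phase.
      phase_error = np.angle(np.exp(1j * (four_step_phase(measured) - x)))
      print(f"peak phase error from nonlinearity: {np.abs(phase_error).max():.3f} rad")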

  17. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
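
    The sharing step described above evaluates a polynomial over a prime field at each participant's index, so any sufficiently large subset of shares recovers the encoded coordinate by Lagrange interpolation. The sketch below shows this classical Shamir-style step for a single encoded value; the prime, threshold and participant count are illustrative, and the cover-model construction and reversible data hiding of the paper are not reproduced.

      import random

      p = 257                      # predefined prime
      k = 3                        # threshold: any k shares reconstruct the value
      secret = 113                 # one encoded point-coordinate value in [0, p - 1]

      coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]

      def share(x):
          """Evaluate the sharing polynomial at participant index x (mod p)."""
          return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

      shares = {x: share(x) for x in range(1, 6)}       # five participants

      def reconstruct(subset):
          """Lagrange interpolation at x = 0 over any k (or more) shares."""
          total = 0
          for xi, yi in subset.items():
              num = den = 1
              for xj in subset:
                  if xj != xi:
                      num = num * (-xj) % p
                      den = den * (xi - xj) % p
              total = (total + yi * num * pow(den, -1, p)) % p
          return total

      print(reconstruct({x: shares[x] for x in (1, 3, 5)}))   # prints 113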

  18. Performance Analysis of a Low-Cost Triangulation-Based 3D Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. The system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever arm, and object-space feature parameters was developed and the effectiveness of this calibration approach was explored.

  19. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications in areas such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, so converting monoscopic content to S3D becomes a viable alternative. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The method can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, and thus consumes less processing time without affecting the efficiency of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high speed. It is an automatic conversion scheme, so it directly gives the 3D video output without human intervention, and with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
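
    A minimal sketch of the depth cue described above follows: each foreground object is assigned a nearness value from the image row of its bottom-most pixel (objects whose lowest pixels sit lower in the frame are treated as closer), and a second view is synthesized by shifting pixels in proportion to that value. The segmentation labels, disparity range and rendering loop are hypothetical simplifications, not the paper's implementation.

      import numpy as np

      def nearness_from_bottom_rows(labels):
          """Assign each labelled object a nearness value in [0, 1] from its lowest pixel row."""
          h = labels.shape[0]
          near = np.zeros(labels.shape, dtype=float)         # background treated as far away
          for obj in np.unique(labels):
              if obj == 0:
                  continue
              bottom_row = np.where(labels == obj)[0].max()  # largest row index = lowest on screen
              near[labels == obj] = bottom_row / (h - 1)     # lower in the frame -> closer
          return near

      def render_right_view(frame, near, max_disparity=16):
          """Crude DIBR step: shift pixels left in proportion to their nearness."""
          h, w = near.shape
          right = np.zeros_like(frame)
          shift = (near * max_disparity).astype(int)
          for y in range(h):
              for x in range(w):
                  xr = x - shift[y, x]
                  if 0 <= xr < w:
                      right[y, xr] = frame[y, x]
          return right

      # Hypothetical 240 x 320 frame with one segmented foreground object.
      rng = np.random.default_rng(0)
      frame = rng.random((240, 320, 3))
      labels = np.zeros((240, 320), dtype=int)
      labels[120:200, 100:180] = 1
      right_view = render_right_view(frame, nearness_from_bottom_rows(labels))
      print(right_view.shape)                                # (240, 320, 3)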

  20. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display real objects in a 360-degree viewing zone.

  1. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  2. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2014-03-01

    This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in CT images and then estimates the prior of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector to localize the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and backgrounds using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually input nine principal types of inner organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.

  3. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  4. 3D model retrieval using probability density-based shape descriptors.

    PubMed

    Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis

    2009-06-01

    We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories. PMID:19372614
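
    A minimal sketch of the density-based description above: local surface features collected from a model are summarized by a kernel density estimate sampled on a fixed grid, and two models are compared through their sampled densities. SciPy's gaussian_kde is used here instead of the paper's KDE with a fast Gauss transform, and the 2-D toy features and evaluation grid are illustrative assumptions.

      import numpy as np
      from scipy.stats import gaussian_kde

      def density_descriptor(local_features, grid):
          """Sampled KDE of multivariate local surface features."""
          kde = gaussian_kde(local_features.T)   # gaussian_kde expects (dims, n_samples)
          return kde(grid.T)                     # density sampled at fixed grid points

      # Hypothetical 2-D local features (e.g. a curvature-like pair per surface point).
      rng = np.random.default_rng(0)
      features_a = rng.normal(0.0, 1.0, size=(2000, 2))
      features_b = rng.normal(0.5, 1.2, size=(2000, 2))

      # A common evaluation grid makes descriptors of different models comparable.
      g = np.stack(np.meshgrid(np.linspace(-3, 3, 16),
                               np.linspace(-3, 3, 16)), axis=-1).reshape(-1, 2)
      d_a = density_descriptor(features_a, g)
      d_b = density_descriptor(features_b, g)
      print(np.linalg.norm(d_a - d_b))           # L2 distance between the two descriptors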

  5. Shape-based 3D vascular tree extraction for perforator flaps

    NASA Astrophysics Data System (ADS)

    Wen, Quan; Gao, Jean

    2005-04-01

    Perforator flaps have been increasingly used in the past few years for trauma and reconstructive surgical cases. With thinned perforator flaps, greater survivability and decreased donor site morbidity have been reported. Knowledge of the 3D vascular tree provides insight into the dissection region, vascular territory, and fascia levels. This paper presents a scheme for shape-based 3D vascular tree reconstruction of perforator flaps for plastic surgery planning, which overcomes the deficiencies of existing shape-based interpolation methods by applying rotation and 3D repairing. The scheme is able to restore broken parts of the perforator vascular tree by using a probability-based adaptive connection point search (PACPS) algorithm with minimum human intervention. Experimental results evaluated on both synthetic data and 39 harvested cadaver perforator flaps show the promise and potential of the proposed scheme for plastic surgery planning.

  6. A research of 3D gravity inversion based on the recovery of sparse underdetermined linear equations

    NASA Astrophysics Data System (ADS)

    Zhaohai, M.

    2014-12-01

    The properties of gravity data make the problem of multiple solutions difficult to resolve. There are two main types of 3D gravity inversion methods. The first is based on improving the instability of the sensitivity matrix, addressing the problems of multiple solutions and instability in 3D gravity inversion. The second incorporates a weighting function into the 3D gravity inversion iteration; through repeated iteration, the density values and the weighting function are updated to resolve the multiple solutions and instability of 3D gravity data inversion. Thanks to the sparse nature of the solutions of 3D gravity inversion, the problem can be transformed into a sparse system of equations, and solving this sparse system yields good 3D gravity inversion results. The main principle is based on the zero norm of the sparse solution of the equations, where the zero norm targets the nonzero entries of the sparse solution. The method adopted in this article follows the same principle as the zero norm but, conversely, targets the zero-valued entries: by approximating the zero norm with a Gaussian fitting function, the solution can be found using a regularization principle. Moreover, this method has been shown mathematically to have a certain resistance to random noise and to be more suitable than the zero norm for the solution of geophysical data. The 3D gravity inversion adopted in this article can identify the density distribution characteristics of anomalous bodies well, and it can also recognize the spatial position of the anomalous distribution very well. We take advantage of upper and lower density limits in a penalty function to keep the residual density of each rectangular cell within a reasonable range. Finally, this 3D gravity inversion is applied to a variety of combination model tests, such as a single straight three-dimensional model, adjacent straight three-dimensional models and Y three
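
    The abstract above hinges on recovering a sparse density model from an underdetermined linear system. As a generic illustration of that recovery problem only (not the zero-norm / Gaussian-fitting scheme of the paper), the sketch below applies iterative soft thresholding (ISTA) to a synthetic underdetermined system with a sparse true model.

      import numpy as np

      def ista(A, b, lam=0.05, n_iter=500):
          """ISTA for min 0.5 * ||Ax - b||^2 + lam * ||x||_1 (generic sparse recovery)."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / (largest singular value)^2
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = x - step * A.T @ (A @ x - b)              # gradient step on the data term
              x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
          return x

      # Toy underdetermined system: 40 "observations", 120 model cells, 5 active cells.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(40, 120))
      x_true = np.zeros(120)
      x_true[rng.choice(120, 5, replace=False)] = 3.0 * rng.normal(size=5)
      b = A @ x_true
      x_hat = ista(A, b)
      print(np.sort(np.argsort(-np.abs(x_hat))[:5]))        # indices of the dominant cells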

  7. Manifold Based Optimization for Single-Cell 3D Genome Reconstruction

    PubMed Central

    Collas, Philippe

    2015-01-01

    The three-dimensional (3D) structure of the genome is important for orchestration of gene expression and cell differentiation. While mapping genomes in 3D has long been elusive, recent adaptations of high-throughput sequencing to chromosome conformation capture (3C) techniques allow genome-wide structural characterization for the first time. However, reconstruction of "consensus" 3D genomes from 3C-based data is a challenging problem, since the data are aggregated over millions of cells. Recent single-cell adaptations of the 3C technique allow non-aggregated assessment of genome structure, but the data suffer from sparse and noisy interaction sampling. We present a manifold based optimization (MBO) approach for the reconstruction of 3D genome structure from chromosomal contact data. We show that MBO is able to reconstruct 3D structures based on the chromosomal contacts, imposing fewer structural violations than comparable methods. Additionally, MBO is suitable for efficient high-throughput reconstruction of large systems, such as entire genomes, allowing for comparative studies of genomic structure across cell lines and different species. PMID:26262780
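
    As a simple baseline for the reconstruction problem described above, the sketch below converts a contact matrix into distances with an assumed power law and embeds the loci with classical multidimensional scaling. This is not the MBO method of the paper; the contact-to-distance exponent and the toy helix data are illustrative assumptions.

    ```python
    import numpy as np

    def reconstruct_from_contacts(contacts, alpha=1.0):
        """Embed genomic loci in 3-D from a (single-cell) contact matrix.

        Converts contact counts to distances with d ~ c**(-alpha) and applies
        classical multidimensional scaling. A generic baseline, not MBO."""
        c = np.asarray(contacts, float)
        with np.errstate(divide='ignore'):
            dist = np.where(c > 0, c ** (-alpha), np.nan)
        # Fill missing distances with the largest observed one (sparse data)
        dist = np.nan_to_num(dist, nan=np.nanmax(dist))
        np.fill_diagonal(dist, 0.0)
        # Classical MDS: double-center the squared distance matrix
        n = dist.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (dist ** 2) @ J
        vals, vecs = np.linalg.eigh(B)
        top = np.argsort(vals)[::-1][:3]
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

    # Toy contact matrix derived from a helix-shaped "chromosome"
    t = np.linspace(0, 4 * np.pi, 60)
    true_xyz = np.c_[np.cos(t), np.sin(t), 0.1 * t]
    d_true = np.linalg.norm(true_xyz[:, None] - true_xyz[None, :], axis=-1)
    contacts = 1.0 / (d_true + 1e-3)
    coords = reconstruct_from_contacts(contacts)
    print(coords.shape)
    ```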

  8. Development of 3-D fracture network visualization software based on graphical user interface

    NASA Astrophysics Data System (ADS)

    Young-Hwan, Noh; Jeong-Gi, Um; Yosoon, Choi; Myong-Ho, Park; Jaeyoung, Choi

    2013-04-01

    A sound understanding of the structural characteristics of fractured rock masses is important in designing and maintaining earth structures because their strength, deformability, and hydraulic behavior depend mainly on the characteristics of discontinuity network structures. Despite considerable progress in understanding the structural characteristics of rock masses, the complexity of discontinuity patterns has prevented satisfactory analysis based on a 3-D rock mass visualization model. This research presents the results of studies performed to develop 3-D rock mass visualization for analyzing the mechanical and hydraulic behavior of fractured rock masses. General and particular solutions of the non-linear equations of disk-shaped fractures have been derived to calculate lines of intersection and equivalent pipes. In addition, the program modules DISK3D, FNTWK3D, BOUNDARY and BDM (borehole data management) have been developed to visualize the fracture network and the corresponding equivalent pipes for a DFN-based fluid flow model. The developed software for the 3-D fractured rock mass visualization model, based on MS Visual Studio, can be used to characterize rock mass geometry and network systems effectively. The results obtained in this study will be refined and then combined for use as a tool for assessing geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses. Acknowledgements. This work was supported by the 2011 Energy Efficiency and Resources Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant.
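
    To make the geometric step above concrete, here is a minimal sketch of computing the trace shared by two disk-shaped fractures: the intersection line of the two fracture planes is found first and then clipped to each disk. The fracture parameters in the example are made up; this illustrates the geometry only and is not the derivation used by DISK3D.

    ```python
    import numpy as np

    def disk_intersection_segment(n1, c1, r1, n2, c2, r2):
        """Endpoints of the trace shared by two disk-shaped fractures.

        Each disk has unit normal n, center c and radius r. Returns a pair of
        3-D points, or None if the disks (or their planes) do not intersect.
        """
        n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
        c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
        u = np.cross(n1, n2)
        if np.linalg.norm(u) < 1e-12:            # parallel planes
            return None
        u = u / np.linalg.norm(u)
        # A point lying on both planes, made unique by the extra condition u.x = 0
        M = np.vstack([n1, n2, u])
        p = np.linalg.solve(M, [n1 @ c1, n2 @ c2, 0.0])

        def chord(center, radius):
            # Interval of line parameters t for which p + t*u lies inside the disk
            t0 = (center - p) @ u
            gap = np.linalg.norm(p + t0 * u - center)
            if gap >= radius:
                return None
            half = np.sqrt(radius ** 2 - gap ** 2)
            return t0 - half, t0 + half

        a, b = chord(c1, r1), chord(c2, r2)
        if a is None or b is None:
            return None
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (p + lo * u, p + hi * u) if lo < hi else None

    # Example: a horizontal disk and a 45-degree tilted disk sharing a trace
    tilted = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
    print(disk_intersection_segment([0, 0, 1], [0, 0, 0], 2.0, tilted, [0, 0, 0], 2.0))
    ```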

  9. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  10. Structuring Narrative in 3D Digital Game-Based Learning Environments to Support Second Language Acquisition

    ERIC Educational Resources Information Center

    Neville, David O.

    2010-01-01

    The essay is a conceptual analysis from an instructional design perspective exploring the feasibility of using three-dimensional digital game-based learning (3D-DGBL) environments to assist in second language acquisition (SLA). It examines the shared characteristics of narrative within theories of situated cognition, context-based approaches to…

  11. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    ERIC Educational Resources Information Center

    Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

    The advancement of game-based learning has encouraged many related studies, such that students could better learn curriculum by 3-dimension virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning…

  12. Study on Construction of 3d Building Based on Uav Images

    NASA Astrophysics Data System (ADS)

    Xie, F.; Lin, Z.; Gui, D.; Lin, H.

    2012-07-01

    Based on the characteristics of Unmanned Aerial Vehicle (UAV) systems for low-altitude aerial photogrammetry and the need for three-dimensional (3D) city modeling, a method of fast 3D building modeling using images from a UAV carrying a four-combined camera is studied. Firstly, by contrasting and analyzing the mosaic structures of existing four-combined cameras, a new type of four-combined camera with a special design of overlapping images is designed, which improves the self-calibration function to achieve high-precision imaging by automatically eliminating the errors of mechanical deformation and the time lag of every exposure, and further reduces the weight of the imaging system. Secondly, multi-angle images, including vertical and oblique images obtained by the UAV system, are used for detailed measurement of building surfaces and texture extraction. Finally, two tests, aerial photography with large-scale mapping of 1:1000 and 3D building construction at Shandong University of Science and Technology, and aerial photography with large-scale mapping of 1:500 and 3D building construction at Henan University of Urban Construction, validate the construction of 3D buildings based on combined wide-angle camera images from the UAV system. It is demonstrated that the UAV system for low-altitude aerial photogrammetry can be used for 3D building production, and the technical solution in this paper offers a new and fast approach to the 3D representation of the city landscape, fine modeling and visualization.

  13. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    PubMed

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by some technique like concatenation. However, some implicit structural or local contextual information may be lost in this transformation. According to the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixel, and a Hausdorff Distance of 20.4064 ± 4.3855, respectively. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM. PMID:27277277
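
    For readers unfamiliar with HOSVD, the sketch below shows the basic tensor decomposition step (mode-n unfoldings, per-mode SVD, multilinear projection to a core) on a toy volume. It illustrates only the decomposition itself, not the AAM fitting pipeline of the paper; the ranks and the random volume are arbitrary.

    ```python
    import numpy as np

    def unfold(tensor, mode):
        """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def hosvd(tensor, ranks):
        """Truncated higher-order SVD of a 3-D array.

        Returns the core tensor and the per-mode factor matrices such that
        tensor is approximated by the multilinear product core x1 U0 x2 U1 x3 U2."""
        factors = []
        for mode, r in enumerate(ranks):
            u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
            factors.append(u[:, :r])
        core = tensor
        for mode, u in enumerate(factors):
            core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    # Toy lung-volume-like example (kept tiny for illustration)
    vol = np.random.default_rng(0).normal(size=(16, 16, 8))
    core, factors = hosvd(vol, ranks=(8, 8, 4))
    recon = core
    for mode, u in enumerate(factors):
        recon = np.moveaxis(np.tensordot(u, np.moveaxis(recon, mode, 0), axes=1), 0, mode)
    print("relative reconstruction error:",
          float(np.linalg.norm(vol - recon) / np.linalg.norm(vol)))
    ```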

  14. SOFI-based 3D superresolution sectioning with a widefield microscope

    PubMed Central

    Dertinger, Thomas; Xu, Jianmin; Naini, Omeed Foroutan; Vogel, Robert; Weiss, Shimon

    2013-01-01

    Background Fluorescence-based biological imaging has been revolutionized by the recent introduction of superresolution microscopy methods. 3D superresolution microscopy, however, remains a challenge as its implementation by existing superresolution methods is non-trivial. Methods Here we demonstrate a facile and straightforward 3D superresolution imaging and sectioning of the cytoskeletal network of a fixed cell using superresolution optical fluctuation imaging (SOFI) performed on a conventional lamp-based widefield microscope. Results and Conclusion SOFI’s inherent sectioning capability effectively transforms a conventional widefield microscope into a superresolution ‘confocal widefield’ microscope. PMID:24163789

  15. 3D printing of weft knitted textile based structures by selective laser sintering of nylon powder

    NASA Astrophysics Data System (ADS)

    Beecroft, M.

    2016-07-01

    3D printing is a form of additive manufacturing whereby objects are created by building up layers of material. The selective laser sintering (SLS) process uses a laser beam to sinter powdered material to create objects. This paper builds upon previous research into 3D-printed textile-based materials, exploring the use of SLS with nylon powder to create flexible weft-knitted structures. The results show the potential to print flexible textile-based structures that exhibit the properties of traditional knitted textile structures along with the mechanical properties of the material used, whilst the challenges regarding fineness of printing resolution are described. The conclusion highlights the potential future development and application of such pieces.

  16. Carbon nanotube based 3-D matrix for enabling three-dimensional nano-magneto-electronics [corrected].

    PubMed

    Hong, Jeongmin; Stefanescu, Eugenia; Liang, Ping; Joshi, Nikhil; Xue, Song; Litvinov, Dmitri; Khizroev, Sakhrat

    2012-01-01

    This letter describes the use of vertically aligned carbon nanotube (CNT) based arrays with cobalt (Co) nanoparticles of an estimated 2-nm thickness deposited inside individual tubes to explore the possibility of using these unique templates for ultra-high-density, low-energy 3-D nano-magneto-electronic devices. The presence of oriented 2-nm thick Co layers within individual nanotubes in the CNT-based 3-D matrix is confirmed through VSM measurements as well as energy-dispersive X-ray spectroscopy (EDS). PMID:22808192

  17. Laser nanostructuring 3-D bioconstruction based on carbon nanotubes in a water matrix of albumin

    NASA Astrophysics Data System (ADS)

    Gerasimenko, Alexander Y.; Ichkitidze, Levan P.; Podgaetsky, Vitaly M.; Savelyev, Mikhail S.; Selishchev, Sergey V.

    2016-04-01

    3-D bioconstructions were created by evaporating an aqueous albumin solution containing carbon nanotubes (CNTs) under continuous and pulsed femtosecond laser radiation. It was determined that the volume structure of the samples created by the femtosecond radiation has more cavities than that created by the continuous radiation. The average diameter for multi-walled carbon nanotube (MWCNT) samples was almost two times higher (35-40 nm) than for single-walled carbon nanotube (SWCNT) samples (20-30 nm). The most homogeneous 3-D bioconstruction was formed from MWCNTs by continuous laser radiation. The hardness of such samples reached up to 370 MPa at the nanoscale. The high strength and resistance of the 3-D bioconstructions produced by laser irradiation depend on the volumetric nanotube scaffold that forms inside them. The scaffold was formed by the electric field of the directed laser irradiation. The covalent bond energy between the carbon of the nanotube and the oxygen of the bovine serum albumin amino acid residue amounts to 580 kJ/mol. The 3-D bioconstructions based on MWCNTs and SWCNTs become overgrown with cells (fibroblasts) over the course of 72 hours. The samples based on both types of CNTs are not toxic to the cells and do not change their normal composition and structure. Thus the 3-D bioconstructions nanostructured by pulsed and continuous laser radiation can be applied as implant materials for the recovery of the connective tissues of the living body.

  18. Relevance of PEG in PLA-based blends for tissue engineering 3D-printed scaffolds.

    PubMed

    Serra, Tiziano; Ortiz-Hernandez, Monica; Engel, Elisabeth; Planell, Josep A; Navarro, Melba

    2014-05-01

    Achieving high quality 3D-printed structures requires establishing the right printing conditions. Finding processing conditions that satisfy both the fabrication process and the final required scaffold properties is crucial. This work stresses the importance of studying the outcome of the plasticizing effect of PEG on PLA-based blends used for the fabrication of 3D-direct-printed scaffolds for tissue engineering applications. For this, PLA/PEG blends with 5, 10 and 20% (w/w) of PEG and PLA/PEG/bioactive CaP glass composites were processed in the form of 3D rapid prototyping scaffolds. Surface analysis and differential scanning calorimetry revealed a rearrangement of polymer chains and a topography, wettability and elastic modulus increase of the studied surfaces as PEG was incorporated. Moreover, addition of 10 and 20% PEG led to non-uniform 3D structures with lower mechanical properties. In vitro degradation studies showed that the inclusion of PEG significantly accelerated the degradation rate of the material. Results indicated that the presence of PEG not only improves PLA processing but also leads to relevant surface, geometrical and structural changes including modulation of the degradation rate of PLA-based 3D printed scaffolds. PMID:24656352

  19. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    PubMed

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions, and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time, but caused more critical structure injuries than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators, for large lesions, and for lesions near critical structures. PMID:27126243

  20. A web-based 3D medical image collaborative processing system with videoconference

    NASA Astrophysics Data System (ADS)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet remains one of the biggest challenges in supporting these activities. Consequently, we present a new application approach for web-based synchronized collaborative processing and visualization of 3D medical images. In addition, a web-based videoconference function is provided to enhance the performance of the whole system. All functions of the system are conveniently available in common web browsers, without any client installation. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating the good performance of our system.

  1. Reflected-light-source-based three-dimensional display with high brightness.

    PubMed

    Lv, Guo-Jiao; Wu, Fei; Zhao, Wu-Xiang; Fan, Jun; Zhao, Bai-Chuan; Wang, Qiong-Hua

    2016-05-01

    A reflected-light-source (RLS)-based 3D display is proposed. This display consists of an RLS and a 2D display panel. The 2D display panel is located in front of the RLS. The RLS consists of a light source, a light guide plate (LGP), and a reflection cavity. The light source and the LGP are located in the reflection cavity. Light from the light source can enter into the LGP and reflect continuously in the reflection cavity. The reflection cavity has a series of slits, and light can exit only from these slits. These slits can work as a postpositional parallax barrier, so when they modulate the parallax images on the 2D display, 3D images are formed. Different from the conventional 3D display based on a parallax barrier, this RLS has less optical loss, so it can provide higher brightness. A prototype of this display is developed. Experimental results show that this RLS-based 3D display can provide higher brightness than the conventional one. PMID:27140355

  2. Stereo-vision based 3D modeling for unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Jasiobedzki, Piotr

    2007-04-01

    Instant Scene Modeler (iSM) is a vision system for generating calibrated photo-realistic 3D models of unknown environments quickly using stereo image sequences. Equipped with iSM, Unmanned Ground Vehicles (UGVs) can capture stereo images and create 3D models to be sent back to the base station while they explore unknown environments. Rapid access to 3D models will increase the operator's situational awareness and allow better mission planning and execution, as the models can be visualized from different views and used for relative measurements. In current military operations of UGVs under urban warfare threats, the operator hand-sketches the environment from a live video feed. iSM eliminates the need for an additional operator as the 3D model is generated automatically. The photo-realism of the models enhances the situational awareness of the mission and the models can also be used for change detection. iSM has been tested on our autonomous vehicle to create photo-realistic 3D models while the rover traverses unknown environments. Moreover, a proof-of-concept iSM payload has been mounted on an iRobot PackBot with Wayfarer technology, which is equipped with autonomous urban reconnaissance capabilities. The Wayfarer PackBot UGV uses wheel odometry for localization and builds 2D occupancy grid maps from a laser sensor. While the UGV is following walls and avoiding obstacles, iSM captures and processes images to create photo-realistic 3D models. Experimental results show that iSM can complement the Wayfarer PackBot's autonomous navigation in two ways. First, the photo-realistic 3D models provide better situational awareness than 2D grid maps. Second, iSM also recovers the camera motion, known as visual odometry; as wheel odometry error grows over time, visual odometry can help improve localization.

  3. Ground and Structure Deformation 3d Modelling with a Tin Based Property Model

    NASA Astrophysics Data System (ADS)

    TIAN, T.; Zhang, J.; Jiang, W.

    2013-12-01

    With the development of three-dimensional (3D) modeling and visualization, 3D models are increasingly used to assist daily work in engineering surveying, in which prediction of the deformation field in strata and structures induced by underground construction is an essential part. In this research we developed a TIN (Triangulated Irregular Network) based property model for 3D visualization of the ground deformation field. By recording a deformation vector for each node, the new model can express the deformation both geometrically, by drawing each node at its displaced position, and as an attribute distribution, by drawing each node in a color corresponding to its deformation attribute. Compared with volume-model-based property models, the new model provides a more precise geometrical shape for structural objects. Furthermore, by recording only the deformation data of the 3D surfaces of interest, such as the ground surface or the underground excavation surface, the new model saves considerable space, which makes it possible to build deformation field models at a much larger scale. To construct deformation field models based on the TIN model, refinement of the network is needed to increase the number of nodes, which is necessary to express the deformation field at a given resolution. TIN model refinement is a process of sampling the 3D deformation field values at points on the TIN surface, for which we developed a self-adapting TIN refinement method. By setting an attribute-resolution parameter, this self-adapting method refines the input geometric TIN model by adding more vertices and triangles where the 3D deformation field changes faster. Compared with the even refinement method, the self-adapting method can generate a refined TIN model with about two thirds fewer nodes. Efficiency Comparison between Self-adapting Refinement Method and Even

  4. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    NASA Astrophysics Data System (ADS)

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then determined by maximizing a matching cost encoding the contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs can be achieved by the proposed registration approach, as an alternative to the labour-intensive manual registration process.

  5. 3D watershed-based segmentation of internal structures within MR brain images

    NASA Astrophysics Data System (ADS)

    Bueno, Gloria; Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

    2000-06-01

    In this paper an image-based method founded on mathematical morphology is presented in order to facilitate the segmentation of cerebral structures on 3D magnetic resonance images (MRIs). The segmentation is described as an immersion simulation, applied to the modified gradient image, modeled by a generated 3D region adjacency graph (RAG). The segmentation relies on two main processes: homotopy modification and contour decision. The first one is achieved by a marker extraction stage where homogeneous 3D regions are identified in order to attribute an influence zone only to relevant minima of the image. This stage uses contrasted regions from morphological reconstruction and labeled flat regions constrained by the RAG. The goal of the decision stage is to precisely locate the contours of regions detected by the marker extraction. This decision is performed by a 3D extension of the watershed transform. Upon completion of the segmentation, the outcome of the preceding process is presented to the user for manual selection of the structures of interest (SOI). Results of this approach are described and illustrated with examples of segmented 3D MRIs of the human head.
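
    The immersion-on-the-gradient idea can be sketched with off-the-shelf tools as shown below. This generic marker-controlled 3-D watershed (simple threshold-based markers, no region adjacency graph) only illustrates the principle and is not the RAG-constrained pipeline of the paper; the threshold and the toy volume are arbitrary.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed

    def watershed_3d(volume, marker_threshold=0.2, sigma=1.0):
        """Marker-controlled 3-D watershed on the gradient magnitude of a volume."""
        # The gradient magnitude plays the role of the relief to be flooded
        gradient = ndi.gaussian_gradient_magnitude(volume.astype(float), sigma=sigma)
        # Very simple marker extraction: label low-gradient "flat" regions
        markers, _ = ndi.label(gradient < marker_threshold * gradient.max())
        return watershed(gradient, markers)

    # Toy volume with two blobs separated by a ridge
    z, y, x = np.mgrid[0:32, 0:32, 0:32]
    vol = np.exp(-((x - 10) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 40.0) \
        + np.exp(-((x - 22) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 40.0)
    labels = watershed_3d(vol)
    print("number of regions:", int(labels.max()))
    ```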

  6. Electro-bending characterization of adaptive 3D fiber reinforced plastics based on shape memory alloys

    NASA Astrophysics Data System (ADS)

    Ashir, Moniruddoza; Hahn, Lars; Kluge, Axel; Nocke, Andreas; Cherif, Chokri

    2016-03-01

    The industrial importance of fiber reinforced plastics (FRPs), which are mostly used in niche products, has been growing steadily in recent years. The integration of sensors and actuators into FRPs is potentially valuable for creating innovative applications, and therefore the market acceptance of adaptive FRPs is increasing. In particular, in the field of highly stressed FRPs, structurally integrated systems for continuous monitoring of component parts play an important role. The presented work focuses on the electro-mechanical characterization of adaptive three-dimensional (3D) FRPs with integrated textile-based actuators. Here, a friction-spun hybrid yarn with a shape memory alloy (SMA) wire core serves as the actuator. Because of the shape memory effect, the SMA hybrid yarn returns to its original shape upon heating, which in turn deforms the adaptive 3D FRP. To investigate the influences on the deformation behavior of the adaptive 3D FRP, the investigations in this research are varied according to structural parameters such as the radius of curvature of the adaptive 3D FRP, the fabric type and the number of fabric layers in the composite. Results show that reproducible deformations can be realized with adaptive 3D FRPs and that the structural parameters have a significant impact on the deformation capability.

  7. 3D resolution enhancement of deep-tissue imaging based on virtual spatial overlap modulation microscopy.

    PubMed

    Su, I-Cheng; Hsu, Kuo-Jen; Shen, Po-Ting; Lin, Yen-Yin; Chu, Shi-Wei

    2016-07-25

    During the last decades, several resolution enhancement methods for optical microscopy beyond diffraction limit have been developed. Nevertheless, those hardware-based techniques typically require strong illumination, and fail to improve resolution in deep tissue. Here we develop a high-speed computational approach, three-dimensional virtual spatial overlap modulation microscopy (3D-vSPOM), which immediately solves the strong-illumination issue. By amplifying only the spatial frequency component corresponding to the un-scattered point-spread-function at focus, plus 3D nonlinear value selection, 3D-vSPOM shows significant resolution enhancement in deep tissue. Since no iteration is required, 3D-vSPOM is much faster than iterative deconvolution. Compared to non-iterative deconvolution, 3D-vSPOM does not need a priori information of point-spread-function at deep tissue, and provides much better resolution enhancement plus greatly improved noise-immune response. This method is ready to be amalgamated with two-photon microscopy or other laser scanning microscopy to enhance deep-tissue resolution. PMID:27464077

  8. Video reframing relying on panoramic estimation based on a 3D representation of the scene

    NASA Astrophysics Data System (ADS)

    de Simon, Agnes; Figue, Jean; Nicolas, Henri

    2000-05-01

    This paper describes a new method for creating mosaic images from an original video and for computing a new sequence while modifying some camera parameters such as image size, scale factor, view angle... A mosaic image is a representation of the full scene observed by a moving camera during its displacement. It provides a wide angle of view of the scene from a sequence of images shot with a narrow-angle camera. This paper proposes a method to create a virtual sequence from a calibrated original video and a rough 3D model of the scene. A 3D relationship between original and virtual images gives the pixel correspondences in different images for the same 3D point of the scene model. To texture the model with natural textures obtained from the original sequence, a criterion based on constraints related to the temporal variations of the background and 3D geometric considerations is used. Finally, in the presented method, the textured 3D model is used to recompute a new sequence of images with a possibly different point of view and camera aperture angle. The algorithm has been tested on virtual sequences, and the results obtained so far are encouraging.

  9. The 3-D world modeling with updating capability based on combinatorial geometry

    NASA Technical Reports Server (NTRS)

    Goldstein, M.; Pin, F. G.; Desaussure, G.; Weisbin, C. R.

    1987-01-01

    A 3-D world modeling technique using range data is described. Range data quantify the distances from the sensor focal plane to the object surface, i.e., the 3-D coordinates of discrete points on the object surface are known. The approach proposed herein for 3-D world modeling is based on the Combinatorial Geometry (CG) method which is widely used in Monte Carlo particle transport calculations. First, each measured point on the object surface is surrounded by a small sphere with a radius determined by the range to that point. Then, the 3-D shapes of the visible surfaces are obtained by taking the (Boolean) union of all the spheres. The result is an unambiguous representation of the object's boundary surfaces. Pre-learned partial knowledge of the environment can also be represented using the CG method with a relatively small amount of data. Using the CG type of representation, distances in desired directions to the boundary surfaces of various objects are efficiently calculated. This feature is particularly useful for continuously verifying the world model against the data provided by a range finder, and for integrating range data from successive locations of the robot during motion. The efficiency of the proposed approach is illustrated by simulations of a spherical robot in a 3-D room in the presence of moving obstacles and inadequate pre-learned partial knowledge of the environment.
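
    A minimal sketch of the kind of directional distance query such a representation supports is given below: the scene is a union of small spheres around measured range points, and the distance along a ray to the nearest sphere surface is returned. The sphere centers, radii and query values are made up for illustration; this is not the original CG implementation.

    ```python
    import numpy as np

    def ray_distance_to_union(origin, direction, centers, radii):
        """Distance along a ray to the nearest surface in a union of spheres."""
        o = np.asarray(origin, float)
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        best = np.inf
        for c, r in zip(np.asarray(centers, float), np.asarray(radii, float)):
            oc = o - c
            b = oc @ d
            disc = b * b - (oc @ oc - r * r)
            if disc < 0:
                continue                      # ray misses this sphere
            t = -b - np.sqrt(disc)            # first intersection along the ray
            if t > 0:
                best = min(best, t)
        return best

    # Toy scene: two measured surface points modeled as small spheres
    print(ray_distance_to_union([0, 0, 0], [1, 0, 0],
                                centers=[[2.0, 0.0, 0.0], [5.0, 0.1, 0.0]],
                                radii=[0.3, 0.4]))
    ```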

  10. A first approach of 3D Geostrophic Currents based on GOCE, altimetry and ARGO data

    NASA Astrophysics Data System (ADS)

    Sempere Beneyto, M. Dolores; Vigo, Isabel; Chao, Ben F.

    2016-04-01

    The most recent advances in geoid determination, provided by the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission, together with continuous monitoring of sea surface height by satellite altimeters and Argo data, make it possible to estimate ocean geostrophy in 3D. In this work, we present a first approach to the 3D geostrophic circulation for the North Atlantic region, from the surface down to 1500 m depth. It has been computed for a 10-year period (2004-2014) using an observation-based approach that combines altimetry with temperature and salinity through the thermal wind equation, gridded at one-degree longitude and latitude resolution. For validation of the results, the estimated 3D geostrophic circulation is compared with ocean circulation model simulations and/or in-situ data, showing similar patterns in all cases.
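
    As a reminder of the balance underlying such estimates, the sketch below computes surface geostrophic velocities from a gridded sea surface height field via u = -(g/f) dη/dy and v = (g/f) dη/dx. It covers only the surface step with assumed constants and a toy eddy; the 3D estimate in the abstract additionally integrates the thermal wind relation with depth using temperature and salinity.

    ```python
    import numpy as np

    def surface_geostrophic_velocity(ssh, lat, lon):
        """Surface geostrophic velocity (m/s) from gridded sea surface height (m).

        ssh : (nlat, nlon) sea surface height anomaly
        lat, lon : 1-D coordinate arrays in degrees
        """
        g = 9.81
        omega = 7.2921e-5
        R = 6.371e6
        f = 2.0 * omega * np.sin(np.deg2rad(lat))[:, None]          # Coriolis parameter
        dy = R * np.deg2rad(np.gradient(lat))[:, None]              # meters per grid step
        dx = R * np.cos(np.deg2rad(lat))[:, None] * np.deg2rad(np.gradient(lon))[None, :]
        deta_dy = np.gradient(ssh, axis=0) / dy
        deta_dx = np.gradient(ssh, axis=1) / dx
        return -(g / f) * deta_dy, (g / f) * deta_dx

    # Toy SSH field: a single Gaussian "eddy" in a North Atlantic box
    lat = np.arange(20.0, 50.0, 1.0)
    lon = np.arange(-60.0, -20.0, 1.0)
    LON, LAT = np.meshgrid(lon, lat)
    ssh = 0.3 * np.exp(-((LON + 40) ** 2 + (LAT - 35) ** 2) / 50.0)
    u, v = surface_geostrophic_velocity(ssh, lat, lon)
    print("max geostrophic speed (m/s):", float(np.hypot(u, v).max()))
    ```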

  11. Microfluidic 3D cell culture: potential application for tissue-based bioassays

    PubMed Central

    Li, XiuJun (James); Valadez, Alejandra V.; Zuo, Peng; Nie, Zhihong

    2014-01-01

    Current fundamental investigations of human biology and the development of therapeutic drugs commonly rely on two-dimensional (2D) monolayer cell culture systems. However, 2D cell culture systems do not accurately recapitulate the structure, function and physiology of living tissues, nor the highly complex and dynamic three-dimensional (3D) environments in vivo. Microfluidic technology can provide micro-scale complex structures and well-controlled parameters to mimic the in vivo environment of cells. The combination of microfluidic technology with 3D cell culture offers great potential for in vivo-like tissue-based applications, such as the emerging organ-on-a-chip systems. This article reviews recent advances in microfluidic technology for 3D cell culture and their biological applications. PMID:22793034

  12. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and the steadily growing capabilities in both quality and quantity are further increasing this demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is also assessed from the evaluation point of view.

  13. Radon transform based automatic metal artefacts generation for 3D threat image projection

    NASA Astrophysics Data System (ADS)

    Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

    2013-10-01

    Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate human security screeners in determining threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof of concept paper, we propose a 3D TIP method which can be integrated within new 3D Computed Tomography (CT) screening systems. In order to make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the passenger bag content and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon Transform. The obtained results using challenging 3D CT baggage images are very promising in terms of plausibility and realism.
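
    The projection-domain idea can be illustrated crudely as follows: the threat's sinogram is added to the bag's sinogram and the dense rays are saturated before filtered back-projection, so that streak-like artefacts appear around the inserted object. This is a toy illustration of working in the Radon domain with arbitrary parameters, not the artefact model proposed in the paper.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    def insert_threat_with_streaks(bag_slice, threat_mask, metal_mu=5.0):
        """Insert a dense threat into a CT slice via the projection domain.

        The combined sinogram is saturated at the largest attenuation seen in
        the bag-only sinogram (a crude photon-starvation stand-in), so filtered
        back-projection produces streak-like artefacts around the insert."""
        theta = np.linspace(0.0, 180.0, max(bag_slice.shape), endpoint=False)
        bag_sino = radon(bag_slice, theta=theta)
        sino = bag_sino + radon(metal_mu * threat_mask, theta=theta)
        sino = np.minimum(sino, bag_sino.max())      # saturate rays through metal
        return iradon(sino, theta=theta, output_size=bag_slice.shape[0])

    # Toy example: a circular "bag" with a small square threat near its centre
    n = 128
    yy, xx = np.mgrid[0:n, 0:n]
    bag = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 3) ** 2).astype(float)
    threat = np.zeros((n, n))
    threat[60:70, 60:70] = 1.0
    tip_image = insert_threat_with_streaks(bag, threat)
    print(tip_image.shape)
    ```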

  14. Development of an indirect solid freeform fabrication process based on microstereolithography for 3D porous scaffolds

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Wook; Seol, Young-Joon; Cho, Dong-Woo

    2009-01-01

    Scaffold fabrication using solid freeform fabrication (SFF) technology is a hot topic in tissue engineering. Here, we present a new indirect SFF technology based on microstereolithography (MSTL), which has the highest resolution of all SFF methods, to construct a three-dimensional (3D) porous scaffold by combining SFF with molding technology. To realize this indirect method, we investigated and modified a water-soluble photopolymer. We used MSTL technology to fabricate a high-resolution 3D porous mold composed of the modified polymer. The mold can be removed using an appropriate solvent. We tested two materials, polycaprolactone and calcium sulfate hemihydrate, using the molding process, and developed a lost-mold shape forming process by dissolving the mold. This procedure demonstrated that the proposed method can yield scaffold pore sizes as small as 60-70 µm. In addition, cytotoxicity test results indicated that the proposed process is feasible for producing 3D porous scaffolds.

  15. Landmark detection from 3D mesh facial models for image-based analysis of dysmorphology.

    PubMed

    Chendeb, Marwa; Tortorici, Claudio; Al Muhairi, Hassan; Al Safar, Habiba; Linguraru, Marius; Werghi, Naoufel

    2015-01-01

    Facial landmark detection is a task of interest for facial dysmorphology, an important factor in the diagnosis of genetic conditions. In this paper, we propose a framework for feature point detection from 3D face images. The method is based on a 3D Constrained Local Model (CLM) which learns both global variations in the 3D facial scan and local changes around every vertex landmark. Compared to state-of-the-art methods, our framework is distinguished by the following novel aspects: 1) it operates on facial surfaces, 2) it allows fusion of shape and color information on the mesh surface, and 3) it introduces the use of LBP descriptors on the mesh. We showcase our landmark detection framework on a set of scans including Down syndrome and control cases. We also validate our method through a series of quantitative experiments conducted with the publicly available Bosphorus database. PMID:26736227

  16. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
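
    For reference, a plain software version of the filter that such hardware accelerates is sketched below (Perona-Malik style diffusion with an exponential edge-stopping function and periodic boundaries for brevity). The conductance parameter, step size and toy volume are illustrative assumptions; this is not the FPGA architecture described in the abstract.

    ```python
    import numpy as np

    def anisotropic_diffusion_3d(volume, n_iter=10, kappa=0.5, step=0.1):
        """Perona-Malik style anisotropic diffusion of a 3-D volume."""
        u = volume.astype(float).copy()
        for _ in range(n_iter):
            update = np.zeros_like(u)
            for axis in range(3):
                # Forward and backward differences (periodic wrap for brevity)
                fwd = np.roll(u, -1, axis=axis) - u
                bwd = np.roll(u, 1, axis=axis) - u
                # Edge-stopping conductance g(s) = exp(-(s/kappa)^2)
                update += np.exp(-(fwd / kappa) ** 2) * fwd
                update += np.exp(-(bwd / kappa) ** 2) * bwd
            u += step * update
        return u

    # Toy speckled volume: a bright cube in noise
    rng = np.random.default_rng(0)
    vol = np.zeros((32, 32, 32))
    vol[8:24, 8:24, 8:24] = 1.0
    noisy = vol + 0.3 * rng.standard_normal(vol.shape)
    smooth = anisotropic_diffusion_3d(noisy)
    print("background std before/after:",
          float(noisy[vol == 0].std()), float(smooth[vol == 0].std()))
    ```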

  17. Phenotyping transgenic embryos: a rapid 3-D screening method based on episcopic fluorescence image capturing.

    PubMed

    Weninger, Wolfgang Johann; Mohun, Timothy

    2002-01-01

    We describe a technique suitable for routine three-dimensional (3-D) analysis of mouse embryos that is based on episcopic fluorescence images captured during serial sectioning of wax-embedded specimens. We have used this procedure to describe the cardiac phenotype and associated blood vessels of trisomic 16 (Ts16) and Cited2-null mutant mice, as well as the expression pattern of an Myf5 enhancer/beta-galactosidase transgene. The consistency of the images and their precise alignment are ideally suited for 3-D analysis using video animations, virtual resectioning or commercial 3-D reconstruction software packages. Episcopic fluorescence image capturing (EFIC) provides a simple and powerful tool for analyzing embryo and organ morphology in normal and transgenic embryos. PMID:11743576

  18. Organ printing: computer-aided jet-based 3D tissue engineering.

    PubMed

    Mironov, Vladimir; Boland, Thomas; Trusk, Thomas; Forgacs, Gabor; Markwald, Roger R

    2003-04-01

    Tissue engineering technology promises to solve the organ transplantation crisis. However, assembly of vascularized 3D soft organs remains a big challenge. Organ printing, which we define as computer-aided, jet-based 3D tissue-engineering of living human organs, offers a possible solution. Organ printing involves three sequential steps: pre-processing or development of "blueprints" for organs; processing or actual organ printing; and postprocessing or organ conditioning and accelerated organ maturation. A cell printer that can print gels, single cells and cell aggregates has been developed. Layer-by-layer sequentially placed and solidified thin layers of a thermo-reversible gel could serve as "printing paper". Combination of an engineering approach with the developmental biology concept of embryonic tissue fluidity enables the creation of a new rapid prototyping 3D organ printing technology, which will dramatically accelerate and optimize tissue and organ assembly. PMID:12679063

  19. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important due to both economic reasons as well as patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential especially due to the privacy preserving and simplified information presentation style, and secondly that simple representations and glancability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  20. Multimedia software design of automobile construction based on 3D engine

    NASA Astrophysics Data System (ADS)

    Xu, Guo-dong; Chi, Xiao-xia

    2013-03-01

    This paper introduces methods for the three-dimensional modeling, assembly and simulation design of an automobile based on a 3D engine, Pro/Engineer and 3DSMax. Research is also carried out on the order and route of virtual assembly, as well as the corresponding processes.

  1. A Bioactive Carbon Nanotube-Based Ink for Printing 2D and 3D Flexible Electronics.

    PubMed

    Shin, Su Ryon; Farzad, Raziyeh; Tamayol, Ali; Manoharan, Vijayan; Mostafalu, Pooria; Zhang, Yu Shrike; Akbari, Mohsen; Jung, Sung Mi; Kim, Duckjin; Comotto, Mattia; Annabi, Nasim; Al-Hazmi, Faten Ebrahim; Dokmeci, Mehmet R; Khademhosseini, Ali

    2016-05-01

    The development of electrically conductive carbon nanotube-based inks is reported. Using these inks, 2D and 3D structures are printed on various flexible substrates such as paper, hydrogels, and elastomers. The printed patterns have mechanical and electrical properties that make them beneficial for various biological applications. PMID:26915715

  2. Selective synthesis of rhodium-based nanoframe catalysts by chemical etching of 3d metals.

    PubMed

    Zhang, Zhi-Ping; Zhu, Wei; Yan, Chun-Hua; Zhang, Ya-Wen

    2015-03-01

    We demonstrate a general strategy for the highly selective synthesis of Rh-based multi-metallic nanoframes through preferential etching of 3d metals, including Cu and Ni. Compared with Rh-Cu nanooctahedrons/C, Rh-Cu nanooctahedral frames/C show greatly enhanced activity toward hydrazine decomposition at room temperature. PMID:25665751

  3. Impact of 3-D printed PLA- and chitosan-based scaffolds on human monocyte/macrophage responses: unraveling the effect of 3-D structures on inflammation.

    PubMed

    Almeida, Catarina R; Serra, Tiziano; Oliveira, Marta I; Planell, Josep A; Barbosa, Mário A; Navarro, Melba

    2014-02-01

    Recent studies have pointed towards a decisive role of inflammation in triggering tissue repair and regeneration, while at the same time it is accepted that an exacerbated inflammatory response may lead to rejection of an implant. Within this context, understanding and having the capacity to regulate the inflammatory response elicited by 3-D scaffolds aimed for tissue regeneration is crucial. This work reports on the analysis of the cytokine profile of human monocytes/macrophages in contact with biodegradable 3-D scaffolds with different surface properties, architecture and controlled pore geometry, fabricated by 3-D printing technology. Fabrication processes were optimized to create four different 3-D platforms based on polylactic acid (PLA), PLA/calcium phosphate glass or chitosan. Cytokine secretion and cell morphology of human peripheral blood monocytes allowed to differentiate on the different matrices were analyzed. While all scaffolds supported monocyte/macrophage adhesion and stimulated cytokine production, striking differences between PLA-based and chitosan scaffolds were found, with chitosan eliciting increased secretion of tumor necrosis factor (TNF)-α, while PLA-based scaffolds induced higher production of interleukin (IL)-6, IL-12/23 and IL-10. Even though the material itself induced the biggest differences, the scaffold geometry also impacted on TNF-α and IL-12/23 production, with chitosan scaffolds having larger pores and wider angles leading to a higher secretion of these pro-inflammatory cytokines. These findings strengthen the appropriateness of these 3-D platforms to study modulation of macrophage responses by specific parameters (chemistry, topography, scaffold architecture). PMID:24211731

  4. Voxel-based 2-D/3-D registration of fluoroscopy images and CT scans for image-guided surgery.

    PubMed

    Weese, J; Penney, G P; Desmedt, P; Buzug, T M; Hill, D L; Hawkes, D J

    1997-12-01

    Registration of intraoperative fluoroscopy images with preoperative three-dimensional (3-D) CT images can be used for several purposes in image-guided surgery. On the one hand, it can be used to display the position of surgical instruments, which are being tracked by a localizer, in the preoperative CT scan. On the other hand, the registration result can be used to project preoperative planning information or important anatomical structures visible in the CT image onto the fluoroscopy image. For this registration task, a novel voxel-based method in combination with a new similarity measure (pattern intensity) has been developed. The basic concept of the method is explained using the example of two-dimensional (2-D)/3-D registration of a vertebra in an X-ray fluoroscopy image with a 3-D CT image. The registration method is described, and the results for a spine phantom are presented and discussed. Registration has been carried out repeatedly with different starting estimates to study the capture range. Information about registration accuracy has been obtained by comparing the registration results with a highly accurate "ground-truth" registration, which has been derived from fiducial markers attached to the phantom prior to imaging. In addition, registration results for different vertebrae have been compared. The results show that the rotation parameters and the shifts parallel to the projection plane can be accurately determined from a single projection. Because of the projection geometry, the accuracy of the height above the projection plane is significantly lower. PMID:11020832
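
    One common formulation of the pattern intensity measure mentioned above sums sigma^2 / (sigma^2 + (d(x) - d(x'))^2) over pixel pairs within a small radius in the difference image d between the fluoroscopy image and the projected (DRR) image. The sketch below follows that formulation with illustrative constants and wrap-around borders; it is a generic reading of the measure, not the authors' exact implementation.

    ```python
    import numpy as np

    def pattern_intensity(fluoro, drr, radius=3, sigma=10.0, scale=1.0):
        """Pattern-intensity similarity between a fluoroscopy image and a DRR."""
        diff = fluoro.astype(float) - scale * drr.astype(float)
        total = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dx == 0 and dy == 0:
                    continue
                if dx * dx + dy * dy > radius * radius:
                    continue
                # Compare each pixel with its neighbour at offset (dy, dx)
                shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
                total += np.sum(sigma ** 2 / (sigma ** 2 + (diff - shifted) ** 2))
        return total

    # Toy check: the measure is highest when the DRR is aligned with the image
    rng = np.random.default_rng(0)
    img = rng.normal(size=(64, 64))
    img[20:40, 20:40] += 5.0
    drr_aligned = img.copy()
    drr_shifted = np.roll(img, 4, axis=1)
    print(pattern_intensity(img, drr_aligned) > pattern_intensity(img, drr_shifted))
    ```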

  5. Low-cost structured-light based 3D capture system design

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.

    2014-03-01

    Most of the 3D capture products currently in the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill in this gap by only using low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.
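
    The flavor of such a differential (small-signal) resolution analysis can be conveyed with the basic triangulation relation z = f*B/d, whose derivative gives the depth change per unit disparity. The numbers below are hypothetical and the model is generic; it is not the specific camera-projector model developed in the paper.

    ```python
    def depth_resolution(z, baseline, focal_px, disparity_step=1.0):
        """First-order (small-signal) depth resolution of a triangulation setup.

        From z = f*B/d it follows that dz/dd = -z**2/(f*B), so a change of one
        disparity unit maps to roughly z**2/(f*B) in depth."""
        return (z ** 2) / (focal_px * baseline) * disparity_step

    # Hypothetical numbers: object at 0.4 m, 0.1 m camera-projector baseline,
    # and a camera with a focal length of about 3000 pixels
    print(depth_resolution(z=0.4, baseline=0.1, focal_px=3000.0))  # ~0.53 mm
    ```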

  6. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

    Although 4D CT imaging is becoming available in an increasing number of radiotherapy facilities, 3D imaging and planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning, which is difficult using only a 3D planning CT. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only 3D images are available. Being generated from 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity modulated radiotherapy). For the test cases, 4D CT images are available. Thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  7. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  8. Polymer-Based Mesh as Supports for Multi-layered 3D Cell Culture and Assays

    PubMed Central

    Simon, Karen A.; Park, Kyeng Min; Mosadegh, Bobak; Subramaniam, Anand Bala; Mazzeo, Aaron; Ngo, Phil M.; Whitesides, George M.

    2013-01-01

    Three-dimensional (3D) culture systems can mimic certain aspects of the cellular microenvironment found in vivo, but generation, analysis and imaging of current model systems for 3D cellular constructs and tissues remain challenging. This work demonstrates a 3D culture system – Cells-in-Gels-in-Mesh (CiGiM) – that uses stacked sheets of polymer-based mesh to support cells embedded in gels to form tissue-like constructs; the stacked sheets can be disassembled by peeling the sheets apart to analyze cultured cells—layer-by-layer—within the construct. The mesh sheets leave openings large enough for light to pass through with minimal scattering, and thus allowing multiple options for analysis—(i) using straightforward analysis by optical light microscopy, (ii) by high-resolution analysis with fluorescence microscopy, or (iii) with a fluorescence gel scanner. The sheets can be patterned into separate zones with paraffin film-based decals, in order to conduct multiple experiments in parallel; the paraffin-based decal films also block lateral diffusion of oxygen effectively. CiGiM simplifies the generation and analysis of 3D culture without compromising throughput, and quality of the data collected: it is especially useful in experiments that require control of oxygen levels, and isolation of adjacent wells in a multi-zone format. PMID:24095253

  9. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  10. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
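
    The dynamic time-warping step at the core of such gesture recognition can be sketched as below: a standard DTW distance between feature sequences and nearest-template classification. The feature choice and the toy "shake"/"touch" templates are assumptions made for illustration, not the paper's recognizer or its segmentation stage.

    ```python
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Dynamic time warping distance between two gesture feature sequences.

        seq_a, seq_b : (Ta, d) and (Tb, d) arrays, e.g. per-frame orientation or
        acceleration features from an inertial sensor."""
        ta, tb = len(seq_a), len(seq_b)
        cost = np.full((ta + 1, tb + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, ta + 1):
            for j in range(1, tb + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[ta, tb]

    def classify_gesture(query, templates):
        """Nearest-template classification with DTW."""
        return min(templates, key=lambda name: dtw_distance(query, templates[name]))

    # Toy templates: a "shake" (oscillation) and a "touch" (single bump)
    t = np.linspace(0, 1, 50)[:, None]
    templates = {"shake": np.sin(12 * np.pi * t),
                 "touch": np.exp(-((t - 0.5) ** 2) / 0.01)}
    query = np.sin(12 * np.pi * np.linspace(0, 1, 42))[:, None] + 0.05
    print(classify_gesture(query, templates))
    ```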

  11. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  12. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information. PMID:26978821

  13. 3D MRI-based tumor delineation of ocular melanoma and its comparison with conventional techniques

    SciTech Connect

    Daftari, Inder K.; Aghaian, Elsa; O'Brien, Joan M.; Dillon, William; Phillips, Theodore L.

    2005-11-15

    The aim of this study is (1) to compare the delineation of the tumor volume for ocular melanoma on high-resolution three-dimensional (3D) T2-weighted fast spin echo magnetic resonance imaging (MRI) images with the conventional techniques of A- and B-scan ultrasound, transcleral illumination, and placement of tantalum markers around the tumor base, and (2) to evaluate whether surgically placed marker-ring tumor delineation can be replaced by 3D MRI-based tumor delineation. High-resolution 3D T2-weighted fast spin echo (3D FSE) MRI scans were obtained for 60 consecutive ocular melanoma patients using a 1.5 T MRI scanner (GE Medical Systems, Milwaukee, WI) with a standard head coil. These patients were subsequently treated with proton beam therapy at the UC Davis Cyclotron, Davis, CA. The tumor was delineated by placement of tantalum rings (radio-opaque markers) around the tumor periphery as defined by pupillary transillumination during surgery. A point light source, placed against the sclera, was also used to confirm ring agreement with indirect ophthalmoscopy. When necessary, intraoperative ultrasound was also performed. The patients were planned using EYEPLAN software and the tumor volumes were obtained. For analysis, the tumors were divided into four categories based on tumor height and basal diameter. In order to assess the impact of high-resolution 3D T2 FSE MRI, the tumor volumes were outlined on the MRI scans by two independent observers and the tumor volumes calculated for each patient. Six (10%) of the 60 patients had tumors that were not visible on 3D MRI images; these six patients had tumor heights ≤3 mm. A small intraobserver variation, with a mean of (-0.22±4)%, was seen in tumor volumes delineated on 3D T2 FSE MR images. The ratio of tumor volumes measured on MRI to EYEPLAN for the largest to the smallest tumor volumes varied between 0.993 and 1.02 for 54 patients. The tumor volumes measured directly on 3D T2 FSE MRI ranged from 4.03 to 0.075 cm³

  14. Antiproliferative Activity and Cellular Uptake of Evodiamine and Rutaecarpine Based on 3D Tumor Models.

    PubMed

    Guo, Hui; Liu, Dongmei; Gao, Bin; Zhang, Xiaohui; You, Minli; Ren, Hui; Zhang, Hongbo; Santos, Hélder A; Xu, Feng

    2016-01-01

    Evodiamine (EVO) and rutaecarpine (RUT) are promising anti-tumor drug candidates. Evaluating the anti-proliferative activity and cellular uptake of EVO and RUT in 3D multicellular spheroids of cancer cells would better recapitulate the native situation and thus better reflect the in vivo response to treatment. Herein, we employed 3D cultures of MCF-7 and SMMC-7721 cells based on the hanging drop method and evaluated the anti-proliferative activity and cellular uptake of EVO and RUT in 3D multicellular spheroids, comparing the results with those obtained from 2D monolayers. The drugs' IC50 values increased significantly from the range of 6.4-44.1 μM in 2D monolayers to 21.8-138.0 μM in 3D multicellular spheroids, which may be due to the enhanced mass barrier and reduced drug penetration in 3D models. The fluorescence of EVO and RUT was measured via fluorescence spectroscopy and the cellular uptake of both drugs was characterized in 2D tumor models. The results showed that the cellular uptake concentration of RUT increased with increasing drug concentration. However, the EVO concentration taken up by the cells changed only slightly with increasing drug concentration, which may be due to the different solubilities of EVO and RUT in solvents. Overall, this study provides a new view of the anti-tumor activity of EVO and RUT via 3D multicellular spheroids and of cellular uptake through the fluorescence of the compounds. PMID:27455219
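
    As an aside on the IC50 values reported above, such values are typically obtained by fitting a sigmoidal dose-response curve to viability data. The sketch below fits a four-parameter logistic model with SciPy; the concentrations and viability fractions are hypothetical and unrelated to the study's data.

```python
# Four-parameter logistic (4PL) fit used to read off an IC50 from dose-response
# data; all numbers below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])        # concentration, uM
viability = np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.12])   # fraction of viable cells

params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 1.0, 30.0, 1.0])
print("estimated IC50 (uM):", params[2])
```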

  15. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  16. Registration of 3D spectral OCT volumes combining ICP with a graph-based approach

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan

    2012-02-01

    The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.
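
    The second registration step can be illustrated with a simplified sketch: for each A-scan, find the axial shift that minimizes the MSE against the corresponding A-scan of the other volume. The paper solves this jointly with a graph search to enforce smoothness; here each A-scan is treated independently, and a circular shift is used purely for brevity.

```python
# Per-A-scan axial alignment by brute-force MSE minimization (simplified: the
# paper couples neighbouring A-scans through a graph; a circular shift is used
# here purely for brevity).
import numpy as np

def best_axial_shift(ascan_fixed, ascan_moving, max_shift=20):
    """Return the integer depth shift of ascan_moving that minimizes the MSE."""
    def mse(s):
        return np.mean((ascan_fixed - np.roll(ascan_moving, s)) ** 2)
    return min(range(-max_shift, max_shift + 1), key=mse)

def align_depth(volume_fixed, volume_moving, max_shift=20):
    """volumes: (X, Y, Z) arrays; returns an (X, Y) map of per-A-scan depth shifts."""
    X, Y, _ = volume_fixed.shape
    shift_map = np.zeros((X, Y), dtype=int)
    for x in range(X):
        for y in range(Y):
            shift_map[x, y] = best_axial_shift(volume_fixed[x, y],
                                               volume_moving[x, y], max_shift)
    return shift_map
```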

  17. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
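
    The top-view depth-image encoding step can be sketched as a simple gridding of the point cloud, keeping the highest return per cell. The grid size and cell resolution below are illustrative assumptions, not values from the paper.

```python
# Bin an airborne LiDAR point cloud (x, y, z) into a top-view grid and keep the
# highest return per cell as a depth (height) image of the roof.
import numpy as np

def topview_depth_image(points, cell_size=0.5, grid=(128, 128)):
    """points: (N, 3) array of x, y, z coordinates in metres."""
    xy = points[:, :2] - points[:, :2].min(axis=0)            # shift to the origin
    cols = np.minimum((xy[:, 0] / cell_size).astype(int), grid[1] - 1)
    rows = np.minimum((xy[:, 1] / cell_size).astype(int), grid[0] - 1)
    depth = np.zeros(grid)                                    # empty cells stay at 0
    np.maximum.at(depth, (rows, cols), points[:, 2])          # keep max height per cell
    return depth
```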

  18. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard dose CT (SDCT) images. PMID:26980176
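
    The sparse-representation idea can be illustrated with a 2D analogue built from scikit-learn's dictionary learning tools; this is not the authors' 3D SR implementation, and the patch size, dictionary size and sparsity level are arbitrary choices.

```python
# 2D patch-based sparse coding with a learned dictionary (a 2D analogue of the
# 3D SR idea, not the authors' implementation).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_denoise_2d(image, patch_size=(8, 8), n_atoms=64):
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(len(patches), -1)
    mean = X.mean(axis=1, keepdims=True)                  # remove per-patch DC
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=4)
    codes = dico.fit(X - mean).transform(X - mean)        # sparse code per patch
    recon = (codes @ dico.components_) + mean             # reconstruct patches
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), image.shape)
```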

  19. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard dose CT (SDCT) images.

  20. Prediction of enzyme function based on 3D templates of evolutionarily important amino acids

    PubMed Central

    Kristensen, David M; Ward, R Matthew; Lisewski, Andreas Martin; Erdin, Serkan; Chen, Brian Y; Fofanov, Viacheslav Y; Kimmel, Marek; Kavraki, Lydia E; Lichtarge, Olivier

    2008-01-01

    Background Structural genomics projects such as the Protein Structure Initiative (PSI) yield many new structures, but often these have no known molecular functions. One approach to recover this information is to use 3D templates – structure-function motifs that consist of a few functionally critical amino acids and may suggest functional similarity when geometrically matched to other structures. Since experimentally determined functional sites are not common enough to define 3D templates on a large scale, this work tests a computational strategy to select relevant residues for 3D templates. Results Based on evolutionary information and heuristics, an Evolutionary Trace Annotation (ETA) pipeline built templates for 98 enzymes, half taken from the PSI, and sought matches in a non-redundant structure database. On average each template matched 2.7 distinct proteins, of which 2.0 share the first three Enzyme Commission digits as the template's enzyme of origin. In many cases (61%) a single most likely function could be predicted as the annotation with the most matches, and in these cases such a plurality vote identified the correct function with 87% accuracy. ETA was also found to be complementary to sequence homology-based annotations. When matches are required to both geometrically match the 3D template and to be sequence homologs found by BLAST or PSI-BLAST, the annotation accuracy is greater than either method alone, especially in the region of lower sequence identity where homology-based annotations are least reliable. Conclusion These data suggest that knowledge of evolutionarily important residues improves functional annotation among distant enzyme homologs. Since, unlike other 3D template approaches, the ETA method bypasses the need for experimental knowledge of the catalytic mechanism, it should prove a useful, large scale, and general adjunct to combine with other methods to decipher protein function in the structural proteome. PMID:18190718

  1. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) on the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average tumor localization error (TLE) (95th percentile) in two datasets was 0.95 (2.2) mm. For digital phantoms assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
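
    The PCA motion-model construction can be sketched as follows: stack the phase-to-reference DVFs, take their principal components, and synthesize a new DVF from a coefficient vector (which, in the paper, is optimized against the 2D projections). Array shapes are assumptions; this is not the authors' code.

```python
# Build a PCA motion model from phase-wise displacement vector fields (DVFs) and
# synthesize a new DVF from a coefficient vector.
import numpy as np

def build_motion_model(dvfs, n_modes=3):
    """dvfs: (n_phases, X, Y, Z, 3) displacement fields relative to a reference phase."""
    flat = dvfs.reshape(dvfs.shape[0], -1)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)   # principal modes
    return mean, vt[:n_modes]

def synthesize_dvf(mean, modes, coeffs, field_shape):
    """coeffs: (n_modes,) PCA coefficients; field_shape: dvfs.shape[1:]."""
    return (mean + coeffs @ modes).reshape(field_shape)
```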

  2. SU-F-BRF-08: Conformal Mapping-Based 3D Surface Matching and Registration

    SciTech Connect

    Song, Y; Zeng, W; Gu, X; Liu, C

    2014-06-15

    Purpose: Recently, non-rigid 3D surface matching and registration has been used extensively in engineering and medicine. However, matching 3D surfaces undergoing non-rigid deformation accurately is still a challenging mathematical problem. In this study, we present a novel algorithm to address this issue by introducing intrinsic symmetry into the registration. Methods: Our computational algorithm for symmetric conformal mapping is divided into three major steps: 1) finding the symmetric plane; 2) finding feature points; and 3) performing cross registration. The key strategy is to preserve the symmetry during the conformal mapping, such that the image on the parameter domain is symmetric and the area distortion factor on the parameter image is also symmetric. Several novel algorithms were developed using different conformal geometric tools: one was based on solving the Riemann-Cauchy equation and the other employed curvature flow. Results: Our algorithm was implemented using generic C++ on Windows XP and used conjugate gradient search optimization for acceleration. The human face 3D surface images were acquired using a high speed 3D scanner based on the phase-shifting method. The scanning speed was 30 frames/sec. The image resolution for each frame was 640 × 480. For 3D human face surfaces with different expressions, postures, and boundaries, our algorithms were able to produce consistent results for the texture pattern in the overlapping region. Conclusion: We proposed a novel algorithm to improve the robustness of conformal geometric methods by incorporating the symmetry information into the mapping process. To objectively evaluate its performance, we compared it with most existing techniques. Experimental results indicated that our method outperformed all the others in terms of robustness. The technique has a great potential in real-time patient monitoring and tracking in image-guided radiation therapy.

  3. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA. PMID:21181572

  4. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  5. High-speed 3D face measurement based on color speckle projection

    NASA Astrophysics Data System (ADS)

    Xue, Junpeng; Su, Xianyu; Zhang, Qican

    2015-03-01

    Nowadays, 3D face recognition has become a subject of considerable interest in domestic and international security fields due to its unique advantages. However, acquiring color-textured 3D face data in a fast and accurate manner is still highly challenging. In this paper, a new approach based on color speckle projection for dynamic acquisition of 3D face data is proposed. Firstly, the projector-camera color crosstalk matrix, which indicates how much each projector channel influences each camera channel, is measured. Secondly, the reference speckle-set images are acquired with a CCD, and three gray sets are then separated from the color sets using the crosstalk matrix and saved. Finally, the color speckle image modulated by the face is captured and split into three gray channels. We measure the 3D face using a multi-set speckle correlation method on the color speckle image at high speed, similar to a one-shot measurement, which greatly improves the measurement accuracy and stability. The suggested approach has been implemented and the results are supported by experiments.

  6. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition

    PubMed Central

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed, which is based on hybrid texture-edge local pattern coding feature extraction and the integration of RGB and depth video information. The paper mainly focuses on background subtraction for the RGB and depth video sequences of behaviors, extraction and integration of history images of the behavior outlines, and feature extraction and classification. The new method of 3D human behavior recognition achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method has good robustness to different environmental colors, lighting conditions and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition. PMID:25942404
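
    The uniform local binary pattern building block used here can be sketched with scikit-image on a single grayscale frame; the paper's combination with edge information, depth data and history images of the behavior outlines is not reproduced.

```python
# Uniform LBP histogram of a single grayscale frame with scikit-image.
import numpy as np
from skimage.feature import local_binary_pattern

def uniform_lbp_histogram(gray_frame, n_points=8, radius=1):
    lbp = local_binary_pattern(gray_frame, n_points, radius, method='uniform')
    n_bins = n_points + 2                  # P+1 uniform codes plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist                            # per-frame texture descriptor
```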

  7. A Phytic Acid Induced Super-Amphiphilic Multifunctional 3D Graphene-Based Foam.

    PubMed

    Song, Xinhong; Chen, Yiying; Rong, Mingcong; Xie, Zhaoxiong; Zhao, Tingting; Wang, Yiru; Chen, Xi; Wolfbeis, Otto S

    2016-03-14

    Surfaces with super-amphiphilicity have attracted tremendous interest for fundamental and applied research owing to their special affinity to both oil and water. It is generally believed that 3D graphenes are monoliths with strongly hydrophobic surfaces. Herein, we demonstrate the preparation of a 3D super-amphiphilic (that is, highly hydrophilic and oleophilic) graphene-based assembly in a single step using phytic acid acting as both a gelator and a dopant. The product is both hydrophilic and oleophilic, which overcomes the drawbacks of presently known hydrophobic 3D graphene assemblies; it can absorb water and oils alike. The utility of the new material was demonstrated by designing a heterogeneous catalytic system through incorporation of a zeolite into its amphiphilic 3D scaffold. The resulting bulk network was shown to enable efficient epoxidation of alkenes without prior addition of a co-solvent or stirring. This catalyst can also be recovered and re-used, thereby providing a clean catalytic process with simplified work-up. PMID:26890034

  8. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed, which is based on hybrid texture-edge local pattern coding feature extraction and the integration of RGB and depth video information. The paper mainly focuses on background subtraction for the RGB and depth video sequences of behaviors, extraction and integration of history images of the behavior outlines, and feature extraction and classification. The new method of 3D human behavior recognition achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method has good robustness to different environmental colors, lighting conditions and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition. PMID:25942404

  9. 3D animation of facial plastic surgery based on computer graphics

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, because facial plastic surgery has been practiced since the early 20th century and even earlier, when doctors dealt with facial war injuries. However, the outcome of an operation is not always satisfying, since patients cannot see an animation of the result beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating the post-operative appearance is presented to demonstrate the modified face from different viewpoints. The 3D human face data are obtained by using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The front-most triangular meshes (in depth) are selected from the set of triangles by a ray-casting technique. Mesh deformation is based on the front triangular mesh in the simulation process, which deforms the area of interest instead of control points. Experiments on a face model show that the proposed 3D facial plastic surgery animation can effectively demonstrate the simulated post-operative appearance.

  10. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    PubMed Central

    Hou, Wenguang; Zhang, Xuming; Ding, Mingyue

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) approach provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit- (GPU-) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method was the adoption of the powerful data-parallel computing capability of the GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm. PMID:24348747
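
    A CPU-side sketch of block-wise non-local means on a 3D volume is shown below using scikit-image. It only illustrates the NLM idea on log-compressed data; the paper's GPU kernels and Gamma (speckle) noise model are not reproduced, and the filter parameters are illustrative.

```python
# Block-wise non-local means on a 3D volume (CPU, scikit-image); parameters are
# illustrative and would be tuned to the speckle level of the data.
from skimage.restoration import denoise_nl_means

def nlm_denoise_volume(volume, h=0.1):
    """volume: 3D ultrasound array, assumed log-compressed and scaled to [0, 1]."""
    return denoise_nl_means(volume, patch_size=3, patch_distance=5,
                            h=h, fast_mode=True)
```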

  11. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured with possible target rotations of yaw, pitch = 0 - 45° and roll = 0 - 360°. No preliminary sighting of the target(s) is necessary, since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within its view. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during ongoing underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. Those tasks include the simultaneous tracking of personnel, machines or robots.
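
    The two-camera 3D position estimate at the heart of such a marker tracker can be sketched with OpenCV's triangulation routine; the projection matrices are assumed to come from a prior stereo calibration, and the marker centroids are assumed to be already detected in both images.

```python
# Triangulate a marker's 3D position from its pixel coordinates in two calibrated
# cameras; P1 and P2 are 3x4 projection matrices from a prior stereo calibration.
import numpy as np
import cv2

def triangulate_marker(P1, P2, uv1, uv2):
    pts1 = np.asarray(uv1, dtype=float).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous coordinates
    return (X_h[:3] / X_h[3]).ravel()                 # metric 3D position
```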

  12. Flexible 3D reconstruction method based on phase-matching in multi-sensor system.

    PubMed

    Wu, Qingyang; Zhang, Baichun; Huang, Jinhui; Wu, Zejun; Zeng, Zeng

    2016-04-01

    Considering the measuring range limitation of a single-sensor system, multi-sensor systems have become essential for obtaining complete image information of an object in the field of 3D image reconstruction. However, in traditional multi-sensor systems the sensors work independently, and each sensor system has to be calibrated separately; the calibration between all the single-sensor systems is complicated and time-consuming. In this paper, we present a flexible 3D reconstruction method based on phase-matching in a multi-sensor system. While calibrating each sensor, it simultaneously registers the data of the multi-sensor system in a unified coordinate system. After all sensors are calibrated, the whole 3D image data directly exist in the unified coordinate system, and there is no need to calibrate the positions between sensors any more. Experimental results prove that the method is simple in operation, accurate in measurement, and fast in 3D image reconstruction. PMID:27137020

  13. 3D printed PLA-based scaffolds: a versatile tool in regenerative medicine.

    PubMed

    Serra, Tiziano; Mateos-Timoneda, Miguel A; Planell, Josep A; Navarro, Melba

    2013-10-01

    Rapid prototyping (RP), also known as additive manufacturing (AM), has been well received and adopted in the biomedical field. The capacity of this family of techniques to fabricate customized 3D structures with complex geometries and excellent reproducibility has revolutionized implantology and regenerative medicine. In particular, nozzle-based systems allow the fabrication of high-resolution polylactic acid (PLA) structures that are of interest in regenerative medicine. These 3D structures find promising applications in regenerative medicine, including biodegradable templates for tissue regeneration, 3D in vitro platforms for studying cell response to different scaffold conditions, and platforms for drug screening, among others. Scaffold functionality depends not only on the fabrication technique, but also on the material used to build the 3D structure, the geometry and inner architecture of the structure, and the final surface properties, all of which are crucial parameters affecting scaffold success. This Commentary emphasizes the importance of these parameters in scaffold fabrication and also draws attention to the versatility of these PLA scaffolds as a potential tool in regenerative medicine and other medical fields. PMID:23959206

  14. Application of edge-based finite elements and vector ABCs in 3D scattering

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Jin, J. M.; Volakis, John L.

    1992-01-01

    A finite element absorbing boundary condition (FE-ABC) solution of the scattering by arbitrary 3-D structures is considered. The computational domain is discretized using edge-based tetrahedral elements. In contrast to the node-based elements, edge elements can treat geometries with sharp edges, are divergence-less, and easily satisfy the field continuity condition across dielectric interfaces. They do, however, lead to a higher unknown count but this is balanced by the greater sparsity of the resulting finite element matrix. Thus, the computation time required to solve such a system iteratively with a given degree of accuracy is less than the traditional node-based approach. The purpose is to examine the derivation and performance of the ABC's when applied to 2-D and 3-D problems and to discuss the specifics of our FE-ABC implementation.

  15. Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration

    NASA Astrophysics Data System (ADS)

    Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.

    2013-07-01

    This paper presents a 3D platform that enables both cultural heritage site survey and virtual exploration. It provides a single, easy-to-use framework for merging multi-scale 3D measurements based on photogrammetry, documentation produced by experts, and the knowledge of the involved domains, leaving the experts able to extract and choose the relevant information to produce the final survey. Taking into account the interpretation of the real world during the process of archaeological surveys is in fact the main goal of a survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not by themselves solve the problem of surveys. New opportunities for 3D representation are now available, and we must use them and find new ways to link geometry and knowledge. The new platform is able to efficiently manage and process large 3D data (point sets, meshes) thanks to the implementation of state-of-the-art space-partitioning methods such as octrees and kd-trees, and thus can interact with dense point clouds (thousands to millions of points) in real time. The semantisation of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense point clouds, and geometric primitive optimization. The platform provides an interface that enables experts to describe geometric representations of objects of interest such as ashlar blocks, stratigraphic units, or generic items (contour, lines, … ) directly on the 3D representation of the site and without explicit links to the underlying algorithms. The platform provides two ways of describing a geometric representation. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection onto the underlying mesh or point cloud. If photographs are not available, or if the expert wants to use only the 3D representation, then he can simply draw the object's shape on it. When 3D

  16. Avalanche for shape and feature-based virtual screening with 3D alignment.

    PubMed

    Diller, David J; Connell, Nancy D; Welsh, William J

    2015-11-01

    This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns. PMID:26458937

  17. Quantitative analysis of the central-chest lymph nodes based on 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Bascom, Rebecca; Mahraj, Rickhesvar P. M.; Higgins, William E.

    2009-02-01

    Lung cancer is the leading cause of cancer death in the United States. In lung-cancer staging, central-chest lymph nodes and associated nodal stations, as observed in three-dimensional (3D) multidetector CT (MDCT) scans, play a vital role. However, little work has been done in relation to lymph nodes, based on MDCT data, due to the complicated phenomena that give rise to them. Using our custom computer-based system for 3D MDCT-based pulmonary lymph-node analysis, we conduct a detailed study of lymph nodes as depicted in 3D MDCT scans. In this work, the Mountain lymph-node stations are automatically defined by the system. These defined stations, in conjunction with our system's image processing and visualization tools, facilitate lymph-node detection, classification, and segmentation. An expert pulmonologist, chest radiologist, and trained technician verified the accuracy of the automatically defined stations and indicated observable lymph nodes. Next, using semi-automatic tools in our system, we defined all indicated nodes. Finally, we performed a global quantitative analysis of the characteristics of the observed nodes and stations. This study drew upon a database of 32 human MDCT chest scans. 320 Mountain-based stations (10 per scan) and 852 pulmonary lymph nodes were defined overall from this database. Based on the numerical results, over 90% of the automatically defined stations were deemed accurate. This paper also presents a detailed summary of central-chest lymph-node characteristics for the first time.

  18. Exploring conformational search protocols for ligand-based virtual screening and 3-D QSAR modeling.

    PubMed

    Cappel, Daniel; Dixon, Steven L; Sherman, Woody; Duan, Jianxin

    2015-02-01

    3-D ligand conformations are required for most ligand-based drug design methods, such as pharmacophore modeling, shape-based screening, and 3-D QSAR model building. Many studies of conformational search methods have focused on the reproduction of crystal structures (i.e. bioactive conformations); however, for ligand-based modeling the key question is how to generate a ligand alignment that produces the best results for a given query molecule. In this work, we study different conformation generation modes of ConfGen and the impact on virtual screening (Shape Screening and e-Pharmacophore) and QSAR predictions (atom-based and field-based). In addition, we develop a new search method, called common scaffold alignment, that automatically detects the maximum common scaffold between each screening molecule and the query to ensure identical coordinates of the common core, thereby minimizing the noise introduced by analogous parts of the molecules. In general, we find that virtual screening results are relatively insensitive to the conformational search protocol; hence, a conformational search method that generates fewer conformations could be considered "better" because it is more computationally efficient for screening. However, for 3-D QSAR modeling we find that more thorough conformational sampling tends to produce better QSAR predictions. In addition, significant improvements in QSAR predictions are obtained with the common scaffold alignment protocol developed in this work, which focuses conformational sampling on parts of the molecules that are not part of the common scaffold. PMID:25408244

  19. Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structural Information

    NASA Astrophysics Data System (ADS)

    Hosoi, F.

    2014-12-01

    Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD); we refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as voxel attributes in a 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, thus eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using each type of lidar alone. Based on the estimation results, we proposed an index named the laser beam coverage index, Ω, which relates to the lidar's laser beam settings and a laser beam attenuation factor. It was shown that this index can be used for adjusting the measurement set-up of lidar systems and also for explaining the LAD estimation error of different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of the voxel tree modeling. In this method, a voxel solid model of a target tree was produced from the lidar image, which is composed of
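
    A much simplified sketch of the voxel-based canopy profiling computation is given below: voxels are classified as intercepted or passed through, the laser contact frequency is computed per horizontal layer, and converted to an LAD profile. The angle-dependent correction term of the full VCP method is reduced to a single assumed constant.

```python
# Per-layer leaf area density from voxelized laser interception counts
# (heavily simplified version of the VCP computation).
import numpy as np

def lad_profile(intercepted, passed, voxel_height, correction=1.1):
    """intercepted, passed: (nx, ny, nz) counts of beams ending in / traversing each voxel."""
    n_i = intercepted.sum(axis=(0, 1)).astype(float)          # hits per horizontal layer
    n_p = passed.sum(axis=(0, 1)).astype(float)               # pass-throughs per layer
    contact_freq = np.divide(n_i, n_i + n_p, out=np.zeros_like(n_i),
                             where=(n_i + n_p) > 0)
    return correction * contact_freq / voxel_height           # LAD (m^2 / m^3) per layer
```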

  20. Dynamic WIFI-Based Indoor Positioning in 3D Virtual World

    NASA Astrophysics Data System (ADS)

    Chan, S.; Sohn, G.; Wang, L.; Lee, W.

    2013-11-01

    A web-based system based on the 3DTown project is proposed using the Google Earth plug-in that brings information from indoor positioning devices and real-time sensors into an integrated 3D indoor and outdoor virtual world to visualize the dynamics of urban life within the 3D context of a city. We addressed a limitation of the 3DTown project, with particular emphasis on the video surveillance cameras used for indoor tracking purposes. The proposed solution was to utilize wireless local area network (WLAN) WiFi as a replacement technology for localizing objects of interest, due to the widespread availability and large coverage area of WiFi in indoor building spaces. Indoor positioning was performed using WiFi without modifying the existing building infrastructure or introducing additional access points (APs). A hybrid probabilistic approach was used for indoor positioning based on a previously recorded WiFi fingerprint database in the Petrie Science and Engineering building at York University. In addition, we have developed a 3D building modeling module that allows for efficient reconstruction of outdoor building models to be integrated with indoor building models; a sensor module for receiving, distributing, and visualizing real-time sensor data; and a web-based visualization module for users to explore the dynamic urban life in a virtual world. In order to solve the problems in the implementation of the proposed system, we introduce approaches for the integration of indoor building models with indoor positioning data, as well as real-time sensor information and visualization on the web-based system. In this paper we report the preliminary results of our prototype system, demonstrating the system's capability to implement a dynamic 3D indoor and outdoor virtual world that is composed of discrete modules connected through pre-determined communication protocols.
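
    A minimal probabilistic fingerprinting sketch is shown below: each reference point stores the per-AP RSSI mean and standard deviation, and the position estimate is the likelihood-weighted average of the reference-point coordinates. The data layout is hypothetical and this is not the project's positioning module.

```python
# Likelihood-weighted position estimate from a WiFi fingerprint database.
import numpy as np

def estimate_position(observed_rssi, fp_means, fp_stds, coords):
    """
    observed_rssi: (n_aps,) RSSI vector measured by the device
    fp_means:      (n_points, n_aps) mean RSSI recorded at each reference point
    fp_stds:       (n_points, n_aps) RSSI standard deviation at each reference point
    coords:        (n_points, 3) x, y, floor coordinates of the reference points
    """
    var = fp_stds ** 2 + 1e-6
    log_lik = -0.5 * np.sum((observed_rssi - fp_means) ** 2 / var
                            + np.log(2 * np.pi * var), axis=1)
    weights = np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    return weights @ coords                # weighted average of reference positions
```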

  1. Novel 3D bismuth-based coordination polymers: Synthesis, structure, and second harmonic generation properties

    SciTech Connect

    Wibowo, Arief C.; Smith, Mark D.; Yeon, Jeongho; Halasyamani, P. Shiv; Loye, Hans-Conrad zur

    2012-11-15

    Two new 3D bismuth-containing coordination polymers are reported along with their single crystal structures and SHG properties. Compound 1, Bi2O2(pydc) (pydc = pyridine-2,5-dicarboxylate), crystallizes in the monoclinic, polar space group P21 (a = 9.6479(9) Å, b = 4.2349(4) Å, c = 11.9615(11) Å, β = 109.587(1)°) and contains Bi2O2 chains that are connected into a 3D structure via the pydc ligands. Compound 2, Bi4Na4(1R3S-cam)8(EtOH)3.1(H2O)3.4 (1R3S-cam = 1R3S-camphoric acid), crystallizes in the monoclinic, polar space group P21 (a = 19.0855(7) Å, b = 13.7706(5) Å, c = 19.2429(7) Å, β = 90.701(1)°) and is a true 3D coordination polymer. These are two examples of SHG compounds prepared using unsymmetric ligands (compound 1) or chiral ligands (compound 2), together with metals that often exhibit stereochemically active lone pairs, such as Bi3+, a synthetic approach that results in polar, non-centrosymmetric, 3D metal-organic coordination polymers. Graphical Abstract: Structures of two new, polar, 3D bismuth(III)-based coordination polymers: Bi2O2(pydc) (compound 1) and Bi4Na4(1R3S-cam)8(EtOH)3.1(H2O)3.4 (compound 2). Highlights: • New, polar, 3D bismuth(III)-based coordination polymers. • First polar bismuth-based coordination polymers synthesized via a 'hybrid' strategy. • Combination of stereochemically active lone pairs and unsymmetric or chiral ligands. • Synthesis of class C SHG materials based on the Kurtz-Perry categories.

  2. Physics-based approach to haptic display

    NASA Technical Reports Server (NTRS)

    Brown, J. Michael; Colgate, J. Edward

    1994-01-01

    This paper addresses the implementation of complex multiple degree of freedom virtual environments for haptic display. We suggest that a physics based approach to rigid body simulation is appropriate for hand tool simulation, but that currently available simulation techniques are not sufficient to guarantee successful implementation. We discuss the desirable features of a virtual environment simulation, specifically highlighting the importance of stability guarantees.

  3. Graph-Based Compression of Dynamic 3D Point Cloud Sequences.

    PubMed

    Thanou, Dorina; Chou, Philip A; Frossard, Pascal

    2016-04-01

    This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way. PMID:26891486
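
    The predictive-coding idea can be illustrated with a deliberately simplified sketch in which per-point motion is estimated by nearest-neighbour matching and only quantized motion plus residuals would be coded; the paper's spectral graph wavelet descriptors and graph-regularized motion interpolation are not reproduced.

```python
# Crude inter-frame prediction for a dynamic point cloud: nearest-neighbour motion,
# quantized motion vectors, and residuals that would be entropy-coded.
import numpy as np
from scipy.spatial import cKDTree

def predictive_residuals(points_prev, points_curr, step=0.01):
    """points_*: (N, 3) arrays; returns (quantized motion vectors, prediction residuals)."""
    _, idx = cKDTree(points_prev).query(points_curr)          # nearest previous point
    motion = points_curr - points_prev[idx]                   # per-point motion estimate
    motion_q = np.round(motion / step) * step                 # quantized motion to transmit
    residual = points_curr - (points_prev[idx] + motion_q)    # residual left to code
    return motion_q, residual
```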

  4. Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali

    2006-03-01

    An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location; the GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and with manual registration, which was used as the gold standard. Results showed that our GA implementation is a robust algorithm and gives results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
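
    A minimal genetic-algorithm loop for registration is sketched below for a 2D rigid transform (translation plus rotation) that maximizes a simple similarity measure. The population size, mutation scale and the negative-SSD fitness are illustrative assumptions, not the study's settings.

```python
# Tiny genetic algorithm for 2D rigid registration: selection, crossover, mutation.
import numpy as np
from scipy.ndimage import rotate as nd_rotate, shift as nd_shift

def similarity(fixed, moving, params):
    """Negative mean squared difference after applying (tx, ty, angle in degrees)."""
    tx, ty, angle = params
    warped = nd_shift(nd_rotate(moving, angle, reshape=False), (ty, tx))
    return -np.mean((fixed - warped) ** 2)

def ga_register(fixed, moving, pop_size=30, generations=40, sigma=(2.0, 2.0, 2.0)):
    rng = np.random.default_rng(0)
    # each individual is a candidate transform (tx, ty, angle)
    pop = rng.uniform([-10.0, -10.0, -15.0], [10.0, 10.0, 15.0], size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([similarity(fixed, moving, p) for p in pop])
        parents = pop[np.argsort(fitness)][pop_size // 2:]          # keep fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(3) < 0.5                              # uniform crossover
            children.append(np.where(mask, a, b) + rng.normal(0.0, sigma))  # mutation
        pop = np.vstack([parents, np.array(children)])
    fitness = np.array([similarity(fixed, moving, p) for p in pop])
    return pop[np.argmax(fitness)]                                  # best (tx, ty, angle)
```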

  5. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescence image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is the clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescence images of skin tumors (basal cell carcinoma).
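
    The recovery of the spatial distributions by 3-mode multiplication can be sketched directly with NumPy, given an image tensor and an estimated matrix of spectral profiles; the Tucker3/PARAFAC identification and clustering steps are not reproduced here.

```python
# Recover per-material spatial distributions via 3-mode multiplication with the
# pseudo-inverse of the spectral-profile matrix.
import numpy as np

def unmix(image_tensor, spectral_profiles):
    """image_tensor: (H, W, S) multispectral image; spectral_profiles: (S, M).
    Returns an (H, W, M) tensor of material spatial distributions."""
    A_pinv = np.linalg.pinv(spectral_profiles)             # (M, S)
    return np.einsum('ms,hws->hwm', A_pinv, image_tensor)  # multiply along the spectral mode
```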

  6. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach that uses depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of the coded video data by 15% on average (delta bit rate reduction), which results in 13% total bit rate savings for the MVD data over the state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, and this work resulted in even higher compression efficiency, bringing about 20% delta bit rate reduction in total for the coded MVD data over the reference MVC+D coding. Considering the significant gains, the coding approach proposed in this paper can be beneficial for the development of new 3D video coding standards.

  7. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to navigate fully using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. Firstly, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. To this end, we designed a line-segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust to various image changes. Secondly, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and estimate the pose of the target efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.
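
    The tracking stage combines a particle filter with an observation likelihood. The sketch below shows the generic predict-weight-resample loop on a scalar toy state; in the paper the state is the 6-DoF pose and the likelihood comes from a line-segment distance, neither of which is reproduced here.

```python
# Minimal particle filter sketch: the state and Gaussian likelihood are toy
# stand-ins for the 6-DoF pose and line-segment similarity of the paper.
import numpy as np

rng = np.random.default_rng(2)

n_particles, n_steps = 500, 50
true_state = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(n_steps):
    # The true target drifts; we receive a noisy observation of it.
    true_state += 0.1
    observation = true_state + rng.normal(0.0, 0.2)

    # Predict: propagate particles through a random-walk motion model.
    particles += rng.normal(0.0, 0.1, n_particles)

    # Update: weight particles by the observation likelihood.
    weights = np.exp(-0.5 * ((observation - particles) / 0.2) ** 2)
    weights /= weights.sum()

    # Resample in proportion to the weights to avoid degeneracy.
    particles = rng.choice(particles, size=n_particles, p=weights)
    weights = np.full(n_particles, 1.0 / n_particles)

print("true state:", true_state, "estimate:", particles.mean())
```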

  8. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán.; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

    This paper describes a 3D face recognition algorithm that is based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, a feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, we apply Gabor and Gauss-Laguerre filter banks to the input image data, which yields 12 resulting feature vectors. Each representation is compared with its corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on the 13 comparison scores using a Support Vector Machine (SVM) classifier.
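
    The score-level fusion step can be sketched as follows: the 13 comparison scores form a feature vector that an SVM classifies as a genuine or impostor comparison. The scores below are synthetic; the paper's actual score distributions and SVM settings are not reproduced.

```python
# Sketch of score-level fusion with an SVM over 13 synthetic comparison scores.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_scores = 13  # 12 filter-bank representations + iso-geodesic curves

# Synthetic training data: genuine comparisons score higher on average.
genuine = rng.normal(0.7, 0.15, size=(200, n_scores))
impostor = rng.normal(0.3, 0.15, size=(200, n_scores))
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(200), np.zeros(200)])

fusion = SVC(kernel='rbf', probability=True).fit(X, y)

# A new probe-gallery comparison yields 13 scores; the fused decision:
probe_scores = rng.normal(0.65, 0.15, size=(1, n_scores))
print("genuine probability:", fusion.predict_proba(probe_scores)[0, 1])
```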

  9. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we describe a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to an HTML5-supported web browser on the client side. Compared to a traditional local visualization solution, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. With this web-based design, users can access the 3D medical image visualization service wherever internet access is available.

  10. An efficient solid modeling system based on a hand-held 3D laser scan device

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    Hand-held 3D laser scanners sold on the market are appealing because they are portable and convenient to use, but they are expensive. Developing such a system from cheap devices using the same principles as the commercial systems is impractical. In this paper, a simple hand-held 3D laser scanner is developed based on a volume reconstruction method using cheap devices. Unlike conventional laser scanners that collect a point cloud of the object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.

  11. Metric Calibration of a Focused Plenoptic Camera Based on a 3D Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  12. 3D Mandibular Superimposition: Comparison of Regions of Reference for Voxel-Based Registration

    PubMed Central

    Ruellas, Antonio Carlos de Oliveira; Yatabe, Marilia Sayako; Souki, Bernardo Quiroga; Benavides, Erika; Nguyen, Tung; Luiz, Ronir Raggio; Franchi, Lorenzo; Cevidanes, Lucia Helena Soares

    2016-01-01

    Introduction: The aim was to evaluate three regions of reference (Björk, Modified Björk and Mandibular Body) for mandibular registration by testing them on a sample of patients' CBCTs. Methods: Mandibular 3D volumetric label maps were built from CBCTs taken before (T1) and after treatment (T2) in a sample of 16 growing subjects and labeled with eight landmarks. Registrations of T1 and T2 images relative to the different regions of reference were performed, and 3D surface models were generated. Seven mandibular dimensions were measured separately for each time point (T1 and T2) in relation to a stable reference structure (lingual cortical plate of the symphysis), and the T2-T1 differences were calculated. These differences were compared to differences measured between the superimposed T2 (generated from the different regions of reference: Björk, Modified Björk and Mandibular Body) and the T1 surface models. ICC and the Bland-Altman method tested the agreement between the changes obtained by non-superimposition measurements from the patients' sample and the changes between the overlapped surfaces after registration using the different regions of reference. Results: The Björk region of reference (or mask) worked properly in only 2 of 16 patients. Evaluating the two other masks (Modified Björk and Mandibular Body) on the patients' scan registrations, the concordance and agreement of the changes obtained from superimpositions (registered T2 over T1), compared to results obtained from non-superimposed T1 and T2 separately, indicated that the Mandibular Body mask displayed more consistent results. Conclusions: The Mandibular Body mask (mandible without teeth, alveolar bone, rami and condyles) is a reliable reference for 3D regional registration. PMID:27336366

  13. A thermo-responsive and photo-polymerizable chondroitin sulfate-based hydrogel for 3D printing applications.

    PubMed

    Abbadessa, A; Blokzijl, M M; Mouser, V H M; Marica, P; Malda, J; Hennink, W E; Vermonden, T

    2016-09-20

    The aim of this study was to design a hydrogel system based on methacrylated chondroitin sulfate (CSMA) and a thermo-sensitive poly(N-(2-hydroxypropyl) methacrylamide-mono/dilactate)-polyethylene glycol triblock copolymer (M15P10) as a suitable material for additive manufacturing of scaffolds. CSMA was synthesized by reaction of chondroitin sulfate with glycidyl methacrylate (GMA) in dimethylsulfoxide at 50 °C, and its degree of methacrylation was tunable up to 48.5% by changing reaction time and GMA feed. Unlike polymer solutions composed of CSMA alone (20% w/w), mixtures based on 2% w/w of CSMA and 18% of M15P10 showed strain-softening, thermo-sensitive and shear-thinning properties more pronounced than those found for polymer solutions based on M15P10 alone. Additionally, they displayed a yield stress of 19.2 ± 7.0 Pa. The 3D printing of this hydrogel resulted in the generation of constructs with tailorable porosity and good handling properties. Finally, embedded chondrogenic cells remained viable and proliferating over a culture period of 6 days. The hydrogel described herein represents a promising biomaterial for cartilage 3D printing applications. PMID:27261741

  14. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions such as nucleic acids, water, ions and drug molecules, which can therefore also be described in the PDB format and have been deposited in the PDB database. A PDB file is machine generated and not in a human-readable format; a computational tool is needed to read and interpret it. The objective of our present study was to develop free online software for the retrieval, visualization and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in a human-readable format, i.e., the information in the PDB file is converted into readable sentences. It displays all available information from a PDB file, including the 3D structure described by that file. Programming and scripting languages such as Perl, CSS, JavaScript, Ajax and HTML have been used for the development of PDB Explorer. PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home with no log-in requirement. PMID:25118648
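
    The core of such a tool is parsing the fixed-column records of a PDB file. A minimal sketch of reading ATOM/HETATM records and printing them as readable sentences is shown below; the file name is hypothetical and error handling is omitted.

```python
# Sketch of parsing ATOM/HETATM records from a PDB file into readable text.
def parse_pdb_atoms(path):
    atoms = []
    with open(path) as handle:
        for line in handle:
            if line.startswith(("ATOM", "HETATM")):
                atoms.append({
                    "serial": int(line[6:11]),
                    "name": line[12:16].strip(),
                    "residue": line[17:20].strip(),
                    "chain": line[21].strip(),
                    "res_seq": int(line[22:26]),
                    # Orthogonal coordinates in Angstroms (fixed columns 31-54).
                    "x": float(line[30:38]),
                    "y": float(line[38:46]),
                    "z": float(line[46:54]),
                })
    return atoms

if __name__ == "__main__":
    # "example.pdb" is a placeholder file name.
    for atom in parse_pdb_atoms("example.pdb")[:5]:
        print(f"Atom {atom['name']} of residue {atom['residue']}{atom['res_seq']} "
              f"in chain {atom['chain']} is at ({atom['x']}, {atom['y']}, {atom['z']})")
```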

  15. Correlation-based discrimination between cardiac tissue and blood for segmentation of 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Saris, Anne E. C. M.; Nillesen, Maartje M.; Lopata, Richard G. P.; de Korte, Chris L.

    2013-03-01

    Automated segmentation of 3D echocardiographic images in patients with congenital heart disease is challenging, because the boundary between blood and cardiac tissue is poorly defined in some regions. Cardiologists mentally incorporate the movement of the heart, using the temporal coherence of structures to resolve ambiguities. Therefore, we investigated the merit of temporal cross-correlation for automated segmentation over the entire cardiac cycle. Optimal settings for maximum cross-correlation (MCC) calculation, based on a 3D cross-correlation based displacement estimation algorithm, were determined to obtain the best contrast between blood and myocardial tissue over the entire cardiac cycle. The resulting envelope-based as well as RF-based MCC values were used as an additional external force in a deformable model approach to segment the left-ventricular cavity over the entire systolic phase. MCC values were tested against, and combined with, adaptively filtered, demodulated RF data. Segmentation results were compared with manually segmented volumes using a 3D Dice Similarity Index (3DSI). Results in 3D pediatric echocardiographic image sequences (n = 4) demonstrate that incorporating temporal information improves segmentation. The use of MCC values, either alone or in combination with adaptively filtered, demodulated RF data, resulted in an increase of the 3DSI in 75% of the cases (average 3DSI increase: 0.71 to 0.82). Results might be further improved by optimizing MCC contrast locally, in regions with low blood-tissue contrast. Reducing the underestimation of the endocardial volume caused by the MCC processing scheme (choice of window size) and the consequent border misalignment could also lead to more accurate segmentations. Furthermore, increasing the frame rate will also increase MCC contrast and thus improve segmentation.
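
    A minimal sketch of the MCC computation is shown below: for a window around a pixel, the normalized cross-correlation with displaced windows in the next frame is maximized over a small search region. Window and search sizes are illustrative; the paper works on envelope and RF data rather than the synthetic frames used here.

```python
# Sketch of a maximum cross-correlation (MCC) value between consecutive frames.
import numpy as np

def normalized_cc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def max_cross_correlation(frame1, frame2, center, win=9, search=3):
    """MCC of the window around 'center' in frame1 over displaced windows in frame2."""
    r, c = center
    h = win // 2
    ref = frame1[r - h:r + h + 1, c - h:c + h + 1]
    best = -1.0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = frame2[r + dr - h:r + dr + h + 1, c + dc - h:c + dc + h + 1]
            best = max(best, normalized_cc(ref, cand))
    return best

rng = np.random.default_rng(4)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(1, 2), axis=(0, 1)) + 0.05 * rng.random((64, 64))
print("MCC at (32, 32):", max_cross_correlation(frame1, frame2, (32, 32)))
```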

  16. Edge features extraction from 3D laser point cloud based on corresponding images

    NASA Astrophysics Data System (ADS)

    Li, Xin-feng; Zhao, Zi-ming; Xu, Guo-qing; Geng, Yan-long

    2013-09-01

    An extraction method for edge features from a 3D laser point cloud based on corresponding images is proposed. After registration of the point cloud and the corresponding image, sub-pixel edges are extracted from the image using a gray-moment algorithm. The sub-pixel edges are then projected onto the point cloud by fitting scan lines. Finally, the edge features are obtained by linking the crossing points. The experimental results demonstrate that the method guarantees accurate, fine extraction.
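
    The registration between the point cloud and the image amounts to relating 3D laser points to image pixels, which can be sketched with a pinhole camera model. The intrinsic and extrinsic parameters below are hypothetical placeholders for the calibration the paper assumes.

```python
# Sketch of projecting 3D laser points into the corresponding image.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # focal lengths and principal point (px)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # rotation from laser frame to camera frame
t = np.array([0.1, 0.0, 0.0])            # translation (metres)

def project(points_3d):
    """Project Nx3 laser points to Nx2 pixel coordinates."""
    cam = points_3d @ R.T + t            # transform into the camera frame
    uvw = cam @ K.T                      # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division

points = np.array([[0.0, 0.0, 2.0],
                   [0.5, -0.2, 3.0]])
print(project(points))
```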

  17. A new algorithm of laser 3D visualization based on space-slice

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Song, Yanfeng; Song, Yong; Cao, Jie; Hao, Qun

    2013-12-01

    Traditional visualization algorithms based on three-dimensional (3D) laser point cloud data consist of two steps: splitting the point cloud data into different target objects, and establishing 3D surface models of the target objects to realize visualization using interpolation points or surface fitting. However, most of these algorithms suffer from disadvantages such as low efficiency and loss of image detail. To cope with these problems, a 3D visualization algorithm based on space slices is proposed in this paper, which includes two steps: data classification and image reconstruction. In the first step, an edge detection method is used to check parametric continuity and extract edges, preliminarily classifying the data into different target regions. In the second step, the divided data are split further into space slices according to their coordinates. Based on the space slices of the point cloud data, one-dimensional interpolation methods are adopted to smooth the curves formed by each group of point cloud data. Finally, the interpolation points obtained from each group are used to compute the fitting surface. As expected, the visual morphology of the objects is obtained. Simulation results compared with real scenes show that the final visual images have explicit details and that the overall visual result is natural.
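
    A minimal sketch of the space-slice interpolation step is given below: points are binned into slices along one coordinate and a 1D interpolation smooths each slice's profile before surface fitting. The slice width, sampling density and toy surface are assumptions for illustration.

```python
# Sketch of the space-slice idea: bin points into slices along one axis and
# smooth each slice's profile with a 1D interpolation.
import numpy as np

rng = np.random.default_rng(5)

# Toy point cloud lying near the surface z = sin(x) + 0.3*y, with noise.
pts = rng.uniform([0, 0], [2 * np.pi, 2.0], size=(2000, 2))
cloud = np.column_stack([pts, np.sin(pts[:, 0]) + 0.3 * pts[:, 1]
                         + 0.02 * rng.normal(size=2000)])

slice_width = 0.2
curves = []
for y0 in np.arange(0.0, 2.0, slice_width):
    # Select the slice of points whose y coordinate falls in [y0, y0 + slice_width).
    sl = cloud[(cloud[:, 1] >= y0) & (cloud[:, 1] < y0 + slice_width)]
    sl = sl[np.argsort(sl[:, 0])]                     # order along x within the slice
    x_dense = np.linspace(0.0, 2 * np.pi, 100)
    z_dense = np.interp(x_dense, sl[:, 0], sl[:, 2])  # 1D interpolation of the profile
    curves.append(np.column_stack([x_dense,
                                   np.full_like(x_dense, y0 + slice_width / 2),
                                   z_dense]))

surface_points = np.vstack(curves)                    # samples for subsequent surface fitting
print("interpolated surface samples:", surface_points.shape)
```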

  18. Synchrotron radiation-based characterization of interconnections in microelectronics: recent 3D results

    NASA Astrophysics Data System (ADS)

    Bleuet, P.; Audoit, G.; Bertheau, J.; Charbonnier, J.; Cloetens, P.; Djomeni Weleguela, M. L.; Ferreira Sanchez, D.; Hodaj, F.; Gergaud, P.; Lorut, F.; Micha, J.-S.; Thuaire, A.; Ulrich, O.

    2014-09-01

    In microelectronics, more and more attention is paid to the physical characterization of interconnections, to gain a better understanding of reliability issues such as voiding, cracking and performance degradation. These interconnections have a 3D architecture with features in the deep sub-micrometer range, requiring a probe with high spatial resolution and high penetration depth. Third-generation synchrotron sources are the ideal candidate for this, and we show hereafter the potential of synchrotron-based hard X-ray nanotomography to investigate the morphology of through-silicon vias (TSVs) and copper pillars, using projection (holotomography) and scanning (fluorescence) 3D imaging, based on a series of experiments performed at the ESRF. In particular, we highlight the benefits of the method for characterizing voids, as well as the distribution of intermetallics in copper pillars, which play a critical role in device reliability. Beyond morphological imaging, an original acquisition scheme based on scanning Laue tomography is introduced. It consists of performing a raster scan (z, θ) of a sample illuminated by a synchrotron polychromatic beam while recording diffraction data. After processing and image reconstruction, it allows for 3D reconstruction of grain orientation, strain and stress in copper TSVs and also in the surrounding Si matrix.

  19. Active illumination based 3D surface reconstruction and registration for image guided medialization laryngoplasty

    NASA Astrophysics Data System (ADS)

    Jin, Ge; Lee, Sang-Joon; Hahn, James K.; Bielamowicz, Steven; Mittal, Rajat; Walsh, Raymond

    2007-03-01

    Medialization laryngoplasty is a surgical procedure to improve the voice function of patients with vocal fold paresis and paralysis. An image-guided system for medialization laryngoplasty will help surgeons place the implant accurately and thus reduce the failure rate of the surgery. One of the fundamental challenges in an image-guided system is to accurately register the preoperative radiological data to the intraoperative anatomical structure of the patient. In this paper, we present a combined surface- and fiducial-based registration method to register the preoperative 3D CT data to the intraoperative surface of the larynx. To accurately model the exposed surface area, a structured-light-based stereo vision technique is used for the surface reconstruction. We combine a gray code pattern with multi-line shifting to generate the intraoperative surface of the larynx. To register the point clouds from the intraoperative stage to the preoperative 3D CT data, a shape-prior-based ICP method is proposed to quickly register the two surfaces. The proposed approach is capable of tracking the fiducial markers and reconstructing the surface of the larynx with no damage to the anatomical structure. We used off-the-shelf digital cameras, an LCD projector and a rapid 3D prototyper to develop our experimental system. The final RMS error of the registration is less than 1 mm.
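
    The structured-light step depends on decoding the projected Gray-code patterns into projector coordinates. The sketch below shows the per-pixel Gray-to-binary decoding on a synthetic pattern stack; thresholding of the captured images and the multi-line shifting refinement are not reproduced.

```python
# Sketch of Gray-code decoding in structured-light scanning: a stack of
# binarized pattern images is converted, per pixel, into a column index.
import numpy as np

def gray_to_binary(bits):
    """Convert Gray-code bit planes (n_bits, H, W), MSB first, to integer codes (H, W)."""
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = np.bitwise_xor(binary[i - 1], bits[i])   # b_i = b_{i-1} XOR g_i
    codes = np.zeros(bits.shape[1:], dtype=np.int64)
    for plane in binary:                                      # combine planes, MSB first
        codes = (codes << 1) | plane
    return codes

# Synthetic example: encode the column indices of a 4x8 "camera image" as Gray code.
cols = np.tile(np.arange(8), (4, 1))
gray = cols ^ (cols >> 1)                                     # binary-to-Gray encoding
bit_planes = np.array([(gray >> b) & 1 for b in range(2, -1, -1)])  # 3 bits, MSB first
print(np.array_equal(gray_to_binary(bit_planes), cols))       # True
```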

  20. Octree-Based SIMD Strategy for ICP Registration and Alignment of 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Eggert, D.; Dalyot, S.

    2012-07-01

    Matching and fusion of 3D point clouds, such as close-range laser scans, is important for creating an integrated 3D model data infrastructure. The Iterative Closest Point (ICP) algorithm for the alignment of point clouds is one of the most commonly used algorithms for matching rigid bodies. Evidently, scans are acquired from different positions and might present different data characteristics and accuracies, forcing complex data-handling issues. The growing demand for near real-time applications also introduces new computational requirements and constraints into such processes. This research proposes a methodology for solving the computational and processing complexities of the ICP algorithm by introducing specific performance enhancements that enable more efficient analysis and processing. An octree data structure, together with the caching of localized Delaunay-triangulation-based surface meshes, is implemented to increase computational efficiency and data handling. Parallelization of the ICP process is carried out using the Single Instruction, Multiple Data (SIMD) processing scheme, based on the divide-and-conquer multi-branched paradigm, enabling multiple processing elements to perform the same operation on multiple data elements independently and simultaneously. When compared to traditional non-parallel list processing, the octree-based SIMD strategy showed a sharp increase in computational performance and efficiency, together with a reliable and accurate alignment of large 3D point clouds, contributing to a qualitative and efficient application.
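
    A minimal sketch of the ICP iteration loop is shown below, using a k-d tree for the nearest-neighbour search (standing in for the octree of the paper) and a closed-form SVD update for the rigid transform. The data, iteration count and lack of parallelization are simplifications.

```python
# Sketch of basic ICP: nearest neighbours via a k-d tree and a Kabsch (SVD) update.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)

target = rng.uniform(-1.0, 1.0, size=(500, 3))
angle = 0.2
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
source = target @ R_true.T + np.array([0.1, -0.05, 0.2])

tree = cKDTree(target)
R, t = np.eye(3), np.zeros(3)

for _ in range(30):
    moved = source @ R.T + t
    _, idx = tree.query(moved)                    # closest target point per source point
    matched = target[idx]
    # Closed-form rigid update (Kabsch): align 'moved' to its matches.
    mu_m, mu_t = moved.mean(axis=0), matched.mean(axis=0)
    H = (moved - mu_m).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_step = Vt.T @ D @ U.T
    t_step = mu_t - R_step @ mu_m
    R, t = R_step @ R, R_step @ t + t_step        # accumulate the transform

moved = source @ R.T + t
_, idx = tree.query(moved)
print("RMS alignment error:", np.sqrt(np.mean(np.sum((moved - target[idx]) ** 2, axis=1))))
```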

  1. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on a 3D skeleton is presented. First, Microsoft's Kinect device is used to obtain body motion video from the frontal, oblique and side perspectives. Second, the skeleton joints are extracted, and global human features as well as local features of the arms and legs are computed to form a 3D skeletal feature set. Third, online dictionary learning is applied to the feature set to reduce the feature dimension. Finally, a linear support vector machine (LSVM) is used to obtain the behavior recognition results. The experimental results show that this method achieves a better recognition rate.
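
    The final classification stage can be sketched as a linear SVM applied to skeleton-derived feature vectors. The features below are synthetic Gaussian blobs; the paper's dictionary-learned features, class set and Kinect data are not reproduced.

```python
# Sketch of the classification stage: linear SVM on synthetic skeleton features.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

n_classes, n_per_class, feat_dim = 5, 80, 60   # e.g. 20 joints x 3D coordinates
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, feat_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC(max_iter=5000).fit(X_train, y_train)
print("recognition rate:", clf.score(X_test, y_test))
```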

  2. Meta-Model Based Optimisation Algorithms for Robust Optimization of 3D Forging Sequences

    SciTech Connect

    Fourment, Lionel

    2007-04-07

    In order to handle costly and complex 3D metal forming optimization problems, we develop a new optimization algorithm that finds satisfactory solutions within fewer than 50 iterations (function evaluations) in the presence of local extrema. It is based on the sequential approximation of the problem's objective function by the Meshless Finite Difference Method (MFDM). This evolving meta-model can take gradient information into account when it is available, but does not require it. It can be easily extended to take into account uncertainties on the optimization parameters. The new algorithm is first evaluated on analytic functions before being applied to a 3D forging benchmark: the preform tool shape optimization that minimizes the potential for fold formation during a two-step forging sequence.

  3. [Establishment of database with standard 3D tooth crowns based on 3DS MAX].

    PubMed

    Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun

    2009-08-01

    A database of standard 3D tooth crowns lays the groundwork for dental CAD/CAM systems. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database from these models. Firstly, some key lines are collected from standard tooth pictures. Then 3DS MAX 9.0 is used to design the digital tooth model based on these lines. During the design process, it is important to refer to the standard plaster tooth model. Testing shows that the standard tooth models designed with this method are accurate and adaptable; furthermore, it is very easy to perform operations on the models such as deformation and translation. This method provides a new approach to building a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems. PMID:19813628

  4. A web-based 3D visualisation and assessment system for urban precinct scenario modelling

    NASA Astrophysics Data System (ADS)

    Trubka, Roman; Glackin, Stephen; Lade, Oliver; Pettit, Chris

    2016-07-01

    Recent years have seen an increasing number of spatial tools and technologies for enabling better decision-making in the urban environment. They have largely arisen because of the need for cities to be planned more efficiently to accommodate growing populations while mitigating urban sprawl, and also because innovations in rendering data in 3D are well suited to visualising the urban built environment. In this paper we review a number of systems that are better known and more commonly used in the field of urban planning. We then introduce Envision Scenario Planner (ESP), a web-based 3D precinct geodesign, visualisation and assessment tool developed using Agile and co-design methods. We provide a comprehensive account of the tool, beginning with a discussion of its design and development process and concluding with an example use case and a discussion of the lessons learned in its development.

  5. Design and application of a virtual reality 3D engine based on rapid indices

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Mai, Jin

    2007-06-01

    This article proposes a data structure for a 3D engine based on rapid indices. Taking a model as the construction unit, this data structure can rapidly build a coordinate array of 3D vertices and arrange those vertices in sequences of triangle strips or triangle fans, which can be rendered rapidly by OpenGL. The data structure is easy to extend: it can hold texture coordinates, vertex normal coordinates and a model matrix. Other models can be added to it, deleted from it, or transformed by the model matrix, so it is flexible. This data structure also improves the OpenGL rendering speed when it holds a large amount of data.

  6. Synesthetic art through 3-D projection: The requirements of a computer-based supermedium

    NASA Technical Reports Server (NTRS)

    Mallary, Robert

    1989-01-01

    A computer-based form of multimedia art is proposed that uses the computer to fuse aspects of painting, sculpture, dance, music, film, and other media into a one-to-one synesthesia of image and sound for spatially synchronous 3-D projection. Called synesthetic art, this conversion of many varied media into an aesthetically unitary experience determines the character and requirements of the system and its software. During the start-up phase, computer stereographic systems are unsuitable for software development. Eventually, a new type of illusory-projective supermedium will be required to achieve the needed combination of large-format projection and convincing real-life presence, and to handle the vast amount of 3-D visual and acoustic information required. The influence of the concept on the author's research and creative work is illustrated through two examples.

  7. Tunable fluorescence enhancement based on bandgap-adjustable 3D Fe3O4 nanoparticles

    NASA Astrophysics Data System (ADS)

    Hu, Fei; Gao, Suning; Zhu, Lili; Liao, Fan; Yang, Lulu; Shao, Mingwang

    2016-06-01

    Great progress has been made in fluorescence-based detection utilizing solid-state enhanced substrates in recent years. However, it is still difficult to achieve reliable substrates with tunable enhancement factors. The present work demonstrates liquid fluorescence-enhancing substrates consisting of suspensions of Fe3O4 nanoparticles (NPs), which can assemble into 3D photonic crystals under an external magnetic field. The photonic bandgap, induced by the equilibrium of the attractive magnetic force and the repulsive electrostatic force between adjacent Fe3O4 NPs, is utilized to enhance the fluorescence intensity of dye molecules (including R6G, RB, Cy5 and DMTPS-DCV) in a reversible and controllable manner. The results show that a maximum of 12.3-fold fluorescence enhancement is realized in the 3D Fe3O4 NP substrates, without the use of metal particles, for PCs/DMTPS-DCV (1.0 × 10^-7 M, water fraction (fw) = 90%).

  8. Nodes Localization in 3D Wireless Sensor Networks Based on Multidimensional Scaling Algorithm

    PubMed Central

    2014-01-01

    In recent years, there has been huge advancement in wireless sensor computing technology. Today, the wireless sensor network (WSN) has become a key technology for different types of smart environments. Node localization in WSNs has arisen as a very challenging problem in the research community. Most applications of WSNs are not useful without a priori known node positions. Adding GPS receivers to each node is an expensive solution and inapplicable for indoor environments. In this paper, we implemented and evaluated an algorithm based on the multidimensional scaling (MDS) technique for three-dimensional (3D) node localization in WSNs, using an improved heuristic method for distance calculation. Using extensive simulations, we investigated our approach with respect to various network parameters. We compared the simulation results with other approaches for 3D WSN localization and showed that our approach outperforms other techniques in terms of accuracy.
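
    A minimal sketch of the classical MDS step is given below: the squared pairwise distance matrix is double-centred and its leading eigenvectors give relative 3D coordinates. The node placement and distance noise are synthetic, and the paper's improved heuristic distance calculation is not reproduced.

```python
# Sketch of classical MDS for 3D node localization from noisy pairwise distances.
# The relative map it produces would still need anchors to fix the absolute frame.
import numpy as np

rng = np.random.default_rng(8)

n_nodes = 30
positions = rng.uniform(0.0, 100.0, size=(n_nodes, 3))  # ground truth (unknown in practice)
D = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
D += rng.normal(0.0, 0.5, D.shape)                       # measurement noise
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

# Classical MDS: double-centre the squared distance matrix and take the
# top-3 eigenvectors as relative 3D coordinates.
J = np.eye(n_nodes) - np.ones((n_nodes, n_nodes)) / n_nodes
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:3]
relative_coords = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

print("relative 3D coordinates of first node:", relative_coords[0])
```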

  9. Tunable fluorescence enhancement based on bandgap-adjustable 3D Fe3O4 nanoparticles.

    PubMed

    Hu, Fei; Gao, Suning; Zhu, Lili; Liao, Fan; Yang, Lulu; Shao, Mingwang

    2016-06-17

    Great progress has been made in fluorescence-based detection utilizing solid-state enhanced substrates in recent years. However, it is still difficult to achieve reliable substrates with tunable enhancement factors. The present work demonstrates liquid fluorescence-enhancing substrates consisting of suspensions of Fe3O4 nanoparticles (NPs), which can assemble into 3D photonic crystals under an external magnetic field. The photonic bandgap, induced by the equilibrium of the attractive magnetic force and the repulsive electrostatic force between adjacent Fe3O4 NPs, is utilized to enhance the fluorescence intensity of dye molecules (including R6G, RB, Cy5 and DMTPS-DCV) in a reversible and controllable manner. The results show that a maximum of 12.3-fold fluorescence enhancement is realized in the 3D Fe3O4 NP substrates, without the use of metal particles, for PCs/DMTPS-DCV (1.0 × 10^-7 M, water fraction (fw) = 90%). PMID:27171125

  10. 3D structural analysis of proteins using electrostatic surfaces based on image segmentation

    PubMed Central

    Vlachakis, Dimitrios; Champeris Tsaniras, Spyridon; Tsiliki, Georgia; Megalooikonomou, Vasileios; Kossida, Sophia

    2016-01-01

    Herein, we pr