Sample records for full 3-D view

  1. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and period of each nano-grating pixel. However, such a 3D display screen has been restricted to a limited size by the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared with E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other, 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence along the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the finite-difference time-domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was precisely aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
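
    The steering of each nano-grating pixel follows the first-order grating equation, sin θ = λ/Λ (normal incidence). As a rough illustration only (not the authors' design code; the function name and the 90° line-orientation convention are assumptions), the period and orientation needed to send light of a given wavelength toward a chosen view zone can be sketched as:

```python
import math

def grating_parameters(wavelength_nm, polar_deg, azimuth_deg):
    """First-order grating equation at normal incidence: sin(theta) = lambda / period.

    Returns the nano-grating period (nm) and in-plane orientation (deg)
    that steer light of the given wavelength toward a view zone at
    polar angle `polar_deg` and azimuth `azimuth_deg`.
    """
    theta = math.radians(polar_deg)
    period = wavelength_nm / math.sin(theta)  # grating period in nm
    # Grating lines run perpendicular to the desired azimuth direction.
    orientation = (azimuth_deg + 90.0) % 180.0
    return period, orientation

# Example: steer 532 nm light to a view zone 20 degrees off-normal.
period, orient = grating_parameters(532.0, 20.0, 0.0)
print(f"period = {period:.0f} nm, orientation = {orient:.0f} deg")
```

    Sub-micrometer periods like this are exactly why direct E-beam writing is slow, motivating the faster lithography system the abstract describes.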

  2. A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.

    PubMed

    Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G

    2013-03-21

    Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.

  3. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views and are used by compression and synthesis software to reconstruct the light field. However, most of the existing coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code the residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only improves rate-distortion performance but also better preserves the structure of the perceived light fields.

  4. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric warps the left and right views to intermediate virtual views using a depth-image-based rendering (DIBR) algorithm and compares the virtual views rendered from the different cameras using the structural similarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
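
    The SVC comparison step, scoring two DIBR-warped virtual views against each other with SSIM, can be illustrated with a minimal sketch. The standard metric averages SSIM over local windows; the single global window used here is a simplification that still shows the luminance/contrast/structure comparison:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified (single-window) SSIM between two images.

    The standard metric averages SSIM over local windows; one global
    window is enough to illustrate the comparison step.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Identical views score 1; a noisier warp scores lower.
rng = np.random.default_rng(0)
view_a = rng.integers(0, 256, (64, 64)).astype(np.float64)
view_b = view_a + rng.normal(0, 20, view_a.shape)
print(global_ssim(view_a, view_a))  # 1.0
print(global_ssim(view_a, view_b))  # < 1.0
```

    In SVC, a low score between the two independently warped virtual views signals rendering artifacts, without needing the original reference image.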

  5. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by exploiting double refraction in a uniaxial crystal. When linearly polarized images from a projector pass through the uniaxial crystal, two optical paths exist according to the polarization state of the image. The optical path of the image can therefore be switched, shifting the viewing zone laterally, so polarization modulation of the image from a single projection unit generates two viewing zones at different positions. To realize full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted using a conventional liquid crystal (LC) polarization switching device. In experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
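
    The lateral viewing-zone shift comes from the Poynting-vector walk-off of the extraordinary ray. A minimal sketch, assuming calcite indices n_o ≈ 1.658 and n_e ≈ 1.486 and the standard uniaxial walk-off relation tan θ_ray = (n_o/n_e)² tan θ (an illustration, not the authors' actual geometry):

```python
import math

def walkoff_deg(theta_deg, n_o=1.658, n_e=1.486):
    """Poynting-vector walk-off angle of the e-ray in a uniaxial crystal.

    theta_deg: angle between the wavevector and the optic axis.
    n_o, n_e: ordinary/extraordinary indices (calcite near 590 nm).
    The ray direction satisfies tan(theta_ray) = (n_o/n_e)**2 * tan(theta).
    """
    theta = math.radians(theta_deg)
    theta_ray = math.atan((n_o / n_e) ** 2 * math.tan(theta))
    return math.degrees(theta_ray - theta)

def lateral_shift_mm(thickness_mm, theta_deg):
    """Lateral o-/e-ray separation after a plate at normal incidence."""
    return thickness_mm * math.tan(math.radians(walkoff_deg(theta_deg)))

# Near 45 degrees the walk-off of calcite peaks at about 6.2 degrees,
# so a 10 mm plate separates the two polarizations by roughly 1 mm.
print(f"{walkoff_deg(45.0):.2f} deg, {lateral_shift_mm(10.0, 45.0):.2f} mm")
```

    A millimeter-scale separation at the crystal translates, through the projection optics, into the lateral viewing-zone shift the paper exploits.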

  6. Shape and 3D acoustically induced vibrations of the human eardrum characterized by digital holography

    NASA Astrophysics Data System (ADS)

    Khaleghi, Morteza; Furlong, Cosme; Cheng, Jeffrey Tao; Rosowski, John J.

    2014-07-01

    The eardrum, or tympanic membrane (TM), transfers acoustic energy from the ear canal (at the external ear) into mechanical motions of the ossicles (at the middle ear). The acousto-mechanical transformer behavior of the TM is determined by its shape and mechanical properties. To better understand hearing, full-field-of-view techniques are required to quantify the shape, nanometer-scale sound-induced displacement, and mechanical properties of the TM in 3D. In this paper, the full-field-of-view, three-dimensional shape and sound-induced displacement of the surface of the TM are obtained by methods of multiple wavelengths and multiple sensitivity vectors with lensless digital holography. Using our digital holographic systems, unique 3D information such as shape (with micrometer resolution), 3D acoustically induced displacement (with nanometer resolution), the full strain tensor (with nano-strain resolution), the 3D phase of motion, and the 3D direction cosines of the displacement vectors can be obtained in full field of view with a spatial resolution of about 3 million points on the surface of the TM and a temporal resolution of 15 Hz.
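
    Two-wavelength holographic shape measurement relies on the synthetic (beat) wavelength Λ = λ₁λ₂/|λ₁ − λ₂|, which sets the unambiguous contouring range. A small sketch with a hypothetical laser pair (the specific wavelengths are illustrative, not those used by the authors):

```python
def synthetic_wavelength_um(lambda1_nm, lambda2_nm):
    """Synthetic (beat) wavelength for two-wavelength holographic contouring.

    Shape is measured modulo a fraction of Lambda, so closely spaced
    wavelengths extend the unambiguous depth range far beyond a single
    optical wavelength while retaining micrometer-scale resolution.
    """
    return lambda1_nm * lambda2_nm / abs(lambda1_nm - lambda2_nm) / 1000.0

# Hypothetical laser pair: two red lines 0.5 nm apart give a ~0.8 mm
# synthetic wavelength, on the scale of the TM's depth relief.
print(f"{synthetic_wavelength_um(632.8, 632.3):.0f} um")
```

    Choosing the wavelength separation thus trades unambiguous range against sensitivity, which is why several wavelength pairs may be combined in practice.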

  7. Large-viewing-angle electroholography by space projection

    NASA Astrophysics Data System (ADS)

    Sato, Koki; Obana, Kazuki; Okumura, Toshimichi; Kanaoka, Takumi; Nishikawa, Satoko; Takano, Kunihiko

    2004-06-01

    A hologram reconstructs a full-parallax 3D image, which appears more natural because the focus and convergence cues coincide. Conventional electro-holography, however, offers a very small image viewing angle due to the limited pixel size of the display, so we are developing a space-projection method to obtain a large viewing angle and a practical electro-holography system. A white laser illuminates a single DMD panel (a time-shared CGH of the three RGB colors), and a 3D space screen formed from very small water particles reconstructs the 3D image with a large viewing angle through scattering by the water particles.

  8. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely, attaching color textures from a digital camera separately; the other uses an image-based approach that reconstructs the model with the color texture attached automatically. The point-wise sensing workflow reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance, and the final assembled image is glued back onto the 3D mesh to present a full-color building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view planning algorithm is developed to guide the photo-taking procedure; it generates a minimum set of view angles such that each model face appears in at least two, and no more than three, of the pictures taken from those angles. The 3D model can then be reconstructed with a minimum of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional terms. All test cases show exactly the same topology and a reasonably low dimensional error ratio, demonstrating the applicability of the algorithm.

  9. 3D multi-view convolutional neural networks for lung nodule classification

    PubMed Central

    Kang, Guixia; Hou, Beibei; Zhang, Ningbo

    2017-01-01

    The 3D convolutional neural network (CNN) can make full use of the spatial 3D context of lung nodules, and the multi-view strategy has been shown to improve the performance of 2D CNNs in classifying lung nodules. In this paper, we explore the classification of lung nodules using 3D multi-view convolutional neural networks (MV-CNN) with both chain and directed acyclic graph architectures, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. All results are obtained via 10-fold cross-validation. For the MV-CNN with chain architecture, the results show that the performance of the 3D MV-CNN surpasses that of the 2D MV-CNN by a significant margin. A 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both superior results for the corresponding task. We also compare the multi-view-one-network strategy with the one-view-one-network strategy; the results reveal that the former achieves a lower error rate. PMID:29145492

  10. Characteristics of mist 3D screen for projection type electro-holography

    NASA Astrophysics Data System (ADS)

    Sato, Koki; Okumura, Toshimichi; Kanaoka, Takumi; Koizumi, Shinya; Nishikawa, Satoko; Takano, Kunihiko

    2006-01-01

    A hologram reconstructs a full-parallax 3D image, which appears more natural because the focus and convergence cues coincide. Conventional electro-holography, however, offers a very small image viewing angle due to the limited pixel size of the display, so we are developing a space-projection method to obtain a large viewing angle and a practical electro-holography system. A white laser illuminates a single DMD panel (a time-shared CGH of the three RGB colors), and a 3D space screen formed from very small water particles reconstructs the 3D image with a large viewing angle through scattering by the water particles.

  11. Vertical viewing angle enhancement for the 360-degree integral-floating display using an anamorphic optic system.

    PubMed

    Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Yoo, Kwan-Hee; Baasantseren, Ganbat; Park, Jae-Hyeung; Kim, Eun-Soo; Kim, Nam

    2014-04-15

    We propose a 360-degree integral-floating display with an enhanced vertical viewing angle. The system projects two-dimensional elemental image arrays via a high-speed digital micromirror device projector and reconstructs them into 3D perspectives with a lens array. Double floating lenses relay the initial 3D perspectives to the center of a vertically curved convex mirror. An anamorphic optic system reshapes the initial 3D perspectives, dispersing the light rays more widely in the vertical direction. With the proposed method, the entire 3D image provides both monocular and binocular depth cues, a full-parallax presentation with high angular ray density, and an enhanced vertical viewing angle.

  12. Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.

    PubMed

    Kim, Hwi; Hahn, Joonku; Lee, Byoungho

    2009-04-13

    Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions on structural factors, such as the viewing angle of the facet panels and the observation distance, for a 3D display with omnidirectional full parallax are studied. As the main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.

  13. Barrier Coverage for 3D Camera Sensor Networks

    PubMed Central

    Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-01-01

    Barrier coverage, an important research area with respect to camera sensor networks, uses a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage, such as local face-view barrier coverage and full-view barrier coverage, typically assume that each intruder can be treated as a point. However, crucial features (e.g., size) of the intruder should be taken into account in real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage, with its more practical considerations, is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167

  14. Barrier Coverage for 3D Camera Sensor Networks.

    PubMed

    Si, Pengju; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-08-03

    Barrier coverage, an important research area with respect to camera sensor networks, uses a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage, such as local face-view barrier coverage and full-view barrier coverage, typically assume that each intruder can be treated as a point. However, crucial features (e.g., size) of the intruder should be taken into account in real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder's face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage, with its more practical considerations, is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks.

  15. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  16. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  17. Opportunity View on Sols 1803 and 1804 Stereo

    NASA Image and Video Library

    2009-03-03

    NASA's Mars Exploration Rover Opportunity combined images into this full-circle stereo view of the rover's surroundings. Tracks from the rover's drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. 3D glasses are needed to view this image.

  18. Opportunity View After Drive on Sol 1806 Stereo

    NASA Image and Video Library

    2009-03-03

    NASA's Mars Exploration Rover Opportunity combined images into this full-circle stereo view of the rover's surroundings. Tracks from the rover's drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. 3D glasses are needed to view this image.

  19. X-RAY IMAGING Achieving the third dimension using coherence

    DOE PAGES

    Robinson, Ian; Huang, Xiaojing

    2017-01-25

    X-ray imaging is extensively used in medicine and materials science. Traditionally, the depth dimension is obtained by turning the sample to gain different views. The famous penetrating properties of X-rays mean that projection views of the subject sample can be readily obtained in the linear absorption regime; 180 degrees of projections can then be combined using computed tomography (CT) methods to obtain a full 3D image, a technique extensively used in medical imaging. In the work now presented in Nature Materials, Stephan Hruszkewycz and colleagues have demonstrated genuine 3D imaging by a new method called 3D Bragg projection ptychography. The approach combines the 'side view' capability of Bragg diffraction from a crystalline sample with the coherence capabilities of ptychography, yielding a 3D image from a 2D raster scan of a coherent beam across a sample that does not have to be rotated.

  20. Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns

    NASA Astrophysics Data System (ADS)

    Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi

    2017-04-01

    In this study, a method to construct a full-colour volumetric display using a commercially available inkjet printer is presented. Photoreactive luminescent materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display, composed of multiple layers of transparent films, that yields a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm for 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions, and we experimentally demonstrate prototypes. These types of 3D volumetric structures and their fabrication methods, based on widely deployed existing printing technologies, can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.

  1. Virtual integral holography

    NASA Astrophysics Data System (ADS)

    Venolia, Dan S.; Williams, Lance

    1990-08-01

    A range of stereoscopic display technologies exist which are no more intrusive, to the user, than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point-of-view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display" technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of data bases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden surface elimination. Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder, which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme), but the three dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus bandwidth. 
This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display - film processing turnaround time, and the difficulties of displaying scenes in full color - are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.

  2. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    A current capsule endoscope uses one camera to capture images of the intestinal surface. It can locate an abnormal point but cannot recover detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: a 3D photographic capsule endoscope system. The system uses three cameras to capture images in real time, increasing the viewing range up to 2.99 times with respect to the two-camera system. Combined with a 3D monitor, the system provides exact information about symptom points, helping doctors diagnose disease.

  3. Autostereoscopic display technology for mobile 3DTV applications

    NASA Astrophysics Data System (ADS)

    Harrold, Jonathan; Woodgate, Graham J.

    2007-02-01

    Mobile TV is now a commercial reality, and an opportunity exists for the first mass-market 3DTV products based on cell phone platforms with switchable 2D/3D autostereoscopic displays. Compared to conventional cell phones, TV phones need to operate for extended periods with the display running at full brightness, so the efficiency of the 3D optical system is key. The desire for increased viewing freedom, providing greater viewing comfort, can be met by increasing the number of views presented. A four-view lenticular display will have a brightness five times greater than the equivalent parallax barrier display; lenticular displays are therefore very strong candidates for cell phone 3DTV. The selection of Polarisation Activated Microlens™ architectures for LCD, OLED and reflective display applications is described. The technology delivers significant advantages, especially for high-pixel-density panels, and optimises device ruggedness while maintaining display brightness. A significant manufacturing breakthrough is described, enabling switchable microlenses to be fabricated using a simple coating process that is also readily scalable to large TV panels. The 3D image performance of candidate 3DTV panels is also compared using autostereoscopic display optical output simulations.

  4. TH-AB-201-10: Portal Dosimetry with Elekta IViewDose:Performance of the Simplified Commissioning Approach Versus Full Commissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kydonieos, M; Folgueras, A; Florescu, L

    2016-06-15

    Purpose: Elekta recently developed a solution for in-vivo EPID dosimetry (iViewDose, Elekta AB, Stockholm, Sweden) in conjunction with the Netherlands Cancer Institute (NKI). It uses a simplified commissioning approach via Template Commissioning Models (TCMs), consisting of a subset of linac-independent pre-defined parameters. This work compares the performance of iViewDose using the TCM commissioning approach with that of full commissioning. Additionally, the dose reconstruction based on the simplified commissioning approach is validated via independent dose measurements. Methods: Measurements were performed at the NKI on a VersaHD™ (Elekta AB, Stockholm, Sweden). Treatment plans were generated with Pinnacle 9.8 (Philips Medical Systems, Eindhoven, The Netherlands). A Farmer chamber dose measurement and two EPID images were used to create a linac-specific commissioning model based on a TCM. A complete set of commissioning measurements was collected and a full commissioning model was created. The performance of iViewDose based on the two commissioning approaches was compared via a series of set-to-work tests in a slab phantom. In these tests, iViewDose reconstructs and compares EPID to TPS dose for square fields, IMRT and VMAT plans via global gamma analysis and isocentre dose difference. A clinical VMAT plan was delivered to a homogeneous Octavius 4D phantom (PTW, Freiburg, Germany). Dose was measured with the Octavius 1500 array and VeriSoft software was used for 3D dose reconstruction. EPID images were acquired. TCM-based iViewDose and 3D Octavius dose distributions were compared against the TPS. Results: For both the TCM-based and the full commissioning approaches, the pass rate, mean γ and dose difference were >97%, <0.5 and <2.5%, respectively. Equivalent gamma analysis results were obtained for iViewDose (TCM approach) and Octavius for a VMAT plan.
Conclusion: iViewDose produces similar results with the simplified and full commissioning approaches. Good agreement is obtained between iViewDose (simplified approach) and the independent measurement tool. This research is funded by Elekta Limited.
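The global gamma analysis used above combines a dose-difference criterion with a distance-to-agreement criterion and reports the fraction of points with gamma ≤ 1 as the pass rate. A minimal 1-D sketch of the idea follows; the function name, 3%/3 mm criteria, and test profile are illustrative choices, not the iViewDose implementation:

```python
import numpy as np

def global_gamma_1d(ref, eval_, spacing_mm, dd=0.03, dta_mm=3.0):
    """Minimal 1-D global gamma index (illustrative, not clinical code).

    ref, eval_ : dose profiles sampled on the same uniform grid
    spacing_mm : grid spacing in mm
    dd         : dose-difference criterion as a fraction of the global max
    dta_mm     : distance-to-agreement criterion in mm
    """
    ref = np.asarray(ref, dtype=float)
    eval_ = np.asarray(eval_, dtype=float)
    x = np.arange(len(ref)) * spacing_mm
    dose_norm = dd * ref.max()  # "global" gamma normalizes to the max dose
    gamma = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        # gamma is the minimum combined dose/distance deviation
        # over all evaluated points
        g2 = ((x - xi) / dta_mm) ** 2 + ((eval_ - di) / dose_norm) ** 2
        gamma[i] = np.sqrt(g2.min())
    return gamma

# identical profiles pass everywhere (gamma = 0 at every point)
d = np.exp(-np.linspace(-3, 3, 61) ** 2)
g = global_gamma_1d(d, d, spacing_mm=1.0)
pass_rate = np.mean(g <= 1.0) * 100
```

A pass rate above a threshold such as the >97% reported above would then be computed exactly as in the last line, over the full 2-D or 3-D dose grid.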

  5. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser.

    PubMed

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-17

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  6. Are 3-D coronal mass ejection parameters from single-view observations consistent with multiview ones?

    NASA Astrophysics Data System (ADS)

    Lee, Harim; Moon, Y.-J.; Na, Hyeonock; Jang, Soojeong; Lee, Jae-Ok

    2015-12-01

    To prepare for times when only single-view observations are available, we have tested whether the 3-D parameters (radial velocity, angular width, and source location) of halo coronal mass ejections (HCMEs) from single-view observations are consistent with those from multiview observations. For this test, we select 44 HCMEs from December 2010 to June 2011 with the following conditions: partial and full HCMEs by SOHO and limb CMEs by the twin STEREO spacecraft when they were approximately in quadrature. In this study, we compare the 3-D parameters of the HCMEs from three different methods: (1) a geometrical triangulation method, the STEREO CAT tool developed by NASA/CCMC, for multiview observations using STEREO/SECCHI and SOHO/LASCO data; (2) the graduated cylindrical shell (GCS) flux rope model for multiview observations using STEREO/SECCHI data; and (3) an ice cream cone model for single-view observations using SOHO/LASCO data. We find that the radial velocities and the source locations of the HCMEs from the three methods are highly consistent with one another, with high correlation coefficients (≥0.9). However, the angular widths from the ice cream cone model are noticeably underestimated for broad CMEs larger than 100° and for several partial HCMEs. A comparison between the 3-D CME parameters directly measured from the twin STEREO spacecraft and the above 3-D parameters shows that the parameters from multiview observations are more consistent with the STEREO measurements than those from single view.

  7. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  8. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a promising glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to understand in depth the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which exploits the inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in the lower layers on the performance of the error concealment algorithm is also presented.

  9. Analysis of the diffraction effects for a multi-view autostereoscopic three-dimensional display system based on shutter parallax barriers with full resolution

    NASA Astrophysics Data System (ADS)

    Meng, Yang; Yu, Zhongyuan; Jia, Fangda; Zhang, Chunyu; Wang, Ye; Liu, Yumin; Ye, Han; Chen, Laurence Lujun

    2017-10-01

    A multi-view autostereoscopic three-dimensional (3D) system is built using a 2D display screen and a customized parallax-barrier shutter (PBS) screen. The shutter screen is controlled dynamically by an address-driving matrix circuit and is placed in front of the display screen at a certain location. The system achieves the densest viewpoints owing to its special optical and geometric design, which is based on the concept of "eye space". The resolution of 3D imaging is not reduced compared to 2D mode, thanks to a limited time-division multiplexing technique. Diffraction effects may play an important role in 3D display imaging quality, especially for small screens such as an iPhone screen. For small screens, diffraction effects may contribute to crosstalk between binocular views and degrade image brightness uniformity. Therefore, diffraction effects are analyzed in a one-dimensional shutter-screen model of the 3D display, in which the propagation of light from display pixels through the parallax-barrier slits to each viewing zone in eye space is numerically simulated. The simulation results provide a criterion for the screen size above which diffraction effects are negligible and below which they must be taken into account. Finally, the simulation results are compared to the corresponding experimental measurements and observations, with discussion.
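The slit-diffraction concern described above can be estimated with the textbook single-slit Fraunhofer model: the diffracted beam from a barrier slit of width a spreads over a half-angle of roughly λ/a, so narrower slits (smaller screens with finer barriers) spread light into neighboring viewing zones. A short sketch, where the slit width and wavelength are assumed values and not parameters from the paper:

```python
import numpy as np

wavelength = 550e-9   # green light, m (assumed)
slit_width = 50e-6    # hypothetical barrier slit width, m

# Angular half-width to the first diffraction minimum: sin(theta) = lambda / a
theta_min = np.arcsin(wavelength / slit_width)  # radians

# Normalized far-field intensity I(theta) = sinc^2(a * sin(theta) / lambda),
# where np.sinc(x) = sin(pi x) / (pi x), so zeros fall at sin(theta) = m*lambda/a
theta = np.linspace(-3 * theta_min, 3 * theta_min, 601)
intensity = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2
```

When theta_min becomes comparable to the angular separation between adjacent viewing zones, diffraction-induced crosstalk can no longer be ignored, which is the regime the paper's criterion identifies.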

  10. X-ray mosaic nanotomography of large microorganisms.

    PubMed

    Mokso, R; Quaroni, L; Marone, F; Irvine, S; Vila-Comamala, J; Blanke, A; Stampanoni, M

    2012-02-01

    Full-field X-ray microscopy is a valuable tool for 3D observation of biological systems. In the soft X-ray domain, organelles can be visualized in individual cells, while hard X-ray microscopes excel at imaging larger, complex biological tissue. The field of view of these instruments is typically 10^3 times the spatial resolution. We exploit the assets of hard X-ray sub-micrometer imaging and extend the standard approach by widening the effective field of view to match the size of the sample. We show that global tomography of biological systems exceeding the field of view several times over is feasible at the nanoscale with moderate radiation dose. We address the performance issues and limitations of the TOMCAT full-field microscope and, more generally, of Zernike phase contrast imaging. Two biologically relevant systems were investigated: the first is the largest known bacterium (Thiomargarita namibiensis); the second is a small myriapod species (Pauropoda sp.). Both examples illustrate the capacity of the unique, structured-condenser-based broad-band full-field microscope to access the 3D structural details of biological systems at the nanoscale while avoiding complicated sample preparation, or even keeping the sample environment close to its natural state. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  12. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The second product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  13. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  14. A Direction Finding Method with A 3-D Array Based on Aperture Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng

    2018-01-01

    Direction finding for electronic warfare applications should provide as wide a field of view as possible, but the maximum unambiguous field of view of conventional direction finding methods is a hemisphere: they cannot distinguish the direction of arrival of signals coming from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated and its fundamentals are presented. The relationship between the measurement outputs of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, so the 3-D power distribution of the point sources can be reconstructed by an inverse 3-D Fourier transform. To display the 3-D power distribution conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and one for the lower hemisphere. A numerical simulation was designed and conducted to demonstrate the feasibility of the method. The results show that the method can correctly estimate an arbitrary direction of arrival in 3-D space.
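The Fourier relationship described above can be sketched numerically: if the array measurements are modeled as the 3-D Fourier transform of the source power distribution, an inverse 3-D FFT recovers the sources. The grid size and source positions below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Forward model: measurements (visibilities) are the 3-D Fourier transform
# of the point-source power distribution on an N x N x N grid.
N = 16
power = np.zeros((N, N, N))
power[3, 7, 12] = 1.0   # one point source
power[10, 2, 5] = 0.5   # a weaker second source

visibilities = np.fft.fftn(power)

# Reconstruction: an inverse 3-D Fourier transform recovers the distribution.
recovered = np.fft.ifftn(visibilities).real

# The brightest recovered voxel coincides with the strongest true source.
peak = np.unravel_index(np.argmax(recovered), recovered.shape)
```

Splitting the recovered spherical distribution into upper- and lower-hemisphere images, as the paper does for display, is then purely a matter of indexing.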

  15. Dynamic electronic collimation method for 3-D catheter tracking on a scanning-beam digital x-ray system

    PubMed Central

    Dunkerley, David A. P.; Slagowski, Jordan M.; Funk, Tobias; Speidel, Michael A.

    2017-01-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3-D catheter tracking. This work proposes a method of dose-reduced 3-D catheter tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. This is achieved through the selective deactivation of focal spot positions not needed for the catheter tracking task. The technique was retrospectively evaluated with SBDX detector data recorded during a phantom study. DEC imaging of a catheter tip at isocenter required 340 active focal spots per frame versus 4473 spots in full field-of-view (FOV) mode. The dose-area product (DAP) and peak skin dose (PSD) for DEC versus full FOV scanning were calculated using an SBDX Monte Carlo simulation code. The average DAP was reduced to 7.8% of the full FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full FOV value. The root-mean-squared-deviation between DEC-based 3-D tracking coordinates and full FOV 3-D tracking coordinates was less than 0.1 mm. The 3-D distance between the tracked tip and the sheath centerline averaged 0.75 mm. DEC is a feasible method for dose reduction during SBDX 3-D catheter tracking. PMID:28439521
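The dose scaling reported above tracks the fraction of focal spots that remain active, since each deactivated spot contributes no exposure. A quick check of the numbers quoted in the abstract:

```python
# Numbers quoted in the abstract: DEC activates 340 of the 4473 focal
# spots used in full field-of-view mode.
active_spots = 340
full_fov_spots = 4473

# Expected first-order dose-area-product scaling: proportional to the
# fraction of active spots (the abstract reports 7.6%, with a measured
# average DAP of 7.8% of the full-FOV value).
spot_fraction = active_spots / full_fov_spots
```

The small gap between the 7.6% spot fraction and the 7.8% measured DAP reflects effects the simple proportionality ignores, which is why the paper uses a Monte Carlo simulation for the actual dose estimate.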

  16. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision is now widely known as a familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focus on ray reproduction. This method needs many viewpoint images to achieve full parallax, because a different viewpoint image is displayed depending on the viewpoint. We propose to reduce wasted rays by limiting the projector's rays to the region around the viewer using a spinning mirror, increasing the effectiveness of the display device to achieve a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array whose elements have different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the locus of horizontal ray movement, as well as viewpoint switching and the convergence performance of the rays in the vertical direction. We therefore confirmed that full parallax can be realized.

  17. Ultrafast holographic technique for 3D in situ documentation of cultural heritage

    NASA Astrophysics Data System (ADS)

    Frey, Susanne; Bongartz, Jens; Giel, Dominik M.; Thelen, Andrea; Hering, Peter

    2003-10-01

    A novel 3D reconstruction method developed for medical applications has been applied to the examination and documentation of a 2000-year-old bog body. An ultra-fast pulsed holographic camera was modified to allow imaging of the bog body from different views. Full-scale daylight copies of the master holograms give a detailed, impressive three-dimensional view of the mummy and can be exhibited instead of the object. In combination with a rapid prototyping model (built by the Rapid Prototyping group of the Stiftung caesar, Bonn, Germany) derived from computed tomography (CT) data, our results are an ideal basis for a future facial reconstruction.

  18. A hybrid 2D/3D inspection concept with smart routing optimisation for high throughput, high dynamic range and traceable critical dimension metrology

    NASA Astrophysics Data System (ADS)

    Jones, Christopher W.; O’Connor, Daniel

    2018-07-01

    Dimensional surface metrology is required to enable advanced manufacturing process control for products such as large-area electronics, microfluidic structures, and light management films, where performance is determined by micrometre-scale geometry or roughness formed over metre-scale substrates. While able to perform 100% inspection at a low cost, commonly used 2D machine vision systems are insufficient to assess all of the functionally relevant critical dimensions in such 3D products on their own. While current high-resolution 3D metrology systems are able to assess these critical dimensions, they have a relatively small field of view and are thus much too slow to keep up with full production speeds. A hybrid 2D/3D inspection concept is demonstrated, combining a small field of view, high-performance 3D topography-measuring instrument with a large field of view, high-throughput 2D machine vision system. In this concept, the location of critical dimensions and defects are first registered using the 2D system, then smart routing algorithms and high dynamic range (HDR) measurement strategies are used to efficiently acquire local topography using the 3D sensor. A motion control platform with a traceable position referencing system is used to recreate various sheet-to-sheet and roll-to-roll inline metrology scenarios. We present the artefacts and procedures used to calibrate this hybrid sensor system for traceable dimensional measurement, as well as exemplar measurement of optically challenging industrial test structures.

  19. Physical modeling of 3D and 4D laser imaging

    NASA Astrophysics Data System (ADS)

    Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard

    2010-04-01

    Laser imaging offers potential for observation, 3D terrain-mapping and classification, and target identification, including behind vegetation, camouflage or glass windows, at day and night, and under all-weather conditions. First-generation systems deliver 3D point clouds. Their threshold detection is strongly affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the measured distances, and by partial occultation, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and make the point cloud better match reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D and 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.

  20. Evaluation of viewing experiences induced by a curved three-dimensional display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-10-01

    Despite an increased need for three-dimensional (3-D) functionality in curved displays, comparisons pertinent to human factors between curved and flat panel 3-D displays have rarely been tested. This study compared stereoscopic 3-D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures. Twenty-four participants took part in the experiments and viewed 3-D content with two different displays (flat and curved 3-D display) within a counterbalanced and within-subject design. For the 30-min viewing condition, a paired t-test showed significantly reduced P300 amplitudes, which were caused by engagement rather than cognitive fatigue, in the curved 3-D viewing condition compared to the flat 3-D viewing condition at P3 and P4. No significant differences in P300 amplitudes were observed for 60-min viewing. Subjective ratings of realness and engagement were also significantly higher in the curved 3-D viewing condition than in the flat 3-D viewing condition for 30-min viewing. Our findings support that curved 3-D displays can be effective for enhancing engagement among viewers based on specific viewing times and environments.

  1. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    NASA Astrophysics Data System (ADS)

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-03-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.
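The per-frame ROI update described above can be sketched as a constant-velocity extrapolation. The abstract states only that a motion vector computed from the two most recent 3D tracking results drives the update, so the specific rule below is an assumption:

```python
import numpy as np

def next_roi_center(p_prev, p_curr):
    """Predict the next ROI center from the two most recent 3-D tracking
    results (assumed constant-velocity extrapolation, in mm)."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_curr = np.asarray(p_curr, dtype=float)
    motion = p_curr - p_prev   # per-frame motion vector
    return p_curr + motion     # continue along the observed motion

# catheter tip moved from the origin to (1.0, 0.5, 0.0) mm in one frame
center = next_roi_center([0.0, 0.0, 0.0], [1.0, 0.5, 0.0])
```

The active focal-spot region for the next frame would then be centered on this predicted position, keeping the tracked tip inside the reduced x-ray field.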

  2. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and increased demand for smart phones, there has been significant growth in mobile TV markets. This rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Although mobile 3D technology is driving the current market growth, one important consideration remains for consistent development and growth of the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing an optimized viewing environment from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. Recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress toward human-friendly mobile 3D viewing.

  3. 3-D movies using microprocessor-controlled optoelectronic spectacles

    NASA Astrophysics Data System (ADS)

    Jacobs, Ken; Karpf, Ron

    2012-02-01

    Despite rapid advances in technology, 3-D movies are impractical for general movie viewing. A new approach that opens all content to casual 3-D viewing is needed. 3Deeps, advanced microprocessor-controlled optoelectronic spectacles, provides such a new approach to 3-D. 3Deeps works on a different principle than other methods for 3-D. 3-D movies typically use the asymmetry of dual images to produce stereopsis, necessitating costly dual-image content, complex formatting and transmission standards, and viewing via a corresponding selection device. In contrast, all that 3Deeps requires to view movies in realistic depth is an illumination asymmetry: a controlled difference in optical density between the lenses. When a 2-D movie is projected for viewing, 3Deeps converts every scene containing lateral motion into realistic 3-D. Put on 3Deeps spectacles for 3-D viewing, or remove them for viewing in 2-D. 3Deeps works for all analogue and digital 2-D content, by any mode of transmission, and for projection screens and digital or analogue monitors. An example using aerial photography is presented: a movie consisting of successive monoscopic aerial photographs appears in realistic 3-D when viewed through 3Deeps spectacles.

  4. VP-Nets : Efficient automatic localization of key brain structures in 3D fetal neurosonography.

    PubMed

    Huang, Ruobing; Xie, Weidi; Alison Noble, J

    2018-04-23

    Three-dimensional (3D) fetal neurosonography is used clinically to detect cerebral abnormalities and to assess growth in the developing brain. However, manual identification of key brain structures in 3D ultrasound images requires expertise and even then is tedious. Inspired by how sonographers view and interact with volumes during real-time clinical scanning, we propose an efficient automatic method to simultaneously localize multiple brain structures in 3D fetal neurosonography. The proposed View-based Projection Networks (VP-Nets) use three view-based Convolutional Neural Networks (CNNs) to simplify 3D localization by directly predicting 2D projections of the key structures onto three anatomical views. While designed for efficient use of data and GPU memory, the proposed VP-Nets allow for full-resolution 3D prediction. We investigated parameters that influence the performance of VP-Nets, e.g. depth and number of feature channels. Moreover, we demonstrate that the model can pinpoint the structures in 3D space by visualizing the trained VP-Nets, despite only 2D supervision being provided for a single stream during training. For comparison, we implemented two baseline solutions based on Random Forests and 3D U-Nets. In the reported experiments, VP-Nets consistently outperformed the other methods on localization. To test the importance of the loss function, two identical models were trained with binary cross-entropy and Dice coefficient loss, respectively. Our best VP-Net model achieved a prediction center deviation of 1.8 ± 1.4 mm, size difference of 1.9 ± 1.5 mm, and 3D Intersection Over Union (IOU) of 63.2 ± 14.7% when compared to the ground truth. To make the whole pipeline intervention-free, we also implemented a skull-stripping tool using a 3D CNN, which achieves high segmentation accuracy. As a result, the proposed processing pipeline takes a raw ultrasound brain image as input and outputs a skull-stripped image with five detected key brain structures.
Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Full Disk Image of the Sun, March 26, 2007 Anaglyph

    NASA Image and Video Library

    2007-04-27

    NASA's Solar TErrestrial RElations Observatory (STEREO) satellites have provided the first three-dimensional images of the Sun. The structure of the corona shows well in this image. 3D glasses are necessary to view this image.

  6. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting, followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After accurate parallax estimation, the extracted foreground is placed in front of the background image captured at the initial position. The constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. Subjective evaluation results demonstrate competent depth perception quality with the proposed system.

  7. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on our previous prototype of a real-time 3D holographic display developed last year, we developed a new concept for an auto-stereoscopic multiview display: a 64-view, wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (Acousto-Optic Deflector) driven by a piezo-electric transducer generating a variable standing acoustic wave on the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected to a different angle of view. A holographic screen at the proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the right dimension. VHDL firmware to render 64 views (16-bit 4:2:2) of a CAD model (obj, dxf, or 3ds) and depth-map-encoded video images in real time (16 ms) was developed on the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  8. Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.

    PubMed

    Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa

    2017-09-01

    Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. 
This represents a step towards a noninvasive tool to efficiently localize arrhythmias in 3-D. © 2017 American Association of Physicists in Medicine.
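    The per-slice circumferential interpolation described above can be sketched as a periodic linear interpolation; the angles and activation timings below are hypothetical, not data from the study.

```python
import numpy as np

def interp_circumference(theta_samples, values, n_out=360):
    """Periodic linear interpolation of wall values (radial position or
    activation timing) around the circumference of one short-axis slice."""
    order = np.argsort(theta_samples)
    th = np.asarray(theta_samples, dtype=float)[order]
    v = np.asarray(values, dtype=float)[order]
    # replicate one period on each side so the interpolation wraps around
    th_pad = np.concatenate([th - 2 * np.pi, th, th + 2 * np.pi])
    v_pad = np.concatenate([v, v, v])
    theta_out = np.linspace(0, 2 * np.pi, n_out, endpoint=False)
    return theta_out, np.interp(theta_out, th_pad, v_pad)

# four echo views give e.g. eight angular wall samples (hypothetical values, ms)
angles = np.deg2rad([0, 45, 90, 135, 180, 225, 270, 315])
timings = [120, 100, 80, 60, 50, 70, 90, 110]
theta, t = interp_circumference(angles, timings)
print(t[0], t[180])   # 120.0 50.0
```

Repeating this for each slice from apex to base yields the full 3-D activation map.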

  9. Synthesis multi-projector content for multi-projector three dimension display using a layered representation

    NASA Astrophysics Data System (ADS)

    Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua

    2014-11-01

    Multi-projector three-dimension (3D) display is a promising multi-view, glasses-free 3D display technology that can produce full-color, high-definition 3D images on its screen. One key problem for multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyzes the display characteristics of multi-projector 3D display and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of saving storage. Experimental results show that our method solves the pseudoscopic problem.
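    The record does not detail the tetrahedral transform, but the general idea of synthesizing a projector view from a stereo pair plus disparity map can be sketched as a simple forward warp (an illustrative stand-in, not the paper's method):

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp the left image by a fraction `alpha` of its disparity
    to synthesize one in-between projector view (alpha in [0, 1]).
    Nearest-pixel splatting; unfilled holes keep the value 0."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        x_new = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, x_new] = left[y, xs]
    return out
```

Sweeping `alpha` across the projector array would give one synthesized image per projector; a real pipeline would additionally fill holes and blend the right view.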

  10. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    NASA Astrophysics Data System (ADS)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision in minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection, interlacing the two images, gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
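    The interlacing step for a passive polarized display can be sketched generically, assuming a rectified image pair:

```python
import numpy as np

def interlace(left, right):
    """Row-interlace a rectified stereo pair for a passive polarized
    display: even rows carry the left view, odd rows the right."""
    if left.shape != right.shape:
        raise ValueError("views must have identical shapes")
    out = left.copy()
    out[1::2] = right[1::2]
    return out
```

The display's film polarizes alternate rows oppositely, so each eye of the glasses sees only its own view.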

  11. Full-parallax 3D display from stereo-hybrid 3D camera system

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach to the production of microimages ready for display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system for picking up a 3D data pair and composing a denser point cloud. An intrinsic difficulty is that the hybrid sensors have dissimilarities and therefore must be equalized. The processed data facilitate generating an integral image after computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and viewing angle.
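    Projecting a point cloud computationally through a virtual pinhole array can be sketched as below; the geometry and parameters (`pitch`, `gap`, resolutions) are simplified assumptions, not the authors' values:

```python
import numpy as np

def microimages(points, colors, n_lens, mi_res, pitch, gap):
    """Project a 3D point cloud through an n_lens x n_lens virtual pinhole
    array onto a sensor plane at distance `gap`, building the microimage
    tile for an integral-imaging monitor. Points must have z > 0."""
    img = np.zeros((n_lens * mi_res, n_lens * mi_res, colors.shape[1]))
    for i in range(n_lens):
        for j in range(n_lens):
            cx = (i - (n_lens - 1) / 2) * pitch   # pinhole center, array plane
            cy = (j - (n_lens - 1) / 2) * pitch
            # perspective projection of every point through this pinhole
            u = (points[:, 0] - cx) * gap / points[:, 2]
            v = (points[:, 1] - cy) * gap / points[:, 2]
            px = np.round(u / pitch * mi_res + mi_res / 2).astype(int)
            py = np.round(v / pitch * mi_res + mi_res / 2).astype(int)
            ok = (px >= 0) & (px < mi_res) & (py >= 0) & (py < mi_res)
            img[i * mi_res + px[ok], j * mi_res + py[ok]] = colors[ok]
    return img
```

Each microimage thus records the scene as seen through one pinhole; a real implementation would also handle occlusion by depth ordering.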

  12. Evaluating mental workload of two-dimensional and three-dimensional visualization for anatomical structure localization.

    PubMed

    Foo, Jung-Leng; Martinez-Escobar, Marisol; Juhnke, Bethany; Cassidy, Keely; Hisley, Kenneth; Lobe, Thom; Winer, Eliot

    2013-01-01

    Visualization of medical data in three-dimensional (3D) or two-dimensional (2D) views is a complex area of research. In many fields 3D views are used to understand the shape of an object, and 2D views are used to understand spatial relationships. It is unclear how 2D/3D views play a role in the medical field. Using 3D views can potentially decrease the learning curve experienced with traditional 2D views by providing a whole representation of the patient's anatomy. However, there are challenges with 3D views compared with 2D. This current study expands on a previous study to evaluate the mental workload associated with both 2D and 3D views. Twenty-five first-year medical students were asked to localize three anatomical structures--gallbladder, celiac trunk, and superior mesenteric artery--in either 2D or 3D environments. Accuracy and time were taken as the objective measures for mental workload. The NASA Task Load Index (NASA-TLX) was used as a subjective measure for mental workload. Results showed that participants viewing in 3D had higher localization accuracy and a lower subjective measure of mental workload, specifically, the mental demand component of the NASA-TLX. Results from this study may prove useful for designing curricula in anatomy education and improving training procedures for surgeons.

  13. A beam-splitter-type 3-D endoscope for front view and front-diagonal view images.

    PubMed

    Kamiuchi, Hiroki; Masamune, Ken; Kuwana, Kenta; Dohi, Takeyoshi; Kim, Keri; Yamashita, Hiromasa; Chiba, Toshio

    2013-01-01

    In endoscopic surgery, surgeons must manipulate an endoscope inside the body cavity to observe a large field of view while estimating the distance between surgical instruments and the affected area by reference to the size or motion of the surgical instruments in 2-D endoscopic images on a monitor. Therefore, there is a risk of the endoscope or surgical instruments physically damaging body tissues. To overcome this problem, we developed a Ø7-mm 3-D endoscope that can switch between providing front and front-diagonal view 3-D images by simply rotating its sleeves. This 3-D endoscope consists of a conventional 3-D endoscope and outer and inner sleeves with a beam splitter and polarization plates. The beam splitter was used for visualizing both the front and front-diagonal views and was set at 25° to the outer sleeve's distal end in order to eliminate a blind spot common to both views. Polarization plates were used to avoid overlap of the two views. We measured the signal-to-noise ratio (SNR), sharpness, chromatic aberration (CA), and viewing angle of this 3-D endoscope and evaluated its feasibility in vivo. Compared to the conventional 3-D endoscope, the SNR and sharpness of this 3-D endoscope decreased by 20% and 7%, respectively. No significant difference was found in CA. The viewing angle for both the front and front-diagonal views was about 50°. In the in vivo experiment, this 3-D endoscope provided clear 3-D images of both views by simply rotating its inner sleeve. The developed 3-D endoscope can provide the front and front-diagonal views by simply rotating the inner sleeve; therefore, the risk of damage to fragile body tissues can be significantly decreased.

  14. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  15. Imaging of particles with 3D full parallax mode with two-color digital off-axis holography

    NASA Astrophysics Data System (ADS)

    Kara-Mohammed, Soumaya; Bouamama, Larbi; Picart, Pascal

    2018-05-01

    This paper proposes an approach based on two orthogonal views and two wavelengths for recording off-axis two-color holograms. The approach makes it possible to discriminate particles aligned along the sight-view axis. The experimental set-up is based on a double Mach-Zehnder architecture in which two different wavelengths provide the reference and object beams. The digital processing to get images of the particles is based on convolution, so as to obtain images with no wavelength dependence. The spatial bandwidth of the angular spectrum transfer function is adapted in order to increase the maximum reconstruction distance, which is generally limited to a few tens of millimeters. To locate the particles in the 3D volume, a calibration process based on the modulation theorem is proposed, which perfectly superimposes the two views in a common XYZ frame. The experimental set-up is applied to two-color hologram recording of moving non-calibrated opaque particles with an average diameter of about 150 μm. After processing the two-color holograms with image reconstruction and view calibration, the locations of particles in the 3D volume can be obtained. In particular, the ambiguity about close particles, which produce hidden particles in a single-view scheme, can be removed to determine the exact number of particles in the region of interest.
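    The angular-spectrum method that the bandwidth adaptation builds on can be sketched in its textbook form (without the adapted bandwidth described above):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular-spectrum
    method: multiply its spectrum by the free-space transfer function
    exp(i*2*pi/lambda * z * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2)).
    Evanescent components (negative argument) are zeroed."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi / wavelength * z
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Reconstructing a hologram amounts to applying this propagation at candidate depths; the paper's contribution is adapting the spectral bandwidth so larger `z` remains usable.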

  16. Evaluation of usefulness of 3D views for clinical photography.

    PubMed

    Jinnin, Masatoshi; Fukushima, Satoshi; Masuguchi, Shinichi; Tanaka, Hiroki; Kawashita, Yoshio; Ishihara, Tsuyoshi; Ihn, Hironobu

    2011-01-01

    This is the first report investigating the usefulness of a 3D viewing technique (parallel viewing and cross-eyed viewing) for presenting clinical photography. Using the technique, we can grasp the 3D structure of various lesions (e.g. tumors, wounds) or surgical procedures (e.g. lymph node dissection, flaps) much more easily than with 2D photos, without any cost or optical aids. Recently, 3D cameras have become commercially available, but they may not be useful for presentation in scientific papers or poster sessions. To create a stereogram, two different pictures were taken from the right- and left-eye viewpoints using a digital camera. The two pictures were then placed next to one another. Using 9 stereograms, we performed a questionnaire-based survey. Our survey revealed that 57.7% of the doctors/students had acquired the 3D viewing technique and an additional 15.4% could learn parallel viewing with 10 minutes of training. Among the subjects capable of 3D viewing, 73.7% used the parallel-view technique whereas only 26.3% chose the cross-eyed view. There was no significant difference between parallel-view and cross-eyed users in the questionnaire results on the efficiency and usefulness of 3D views. Almost all subjects (94.7%) answered that the technique is useful. Lesions with multiple undulations are a good application. 3D viewing, especially parallel viewing, appears common and easy enough for doctors/students to use in practice. Wide use of the technique may revolutionize the presentation of clinical pictures in meetings, educational lectures, and manuscripts.
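    Assembling the stereogram, two eye views placed next to one another, is mechanically simple; a generic sketch (the `gap` width is an arbitrary choice):

```python
import numpy as np

def make_stereogram(left, right, gap=16, cross_eyed=False):
    """Place two eye views side by side with a blank gap between them.
    Parallel viewing: left-eye image on the left. Cross-eyed viewing
    swaps the pair."""
    if cross_eyed:
        left, right = right, left
    h = left.shape[0]
    pad = np.zeros((h, gap) + left.shape[2:], dtype=left.dtype)
    return np.concatenate([left, pad, right], axis=1)
```

For parallel viewing each eye relaxes onto its own image; for cross-eyed viewing the swapped order compensates for the crossed lines of sight.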

  17. How does c-view image quality compare with conventional 2D FFDM?

    PubMed

    Nelson, Jeffrey S; Wells, Jered R; Baker, Jay A; Samei, Ehsan

    2016-05-01

    The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM), with the constraint that all DBT acquisitions must be paired with a 2D image to assure that adequate interpretative information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure of the FFDM acquisition. While this much-needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D c-view and 2D FFDM images in terms of resolution, contrast, and noise. Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom included both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom included visual assessment of resolution and Fourier analysis of the noise. Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than c-view according to both the average observer and automated scores. In addition, between 50% and 70% of c-view images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that c-view provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high-contrast objects and all low-contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the c-view image (11 lp/mm FFDM, 5 lp/mm c-view) and a loss in detection of small microcalcification objects.
Spectral analysis of the anthropomorphic phantom showed a higher total noise magnitude in the FFDM image compared with c-view. Whereas the FFDM image contained approximately white noise texture, the c-view image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. This analysis demonstrates many instances where c-view image quality differs from FFDM. Compared to FFDM, c-view offers better depiction of objects of certain sizes and contrasts, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of c-view images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + c-view performs relative to DBT + FFDM or FFDM alone.
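    A radially averaged noise power spectrum, the usual tool behind this kind of spectral comparison, can be sketched as follows (a generic estimator, not the authors' exact pipeline):

```python
import numpy as np

def radial_nps(noise_roi, pixel_mm):
    """Radially averaged noise power spectrum of a square noise ROI --
    a standard way to compare the noise texture of FFDM and synthesized
    2D images. Returns (frequency in cycles/mm, NPS)."""
    n = noise_roi.shape[0]
    roi = noise_roi - noise_roi.mean()
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2 \
        * pixel_mm ** 2 / (n * n)
    fy, fx = np.indices((n, n)) - n // 2
    r = np.hypot(fx, fy).astype(int).ravel()   # integer radial bins
    counts = np.bincount(r)
    sums = np.bincount(r, weights=nps2d.ravel())
    radial = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    freqs = np.arange(radial.size) / (n * pixel_mm)
    return freqs, radial
```

A flat curve indicates white noise (as reported for FFDM); a curve suppressed at mid and high frequencies but not at low frequencies matches the mottled c-view texture described above.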

  18. Effect of Illumination on Ocular Status Modifications Induced by Short-Term 3D TV Viewing

    PubMed Central

    Chen, Yuanyuan; Xu, Aiqin; Jiang, Jian

    2017-01-01

    Objectives. This study aimed to compare changes in ocular status after 3D TV viewing under three modes of illumination and thereby identify optimal illumination for 3D TV viewing. Methods. The following measures of ocular status were assessed: the accommodative response, accommodative microfluctuation, accommodative facility, relative accommodation, gradient accommodative convergence/accommodation (AC/A) ratio, phoria, and fusional vergence. The observers watched 3D television for 90 minutes through 3D shutter glasses under three illumination modes: A, complete darkness; B, back illumination (50 lx); and C, front illumination (130 lx). The ocular status of the observers was assessed both before and after the viewing. Results. After 3D TV viewing, the accommodative response and accommodative microfluctuation were significantly changed under illumination Modes A and B. The near positive fusional vergence decreased significantly after the 90-minute 3D viewing session under each illumination mode, and this effect was not significantly different among the three modes. Conclusions. Short-term 3D viewing modified the ocular status of adults. The least amount of such change occurred with front illumination, suggesting that this type of illumination is an appropriate mode for 3D shutter TV viewing. PMID:28348893

  19. Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation [Invited].

    PubMed

    Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu

    2018-01-01

    We study optical technologies for viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and an extended viewing range. In particular, we utilize a technique, the so-called dynamic fusion of viewing zone (DFVZ), for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of the viewing zones provided by the VTA3D system adopting DFVZ, revealing that the optimal viewing zone can be formed at the viewer's position. Last, we show that the comfort zone is extended by DFVZ. This is demonstrated by viewers' subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.

  20. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-04-14

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization than other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were evaluated in a user study with test subjects. The results of the study revealed high user preference for freehand interaction with the light field display as well as the relatively low cognitive demand of this technique. Our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.

  1. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects have highly discriminative properties and a multi-view representation. State-of-the-art methods depend heavily on a particular camera array setting for capturing views of the 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used twofold: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
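    Scoring a sequence of view-cluster labels under a query HMM, the decoding step, can be sketched with the standard scaled forward algorithm (plain NumPy, not the authors' implementation):

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-emission
    HMM. obs: sequence of view-cluster labels; pi: initial state
    probabilities (S,); A: transition matrix (S, S); B: emission matrix
    (S, K) over K view-cluster symbols."""
    alpha = pi * B[:, obs[0]]
    log_l = np.log(alpha.sum())
    alpha = alpha / alpha.sum()       # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_l += np.log(c)
        alpha = alpha / c
    return log_l
```

Retrieval would then rank database objects by the log-likelihood of their view-label sequences under the query model.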

  2. Comparison of learning curves and skill transfer between classical and robotic laparoscopy according to the viewing conditions: implications for training.

    PubMed

    Blavier, Adélaïde; Gaudissart, Quentin; Cadière, Guy-Bernard; Nyssen, Anne-Sophie

    2007-07-01

    The purpose of this study was to evaluate the perceptual (2-dimensional [2D] vs. 3-dimensional [3D] view) and instrumental (classical vs. robotic) impacts of new robotic system on learning curves. Forty medical students without any surgical experience were randomized into 4 groups (classical laparoscopy with 3D-direct view or with 2D-indirect view, robotic system in 3D or in 2D) and repeated a laparoscopic task 6 times. After these 6 repetitions, they performed 2 trials with the same technique but in the other viewing condition (perceptive switch). Finally, subjects performed the last 3 trials with the technique they never used (technical switch). Subjects evaluated their performance answering a questionnaire (impressions of mastery, familiarity, satisfaction, self-confidence, and difficulty). Our study showed better performance and improvement in 3D view than in 2D view whatever the instrumental aspect. Participants reported less mastery, familiarity, and self-confidence and more difficulty in classical laparoscopy with 2D-indirect view than in the other conditions. Robotic surgery improves surgical performance and learning, particularly by 3D view advantage. However, perceptive and technical switches emphasize the need to adapt and pursue training also with traditional technology to prevent risks in conversion procedure.

  3. A study to evaluate the reliability of using two-dimensional photographs, three-dimensional images, and stereoscopic projected three-dimensional images for patient assessment.

    PubMed

    Zhu, S; Yang, Y; Khambay, B

    2017-03-01

    Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P<0.05). For 75% of the raters, stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  4. Clinical evaluation of accommodation and ocular surface stability relevant to visual asthenopia with 3D displays

    PubMed Central

    2014-01-01

    Background To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but yielded no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The change of NPA, OPI, and total ocular symptom scores after 3D viewing had a significant correlation (p < 0.05), implying direct associations among these factors. Conclusions The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
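    The OPI described above is a simple ratio of two measured times; a minimal sketch (the function name is hypothetical):

```python
def ocular_protection_index(tbut_s, ibi_s):
    """Ocular Protection Index: tear breakup time (TBUT) divided by the
    interblink interval (IBI), both in seconds. OPI < 1 means the tear
    film breaks up before the next blink, leaving the surface exposed."""
    if ibi_s <= 0:
        raise ValueError("interblink interval must be positive")
    return tbut_s / ibi_s

print(ocular_protection_index(8.0, 4.0))   # 2.0
```

A drop in OPI after 3D viewing, as reported here, indicates reduced ocular surface stability.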

  5. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. 
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284

  6. A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.

    PubMed

    Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M

    2011-01-20

    A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. 
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
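    The coarse-to-fine registration idea in this record (coarse alignment via the 3D Hotelling transform, then robust nonlinear least-squares refinement) can be illustrated in a few lines. This is a sketch under our own assumptions: the function names, the rigid-transform parameterization, and the choice of a soft-L1 loss are ours; the paper's actual feature-based cost is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def hotelling_align(points):
    """Coarse stage: center a point cloud and rotate it onto its
    principal axes (the 3D Hotelling/PCA transform)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigenvectors of the covariance matrix give the principal axes.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return centered @ vecs, centroid, vecs

def refine(src, dst):
    """Fine stage: solve for a residual rigid transform with a robust
    (soft-L1) nonlinear least-squares fit, standing in for the paper's
    feature-based refinement. Returns (rx, ry, rz, tx, ty, tz)."""
    def residuals(p):
        rx, ry, rz, tx, ty, tz = p
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        # Rotation composed as Rz @ Ry @ Rx.
        R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
             @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
             @ np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
        return ((src @ R.T + [tx, ty, tz]) - dst).ravel()
    fit = least_squares(residuals, np.zeros(6), loss="soft_l1")
    return fit.x
```

    In practice each view would first be passed through `hotelling_align` so that `refine` only has to recover a small residual transform, which keeps the nonlinear solver well conditioned.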

  7. Three-dimensional ultrasound imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Fenster, Aaron; Downey, Donal B.

    1999-05-01

    Ultrasonography, a widely used imaging modality for the diagnosis and staging of many diseases, is an important, cost-effective technique; however, technical improvements are necessary to realize its full potential. Two-dimensional viewing of 3D anatomy, using conventional ultrasonography, limits our ability to quantify and visualize most diseases, causing, in part, the reported variability in diagnosis and in ultrasound-guided therapy and surgery. This occurs because conventional ultrasound images are 2D, yet the anatomy is 3D; hence the diagnostician must mentally integrate multiple images. This practice is inefficient and may lead to operator variability and incorrect diagnoses. In addition, the 2D ultrasound image represents a single thin plane at some arbitrary angle in the body. It is difficult to localize and reproduce that image plane later, making conventional ultrasonography unsatisfactory for follow-up studies and for monitoring therapy. Our efforts have focused on overcoming these deficiencies by developing 3D ultrasound imaging techniques that can acquire B-mode, color Doppler and power Doppler images. An inexpensive desktop computer is used to reconstruct the information in 3D and for interactive viewing of the 3D images. We have used 3D ultrasound images for the diagnosis of prostate cancer, carotid disease, breast cancer and liver disease and for applications in obstetrics and gynecology. In addition, we have used 3D ultrasonography for image-guided, minimally invasive therapeutic applications in the prostate such as cryotherapy and brachytherapy.

  8. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture.

    PubMed

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy.

  9. ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.

    PubMed

    Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas

    2018-06-24

    ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen, where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade VR systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image to quickly rotate and scale it to focus on key features, and 3) interact with other users in a shared virtual space, enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for themselves. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.

  10. Projection x-space magnetic particle imaging.

    PubMed

    Goodwill, Patrick W; Konkle, Justin J; Zheng, Bo; Saritas, Emine U; Conolly, Steven M

    2012-05-01

    Projection magnetic particle imaging (MPI) can improve imaging speed by over 100-fold compared with traditional 3-D MPI. In this work, we derive the 2-D x-space signal equation and 2-D image equation, and introduce the concepts of signal fading and resolution loss for a projection MPI imager. We then describe the design and construction of an x-space projection MPI scanner with a field gradient of 2.35 T/m across the magnet's 10 cm free bore. The system has an expected resolution of 3.5 × 8.0 mm using Resovist tracer, and an experimental resolution of 3.8 × 8.4 mm. The system images 2.5 cm × 5.0 cm partial fields of view (FOVs) at 10 frames/s, and acquires a full field of view of 10 cm × 5.0 cm in 4 s. We conclude by imaging a resolution phantom, a complex "Cal" phantom, and mice injected with Resovist tracer, and experimentally confirm the theoretically predicted x-space spatial resolution.
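    The speedup described here comes from collapsing one spatial dimension: a projection scanner integrates the tracer signal along the projection axis, so only a 2D field of view needs to be rastered instead of a full 3D volume. A toy numpy illustration of that idea (hypothetical block phantom, not an MPI physics simulation):

```python
import numpy as np

# Hypothetical 3D tracer distribution on a 32^3 grid (not real MPI data).
tracer = np.zeros((32, 32, 32))
tracer[10:20, 12:18, 8:24] = 1.0  # a rectangular "phantom"

# A projection MPI scanner effectively integrates the signal along the
# projection axis, so a single 2D raster covers the whole volume; the
# large speedup comes from never stepping the field-free region in z.
projection = tracer.sum(axis=2)
```

    The resulting `projection` is a 2D image whose pixel values are line integrals of the tracer concentration, analogous to a radiographic projection.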

  11. How does C-VIEW image quality compare with conventional 2D FFDM?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Jeffrey S., E-mail: nelson.jeffrey@duke.edu; Wells, Jered R.; Baker, Jay A.

    Purpose: The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full-field digital mammography (FFDM), with the constraint that all DBT acquisitions must be paired with a 2D image to assure that adequate interpretative information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data, with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much-needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D C-VIEW and 2D FFDM images in terms of resolution, contrast, and noise. Methods: Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D-printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom included both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom included visual assessment of resolution and Fourier analysis of the noise. Results: Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than C-VIEW according to both the average observer and automated scores. In addition, between 50% and 70% of C-VIEW images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that C-VIEW provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high-contrast objects and all low-contrast objects.
Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the C-VIEW image (11 lp/mm FFDM, 5 lp/mm C-VIEW) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with C-VIEW. Whereas the FFDM image contained an approximately white noise texture, the C-VIEW image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Conclusions: This analysis demonstrates many instances where C-VIEW image quality differs from FFDM. Compared to FFDM, C-VIEW offers better depiction of objects of certain sizes and contrasts, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of C-VIEW images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + C-VIEW performs relative to DBT + FFDM or FFDM alone.
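    The noise comparison in this record rests on a radially averaged noise power spectrum (NPS). A rough sketch of that Fourier analysis follows; it is illustrative only, since clinical NPS estimation averages many ROIs and normalizes by detector pixel size:

```python
import numpy as np

def noise_power_spectrum(image):
    """Radially averaged noise power spectrum of a zero-mean 2D noise
    image: the kind of analysis used to compare FFDM's near-white
    noise with C-VIEW's mid/high-frequency suppression."""
    image = image - image.mean()
    # 2D power spectrum, DC term shifted to the array center.
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2 / image.size
    h, w = image.shape
    y, x = np.indices(image.shape)
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Average the 2D spectrum over annuli of equal radial frequency.
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), weights=nps2d.ravel()) / counts
```

    For unit-variance white noise the radial profile is flat near 1.0; a C-VIEW-like image would instead show a dip at mid and high radial frequencies.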

  12. Viewing 3D TV over two months produces no discernible effects on balance, coordination or eyesight

    PubMed Central

    Read, Jenny C.A.; Godfrey, Alan; Bohr, Iwo; Simonotto, Jennifer; Galna, Brook; Smulders, Tom V.

    2016-01-01

    With the rise in stereoscopic 3D media, there has been concern that viewing stereoscopic 3D (S3D) content could have long-term adverse effects, but little data are available. In the first study to address this, 28 households who did not currently own a 3D TV were given a new TV set, either S3D or 2D. The 116 members of these households all underwent tests of balance, coordination and eyesight, both before they received their new TV set, and after they had owned it for 2 months. We did not detect any changes which appeared to be associated with viewing 3D TV. We conclude that viewing 3D TV does not produce detectable effects on balance, coordination or eyesight over the timescale studied. Practitioner Summary: Concern has been expressed over possible long-term effects of stereoscopic 3D (S3D). We looked for any changes in vision, balance and coordination associated with normal home S3D TV viewing in the 2 months after first acquiring a 3D TV. We find no evidence of any changes over this timescale. PMID:26758965

  13. 2D Echocardiographic Evaluation of Right Ventricular Function Correlates With 3D Volumetric Models in Cardiac Surgery Patients.

    PubMed

    Magunia, Harry; Schmid, Eckhard; Hilberath, Jan N; Häberle, Leo; Grasshoff, Christian; Schlensak, Christian; Rosenberger, Peter; Nowak-Machen, Martina

    2017-04-01

    The early diagnosis and treatment of right ventricular (RV) dysfunction are of critical importance in cardiac surgery patients and impact clinical outcome. Because of the complex RV geometry, two-dimensional (2D) transesophageal echocardiography (TEE) evaluates RV function using surrogate parameters. The aim of this study was to evaluate whether the commonly used visual evaluation of RV function and size using 2D TEE correlated with calculated three-dimensional (3D) volumetric models of RV function. Retrospective study, single center, University Hospital. Seventy complete datasets were studied, consisting of 2D 4-chamber view loops (2-3 beats) and the corresponding 4-chamber view 3D full-volume loop of the right ventricle. RV function and RV size on the 2D loops were then assessed retrospectively and purely qualitatively by 4 clinician echocardiographers certified in perioperative TEE, each working individually. Corresponding 3D volumetric models calculating RV ejection fraction and RV end-diastolic volumes were then established and compared with the 2D assessments. The 2D assessment of RV function correlated with the 3D volumetric calculations (Spearman's rho -0.5; p < 0.0001). No correlation could be established between 2D estimates of RV size and actual 3D volumetric end-diastolic volumes (Spearman's rho 0.15; p = 0.25). The 2D assessment of right ventricular function based on visual estimation, as frequently used in clinical practice, appeared to be a reliable method of RV functional evaluation. However, 2D assessment of RV size seemed unreliable and should be used with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
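    The agreement statistic used in this study, Spearman's rho, is simply the Pearson correlation computed on ranks, which makes it suitable for comparing ordinal visual grades with continuous volumetric measurements. A minimal sketch with made-up grades and ejection fractions (the numbers below are hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: ordinal 2D visual grades of RV function
# (0 = normal ... 3 = severely reduced) and 3D ejection fractions (%).
grades = np.array([0, 0, 1, 1, 2, 2, 3, 3, 1, 0])
ef_3d = np.array([55, 60, 48, 45, 38, 35, 25, 22, 50, 58])

# Spearman's rho captures the monotone relation (worse grade -> lower
# EF) without assuming linearity, handling tied grades via mid-ranks.
rho, p_value = spearmanr(grades, ef_3d)
```

    A strongly negative rho here mirrors the study's finding that visual 2D grades track 3D ejection fraction, while a rho near zero (as for RV size in the study) indicates no usable monotone agreement.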

  14. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the ‘nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% report adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  15. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity of wearing glasses, however, is often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best-performing proposals.
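    At its core, disparity-driven view synthesis resamples each scanline by a scaled disparity. The sketch below shows only that resampling step under our own simplifications (dense per-pixel disparity, linear interpolation); the actual IDW method instead solves for a smooth warp that enforces sparse target disparities while protecting salient image regions:

```python
import numpy as np

def warp_view(image, disparity, alpha):
    """Synthesize an intermediate view by shifting each pixel
    horizontally by a fraction `alpha` of its disparity, with linear
    resampling per scanline. Minimal analogue of the resampling core
    inside image-domain-warping; not the paper's full optimization."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    xs = np.arange(w)
    for y in range(h):
        # Sample the source row at positions displaced by the scaled disparity.
        out[y] = np.interp(xs - alpha * disparity[y], xs, image[y])
    return out
```

    Varying `alpha` between 0 and 1 sweeps the synthesized viewpoint between the two input views, which is how a stereo pair can feed the many views of an autostereoscopic display.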

  16. The Influence on Humans of Long Hours of Viewing 3D Movies

    NASA Astrophysics Data System (ADS)

    Kawamura, Yuta; Horie, Yusuke; Sano, Keisuke; Kodama, Hiroya; Tsunoda, Naoki; Shibuta, Yuki; Kawachi, Yuki; Yamada, Mitsuho

    Three-dimensional (3D) movies have become very popular in movie theaters and for home viewing. To date, there has been no report on the effects of the continual vergence eye movement that occurs when viewing a 3D movie from beginning to end. First, we analyzed the influence of viewing a 3D movie for several hours on vergence eye movement. At the same time, we investigated the influence of long viewing on the human body, using the Simulator Sickness Questionnaire (SSQ) and critical fusion frequency (CFF). The results suggested that the vergence stable time after a saccade was influenced by viewing time and depended on the content of the movie. Differences in the SSQ and CFF were also seen between the beginning and the end of a 3D movie.

  17. Tracking a head-mounted display in a room-sized environment with head-mounted cameras

    NASA Astrophysics Data System (ADS)

    Wang, Jih-Fang; Azuma, Ronald T.; Bishop, Gary; Chi, Vernon; Eyles, John; Fuchs, Henry

    1990-10-01

    This paper presents our efforts to accurately track a Head-Mounted Display (HMD) in a large environment. We review our current benchtop prototype (introduced in [WCF90]), then describe our plans for building the full-scale system. Both systems use an inside-out optical tracking scheme, where lateral-effect photodiodes mounted on the user's helmet view flashing infrared beacons placed in the environment. Church's method uses the measured 2D image positions and the known 3D beacon locations to recover the 3D position and orientation of the helmet in real time. We discuss the implementation and performance of the benchtop prototype. The full-scale system design includes ceiling panels that hold the infrared beacons and a new sensor arrangement of two photodiodes with holographic lenses. In the full-scale system, the user can walk almost anywhere under the grid of ceiling panels, making the working volume nearly as large as the room.

  18. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    NASA Astrophysics Data System (ADS)

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-02-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.

  19. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture

    PubMed Central

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L.

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy. PMID:29238513

  20. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  1. Perceptual video quality comparison of 3DTV broadcasting using multimode service systems

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Lee, Chulhee

    2015-05-01

    Multimode service (MMS) systems allow broadcasters to provide multichannel services using a single HD channel. Using these systems, it is possible to provide 3DTV programs that can be watched either in three-dimensional (3-D) or two-dimensional (2-D) mode, with backward compatibility. In the MMS system for 3DTV broadcasting using the Advanced Television Systems Committee standards, the left and right views are encoded using MPEG-2 and H.264, respectively, and then transmitted using a dual HD streaming format. The left view, encoded using MPEG-2, assures 2-D backward compatibility, while the right view, encoded using H.264, can be optionally combined with the left view to generate stereoscopic 3-D views. We analyze 2-D and 3-D perceptual quality when using the MMS system by comparing it with the frame-compatible (top-bottom) format, a conventional transmission scheme for 3-D broadcasting. We performed perceptual 2-D and 3-D video quality evaluations assuming 3DTV programs are encoded using the MMS system and the top-bottom format. The results show that MMS systems can be preferable with regard to perceptual 2-D and 3-D quality and backward compatibility.

  2. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, our results revealed some limitations of the proposed setup and adjustments to be addressed in future work. PMID:25875189

  3. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. Symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power no longer corresponds to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, such knowledge is useful for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of the relevant fields in science and technology.

  4. Balance and coordination after viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C. A.; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V.

    2015-01-01

    Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4–82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination. PMID:26587261

  5. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
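    The stereopsis geometry described in this record can be made quantitative: the binocular disparity between two depths is the difference of the convergence angles subtended by the roughly 6.5 cm interocular separation. A small sketch of that standard textbook relation (our own formula, not from the paper):

```python
import numpy as np

def angular_disparity_deg(depth_near_m, depth_far_m, iod_m=0.065):
    """Binocular angular disparity (degrees) between two points at
    different depths: the convergence angle on the near point minus
    that on the far point, each angle being 2*atan(iod / (2*depth))."""
    ang = lambda d: 2 * np.arctan(iod_m / (2 * d))
    return np.degrees(ang(depth_near_m) - ang(depth_far_m))
```

    For example, a point 0.5 m away seen against a background at 1 m subtends roughly 3.7 degrees of disparity, which is why nearby scenes feel strongly three-dimensional, while distant terrain (the case for most maps and globes) yields disparities too small to perceive without exaggeration.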

  6. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes.

    PubMed

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-10-22

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  7. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  8. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well without a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding efficiency of 3D-AVC in the multiview compatible mode, yielding on average about 20% BD-rate saving in the coded views and 26% BD-rate saving in the synthesized views.
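    The texture-only derivation described above can be caricatured as a priority scan over neighboring blocks, in the spirit of neighboring-block disparity vector (NBDV) schemes: reuse the first inter-view motion vector found among the neighbors, else fall back to a default. The data layout and scan order below are illustrative, not the normative 3D-AVC design:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BlockMV:
    """Simplified motion info of a coded neighboring block."""
    vector: Tuple[int, int]
    inter_view: bool  # True if the vector points into another view

def derive_disparity_vector(neighbors, default=(0, 0)):
    """Scan neighbors in a fixed priority order (e.g. left, above,
    above-right, then temporal candidates) and reuse the first
    inter-view motion vector as the macroblock's disparity vector.
    Unavailable neighbors are passed as None."""
    for nb in neighbors:
        if nb is not None and nb.inter_view:
            return nb.vector
    return default
```

    The key property is that no depth map is consulted: the disparity comes entirely from motion information already decoded for the texture views, which is what preserves multiview compatibility.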

  9. Full-view 3D imaging system for functional and anatomical screening of the breast

    NASA Astrophysics Data System (ADS)

    Oraevsky, Alexander; Su, Richard; Nguyen, Ha; Moore, James; Lou, Yang; Bhadra, Sayantan; Forte, Luca; Anastasio, Mark; Yang, Wei

    2018-04-01

    The Laser Optoacoustic Ultrasonic Imaging System Assembly (LOUISA-3D) was developed in response to the demand of diagnostic radiologists for an advanced breast screening system that improves on the low sensitivity of x-ray based modalities (mammography and tomosynthesis) in the dense and heterogeneous breast and on the low specificity of magnetic resonance imaging. It is our working hypothesis that co-registration of quantitatively accurate functional images of the breast vasculature and microvasculature with anatomical images of breast morphological structures will provide a clinically viable solution for breast cancer care. Functional imaging in LOUISA-3D is enabled by full-view 3D optoacoustic images acquired at two rapidly toggling laser wavelengths in the near-infrared spectral range. 3D images of the breast anatomical background are enabled in LOUISA-3D by a sequence of B-mode ultrasound slices acquired with a transducer array rotating around the breast. This creates the possibility to visualize distributions of total hemoglobin and blood oxygen saturation within specific morphological structures, such as tumor angiogenesis microvasculature and larger vasculature in proximity to the tumor. The system has four major components: (i) a pulsed dual-wavelength laser with a fiberoptic light delivery system, (ii) an imaging module with two arc-shaped probes (optoacoustic and ultrasonic) placed in a transparent bowl that rotates around the breast, (iii) a multichannel electronic system with analog preamplifiers and digital data acquisition boards, and (iv) a computer for system control, data processing and image reconstruction. The most important advancement of this latest system design compared with previously reported systems is the full breast illumination accomplished at each rotational step of the optoacoustic transducer array, using a fiberoptic illuminator that rotates around the breast independently of the detector probe.
We report here pilot case studies on one healthy volunteer and on a patient with a suspicious small lesion in the breast. LOUISA-3D visualized deoxygenated veins and oxygenated arteries of a healthy volunteer, indicative of its capability to visualize hypoxic microvasculature in cancerous tumors. A small lesion detected on the optoacoustic image of a patient was not visible on ultrasound, potentially indicating high sensitivity of the optoacoustic subsystem to small but aggressively growing cancerous lesions with high-density angiogenesis microvasculature. The main breast vasculature (0.5-1 mm) was visible at depths of up to 40 mm with 0.3 mm resolution. The results of this pilot clinical validation demonstrated that LOUISA-3D is ready for a statistically significant clinical feasibility study.

  10. Real-time 3-D X-ray and gamma-ray viewer

    NASA Technical Reports Server (NTRS)

    Yin, L. I. (Inventor)

    1983-01-01

    A multi-pinhole aperture lead screen forms an equal plurality of invisible mini-images having dissimilar perspectives of an X-ray and gamma-ray emitting object (ABC) onto a nearby phosphor layer. This layer provides visible light mini-images directly into a visible light image intensifier. A viewing screen, having an equal number of dissimilar perspective apertures distributed across its face in a geometric pattern identical to the lead screen, provides a viewer with a real, pseudoscopic image (A'B'C') of the object with full horizontal and vertical parallax. Alternatively, a third screen, identical to the viewing screen and spaced apart from a second visible light image intensifier, may be positioned between the first image intensifier and the viewing screen, thereby providing the viewer with a virtual, orthoscopic image (A"B"C") of the object (ABC) with full horizontal and vertical parallax.

  11. Viewing experience and naturalness of 3D images

    NASA Astrophysics Data System (ADS)

    Seuntiëns, Pieter J.; Heynderickx, Ingrid E.; IJsselsteijn, Wijnand A.; van den Avoort, Paul M. J.; Berentsen, Jelle; Dalm, Iwan J.; Lambooij, Marc T.; Oosting, Willem

    2005-11-01

    The term 'image quality' is often used to measure the performance of an imaging system. Recent research showed, however, that image quality may not be the most appropriate term to capture the evaluative processes associated with experiencing 3D images. The added value of depth in 3D images is clearly recognized when viewers judge the image quality of unimpaired 3D images against their 2D counterparts. However, when viewers are asked to rate the image quality of impaired 2D and 3D images, the image quality results for both 2D and 3D images are mainly determined by the introduced artefacts, and the addition of depth in the 3D images is hardly accounted for. In this experiment we applied and tested the more general evaluative concepts of 'naturalness' and 'viewing experience'. It was hypothesized that these concepts would better reflect the added value of depth in 3D images. Four scenes were used, varying in dimension (2D and 3D) and noise level (6 levels of white Gaussian noise). Results showed that both viewing experience and naturalness were rated higher in 3D than in 2D when the same noise level was applied. Thus, the added value of depth is clearly demonstrated when the concepts of viewing experience and naturalness are evaluated. The added value of 3D over 2D, expressed in noise level, was 2 dB for viewing experience and 4 dB for naturalness, indicating that naturalness is the more sensitive evaluative concept for demonstrating the psychological impact of 3D displays.

  12. S3D depth-axis interaction for video games: performance and engagement

    NASA Astrophysics Data System (ADS)

    Zerebecki, Chris; Stanfield, Brodie; Hogue, Andrew; Kapralos, Bill; Collins, Karen

    2013-03-01

    Game developers have yet to embrace and explore the interactive stereoscopic 3D medium. They typically view stereoscopy as a separate mode that can be disabled throughout the design process and rarely develop game mechanics that take advantage of the stereoscopic 3D medium. What if we designed games to be S3D-specific and instead treated traditional 2D viewing as the separate mode that can be disabled? The design choices made throughout such a process may yield interesting and compelling results. Furthermore, we believe that interaction within a stereoscopic 3D environment is more important than the visual experience itself, and therefore further exploration is needed to take into account the interactive affordances presented by stereoscopic 3D displays. Stereoscopic 3D displays allow players to perceive objects at different depths; thus we hypothesize that designing a core mechanic to take advantage of this viewing paradigm will create compelling content. In this paper, we describe Z-Fighter, a game we have developed that requires the player to interact directly along the stereoscopic 3D depth axis. We also outline an experiment conducted to investigate the performance, perception, and enjoyment of this game in stereoscopic 3D vs. traditional 2D viewing.

  13. Persistent aerial video registration and fast multi-view mosaicing.

    PubMed

    Molina, Edgardo; Zhu, Zhigang

    2014-05-01

    Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speed or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform online registration with drift correction. We split the persistent aerial imagery collection into individual cycles over the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all in alignment with the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second and later passes can be generated and visualized online, as there is no further batch error correction.
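    One way to picture the drift correction over a closed cycle (a simplified assumed model, not the paper's actual algorithm): pairwise frame-to-frame shifts accumulate error, and because a cyclic flight path returns to its start, the loop-closure residual can be distributed back along the chain:

```python
# Toy loop-closure drift correction for a cyclic pass, using 2D
# translations only (the real system registers full images).

def correct_drift(pairwise_shifts):
    """pairwise_shifts: per-frame (dx, dy) estimates around one closed cycle.
    Returns corrected cumulative positions with zero loop-closure error."""
    n = len(pairwise_shifts)
    # Accumulate raw positions from the pairwise estimates.
    pos, cum = [(0.0, 0.0)], (0.0, 0.0)
    for dx, dy in pairwise_shifts:
        cum = (cum[0] + dx, cum[1] + dy)
        pos.append(cum)
    # After a full cycle the residual should be (0, 0); spread the error
    # linearly over the chain so the loop closes exactly.
    ex, ey = pos[-1]
    return [(x - ex * i / n, y - ey * i / n) for i, (x, y) in enumerate(pos)]

corrected = correct_drift([(1.0, 0.0), (0.0, 1.0), (-1.1, 0.0), (0.0, -0.9)])
print(corrected[-1])  # closes back at the origin
```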

  14. A near-real-time full-parallax holographic display for remote operations

    NASA Technical Reports Server (NTRS)

    Iavecchia, Helene P.; Huff, Lloyd; Marzwell, Neville I.

    1991-01-01

    A near real-time, full-parallax holographic display system was developed that has the potential to provide a 3-D display for remote handling operations in hazardous environments. The major components of the system are a stack of three spatial light modulators (SLMs), which serves as the object source of the hologram; a near real-time holographic recording material (such as a thermoplastic or photopolymer); and an optical system for relaying the SLM images to the holographic recording material and to the observer for viewing.

  15. Compact 3D Camera for Shake-the-Box Particle Tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once the system is calibrated for a specific measurement volume, no recalibration is needed even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full measurement volume cubic meters in size is recorded and processed. Results from an experiment at TU Delft on the flow field around a cyclist are shown.

  16. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, the stereoscopic 3D technique has been combined with curved displays. However, despite the increased need for 3D function in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It is generally held that curved displays are more effective in enhancing involvement in specific content stories, because the field of view and the distances from the eyes of viewers to both edges of the screen are more natural on curved displays than on flat panel ones. For flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences in the viewing distances from the center of the screen to the eyes of viewers and from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  17. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2017-06-01

    Just as awareness of stereoscopic cinema has increased, so has the perception of the limitations of watching movies with 3D glasses. It is not only that the additional glasses are uncomfortable and annoying; there are tangible arguments for avoiding them. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with naked eyes, artificial 3D viewing with 3D glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in the 3D cinema, which they may have associated with insufficient image quality. Obviously, quality problems with 3D glasses can be solved by technical improvement. But this simple answer can, and already has, misled some decision makers into relaxing on the existing 3D-glasses solution. It needs to be underlined that there are inherent difficulties with the glasses which can never be solved by modest advancement, since the 3D glasses themselves cause them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate, if it were possible to create an affordable device and generate the necessary content in an acceptable time frame. All autostereoscopic displays based on the ideas of light fields, integral photography or super-multiview can be unified within the concept of hyperview. It is essential for their functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Calculating such a very high number of views requires much more computing time than the formation of a simple stereoscopic image pair.
The hyperview concept allows the screen image of any 3D technology to be described with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived even if all source images are different. It will be proven that several million perspectives can be rendered with the support of GPU rendering, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display designed to represent only a few perspectives can be used to show a hyperview image by means of a suitable hyperview matrix. It will be shown that a millions-of-views hyperview image can be presented on a conventional autostereoscopic display. For such a hyperview image, it is required that all pixels of the display are allocated to different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
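    The matrix-driven single pixel rendering can be illustrated with a toy example. The slanted view-assignment formula below is invented for illustration only; Grasnick's actual hyperview equation is not given in the abstract:

```python
# Toy hyperview matrix: a per-pixel table stores which source view each
# display pixel samples, and the composite is assembled by taking exactly
# one pixel from each referenced view ("single pixel rendering").

def build_hyperview_matrix(width, height, n_views):
    """Invented slanted assignment: the view index advances along x and y."""
    return [[(x + 2 * y) % n_views for x in range(width)] for y in range(height)]

def compose(matrix, views):
    """views: list of same-size images (2D lists); pick one pixel per cell."""
    return [[views[v][y][x] for x, v in enumerate(row)]
            for y, row in enumerate(matrix)]

m = build_hyperview_matrix(4, 2, n_views=3)
views = [[[v] * 4 for _ in range(2)] for v in range(3)]  # constant-color views
print(compose(m, views))  # → [[0, 1, 2, 0], [2, 0, 1, 2]]
```

    In a real display the matrix would reference millions of rendered perspectives, but the composition step stays this simple lookup.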

  18. ARMOR3D: A 3D multi-observations T,S,U,V product of the ocean

    NASA Astrophysics Data System (ADS)

    Verbrugge, Nathalie; Mulet, Sandrine; Guinehut, Stéphanie; Buongiorno-Nardelli, Bruno

    2017-04-01

    To obtain a synoptic view of the 3D ocean for oceanic studies, an observed gridded product can often be more useful than raw observations, which may be irregularly distributed in space and time (as in situ profiles are, for instance) or offer only a surface view of the ocean (as satellite data do). The originality of the ARMOR3D observation-based product is that it takes advantage of the strengths of these two types of data by combining satellite SLA, SST and SSS datasets with in situ T and S vertical profiles in order to build global 3D weekly temperature, salinity and geostrophic velocity fields at a 1/4° spatial resolution. The mesoscale content of the satellite data and the vertical sampling of the in situ profiles are complementary in this statistical approach. ARMOR3D is part of the CMEMS project through the GLO-OBS component. A full reprocessing from 1993 to 2016 and near-real-time fields from 1/1/2014 to present are available through the CMEMS web portal. The range of applications of this product is wide: OSE studies have already been conducted to evaluate the ARGO network, and in 2017, OSE and OSSE experiments will be performed in the western Tropical Pacific as part of the TPOS2020 project (Tropical Pacific Observing System 2020). The product is also useful for studying mesoscale eddy characteristics as well as links with biogeochemical processes. For example, in 2015, ARMOR3D fields were used as inputs to a micronekton model within the framework of the ESA OSMOSIS Project. Furthermore, ARMOR3D also contributes to the annual CMEMS Ocean State Report.

  19. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving it, rotating it, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles to collecting aerial imagery and generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  20. Influence of 2D and 3D view on performance and time estimation in minimal invasive surgery.

    PubMed

    Blavier, A; Nyssen, A S

    2009-11-01

    This study aimed to evaluate the impact of two-dimensional (2D) and three-dimensional (3D) images on time performance and time estimation during a surgical motor task. A total of 60 subjects without any surgical experience (nurses) and 20 expert surgeons performed a fine surgical task with a new laparoscopic technology (the da Vinci robotic system). The 80 subjects were divided into two groups, one using the 3D view option and the other the 2D view option. We measured time performance and asked subjects to verbally estimate their time performance. Our results showed faster performance in the 3D than in the 2D view for novice subjects, while performance in 2D and 3D was similar in the expert group. We obtained a significant interaction between time performance and time evaluation: in the 2D condition, all subjects accurately estimated their time performance, while they overestimated it in the 3D condition. Our results emphasise the role of 3D in improving performance and the contradictory feelings about time evaluation in 2D and 3D. This finding is discussed with regard to the retrospective paradigm and suggests that 2D and 3D images are processed and memorised differently.

  1. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has begun concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in the SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  2. Direct comparison of cardiac magnetic resonance feature tracking and 2D/3D echocardiography speckle tracking for evaluation of global left ventricular strain.

    PubMed

    Obokata, Masaru; Nagata, Yasufumi; Wu, Victor Chien-Chia; Kado, Yuichiro; Kurabayashi, Masahiko; Otsuji, Yutaka; Takeuchi, Masaaki

    2016-05-01

    Cardiac magnetic resonance (CMR) feature tracking (FT) with steady-state free precession (SSFP) has advantages over traditional myocardial tagging for analysing left ventricular (LV) strain. However, direct comparisons of CMRFT and 2D/3D echocardiography speckle tracking (2/3DEST) for measurement of LV strain are limited. The aim of this study was to investigate the feasibility and reliability of CMRFT and 2D/3DEST for measurement of global LV strain. We enrolled 106 patients who agreed to undergo both CMR and 2D/3DE on the same day. SSFP images at multiple short-axis and three apical views were acquired. 2DE images from three short-axis levels and three apical views, together with 3D full-volume datasets, were also acquired. Strain data were expressed as absolute values. Feasibility was highest for CMRFT, followed by 2DEST and 3DEST. Analysis time was shortest for 3DEST, followed by CMRFT and 2DEST. There was good global longitudinal strain (GLS) correlation between CMRFT and 2D/3DEST (r = 0.83 and 0.87, respectively), with limits of agreement (LOA) ranging from ±3.6 to ±4.9%. Excellent global circumferential strain (GCS) correlation between CMRFT and 2D/3DEST was observed (r = 0.90 and 0.88), with LOA of ±6.8-8.5%. Global radial strain showed fair correlations (r = 0.69 and 0.82, respectively), with LOA ranging from ±12.4 to ±16.3%. CMRFT GCS showed the least observer variability, with the highest intra-class correlation. Although not interchangeable, the high GLS and GCS correlation between CMRFT and 2D/3DEST makes CMRFT a useful modality for quantification of global LV strain in patients, especially those with suboptimal echo image quality.
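    The limits of agreement quoted above come from Bland-Altman analysis; a minimal sketch with made-up paired strain values (the data are illustrative, not from the study):

```python
# Bland-Altman limits of agreement: mean difference +/- 1.96 * SD of the
# paired differences. Input values below are invented for illustration.
import statistics

def limits_of_agreement(a, b):
    """Return (lower, upper) 95% limits of agreement for paired values."""
    diffs = [x - y for x, y in zip(a, b)]
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs)  # sample standard deviation
    return m - 1.96 * s, m + 1.96 * s

# Hypothetical GLS values (%) from two modalities on the same 4 patients:
lo, hi = limits_of_agreement([18.2, 20.1, 16.5, 19.0], [17.0, 21.0, 15.9, 18.4])
print(round(lo, 2), round(hi, 2))
```

    The interval (lo, hi) is centered on the mean bias between the two modalities; the narrower it is, the more interchangeable the measurements.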

  3. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is well suited to realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically but also can employ three reference views, such as left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.

  4. Viewing region maximization of an integral floating display through location adjustment of viewing window.

    PubMed

    Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho

    2007-10-01

    Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity of an observer. It has a viewing window, and only through this window can correct 3D images be observed. However, the positional difference between the viewing window and the floating image limits the viewing zone of the integral floating system. In this paper, we provide the principle and experimental results of adjusting the location of the viewing window of the integral floating display system by modifying the elemental image region for integral imaging. We explain the characteristics of the viewing window and propose how to move it to maximize the viewing zone.
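    The location adjustment can be pictured with a toy similar-triangles model (assumed geometry and units, not the paper's derivation): shifting every elemental image region laterally by the same amount relocates the viewing window sideways:

```python
# Toy model: a lateral relocation of the viewing window by window_offset
# (at distance window_distance from the lens array, with lens-to-image
# gap 'gap') corresponds to a uniform lateral shift of every elemental
# image region. All parameter values below are hypothetical.

def elemental_shifts(n_lenslets, gap, window_distance, window_offset):
    """Per-lenslet lateral shift (same units as the inputs) that recenters
    the viewing window at window_offset."""
    # Similar triangles: shift / gap = window_offset / window_distance.
    s = gap * window_offset / window_distance
    return [s] * n_lenslets  # a pure lateral move shifts all regions equally

shifts = elemental_shifts(n_lenslets=10, gap=2.0, window_distance=500.0,
                          window_offset=50.0)
print(shifts[0])  # shift per elemental image, in the same length units
```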

  5. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products, such as 3D televisions and 3D glasses for gaming systems. The technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.

  6. Impact of simulated three-dimensional perception on precision of depth judgements, technical performance and perceived workload in laparoscopy.

    PubMed

    Sakata, S; Grove, P M; Hill, A; Watson, M O; Stevenson, A R L

    2017-07-01

    This study compared precision of depth judgements, technical performance and workload using two-dimensional (2D) and three-dimensional (3D) laparoscopic displays across different viewing distances. It also compared the accuracy of 3D displays with natural viewing, along with the relationship between stereoacuity and 3D laparoscopic performance. A counterbalanced within-subjects design with random assignment to testing sequences was used. The system could display 2D or 3D images with the same set-up. A Howard-Dolman apparatus assessed precision of depth judgements, and three laparoscopic tasks (peg transfer, navigation in space and suturing) assessed performance (time to completion). Participants completed tasks in all combinations of two viewing modes (2D, 3D) and two viewing distances (1 m, 3 m). Other measures administered included the National Aeronautics and Space Administration Task Load Index (perceived workload) and the Randot® Stereotest (stereoacuity). Depth judgements were 6·2 times as precise at 1 m and 3·0 times as precise at 3 m using 3D versus 2D displays (P < 0·001). Participants performed all laparoscopic tasks faster in 3D at both 1 and 3 m (P < 0·001), with mean completion times up to 64 per cent shorter for 3D versus 2D displays. Workload was lower for 3D displays (by up to 34 per cent) than for 2D displays at both viewing distances (P < 0·001). Greater viewing distance inhibited performance for two laparoscopic tasks and increased perceived workload for all three (P < 0·001). Higher stereoacuity was associated with shorter completion times for the navigation in space task performed in 3D at 1 m (r = -0·40, P = 0·001). 3D displays offer large improvements over 2D displays in precision of depth judgements, technical performance and perceived workload.

  7. Digital 3D holographic display using scattering layers for enhanced viewing angle and image size

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun

    2017-05-01

    In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.
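    The "linear and deterministic relationship" the abstract relies on can be sketched with a toy transmission matrix: phase-conjugating one row of the matrix concentrates light into the corresponding output speckle mode. The matrix values below are random placeholders, not measured data:

```python
# Toy transmission-matrix focusing through a scattering layer.
import random, cmath, math

random.seed(0)
N = 64       # input modes (e.g., DMD segments)
M = 8        # monitored output speckle modes
target = 5   # output mode where a focus is desired

# Random unit-magnitude complex transmission matrix (one row per output).
T = [[cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]
     for _ in range(M)]

def output_intensity(row, wavefront):
    """Intensity at one output mode for a given complex input wavefront."""
    return abs(sum(t * w for t, w in zip(row, wavefront))) ** 2

flat = [1.0 + 0.0j] * N                    # unshaped input: weak speckle
conj = [t.conjugate() for t in T[target]]  # phase-conjugate input: focus

# All N contributions add in phase at the target, giving intensity N**2.
print(output_intensity(T[target], conj) / output_intensity(T[target], flat))
```

    This is the same principle that lets a digital micro-mirror device steer volume speckle into controlled 3D foci, generalized to many output modes at once.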

  8. Optimizing the beam pattern of a forward-viewing ring-annular ultrasound array for intravascular imaging.

    PubMed

    Wang, Yao; Stephens, Douglas N; O'Donnell, Matthew

    2002-12-01

    Intravascular ultrasound (IVUS) imaging systems using circumferential arrays mounted on cardiac catheter tips fire beams orthogonal to the principal axis of the catheter. Such a system produces high-resolution cross-sectional images but must be guided by conventional angioscopy. A real-time forward-viewing array, integrated into the same catheter, could greatly reduce radiation exposure by decreasing angiographic guidance. Unfortunately, the mounting requirement of a catheter guide wire prohibits a full-disk imaging aperture. Given only an annulus of array elements, prior theoretical investigations have considered only a circular ring of point transceivers and focusing strategies using all elements of a highly dense array, both impractical assumptions. In this paper, we consider a practical array geometry and signal processing architecture for a forward-viewing IVUS system. Our specific design uses a total of 210 transceiver firings with synthetic reconstruction for a given 3-D image frame. Simulation results demonstrate that this design can achieve side lobes under -40 dB on axis and under -30 dB when steering to the edge of an 80-degree cone.
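    The beam-pattern behaviour of a ring aperture can be sketched with a simple array-factor computation (the geometry and element count here are illustrative, not the paper's 210-firing design):

```python
# Far-field array factor of point sources on a ring in the z = 0 plane,
# for a plane-wave direction tilted by theta in the x-z plane.
import cmath, math

def ring_array_factor(n_elems, radius_wavelengths, theta_rad):
    """Normalized |sum of element phasors| for off-axis angle theta."""
    k = 2.0 * math.pi  # wavenumber with lengths in wavelengths
    total = 0j
    for i in range(n_elems):
        phi = 2.0 * math.pi * i / n_elems
        # Path-length difference for the element at ring angle phi.
        x = radius_wavelengths * math.cos(phi)
        total += cmath.exp(1j * k * x * math.sin(theta_rad))
    return abs(total) / n_elems

# On axis all elements add in phase (factor 1.0); off axis the sum decays,
# and the residual level hints at the side-lobe structure.
print(ring_array_factor(64, 5.0, 0.0), ring_array_factor(64, 5.0, 0.3))
```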

  9. Design and fabrication of directional diffractive device on glass substrate for multiview holographic 3D display

    NASA Astrophysics Data System (ADS)

    Su, Yanfeng; Cai, Zhijian; Liu, Quan; Zou, Wenlong; Guo, Peiliang; Wu, Jianhong

    2018-01-01

    Multiview holographic 3D display based on the nano-grating patterned directional diffractive device can provide 3D images with high resolution and wide viewing angle, which has attracted considerable attention. However, the current directional diffractive device fabricated on photoresist is vulnerable to damage, which leads to a short service life. In this paper, we propose a directional diffractive device on a glass substrate to increase its service life. In the design process, the period and the orientation of the nano-grating at each pixel are carefully calculated according to the predefined position of the viewing zone, and the groove parameters are designed by analyzing the diffraction efficiency of the nano-grating pixel on the glass substrate. In the experiment, a 4-view photoresist directional diffractive device with a full coverage of pixelated nano-grating arrays is efficiently fabricated by using an ultraviolet continuously variable spatial frequency lithography system, and then the nano-grating patterns on the photoresist are transferred to the glass substrate by combining ion beam etching and reactive ion beam etching to control the groove parameters precisely. The properties of the etched glass device are measured under the illumination of a collimated laser beam with a wavelength of 532 nm. The experimental results demonstrate that the light utilization efficiency is improved and optimized in comparison with the photoresist device. Furthermore, the fabricated device on glass substrate is easier to replicate and offers better durability and practicality, which shows great potential in the commercial applications of 3D display terminals.
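As a hedged sketch of the pixel-level design step described above (not the authors' actual code; geometry, names, and values are illustrative assumptions), the first-order grating equation at normal incidence gives the period and orientation that steer light from a pixel toward a predefined viewing zone:

```python
import numpy as np

# First-order grating equation at normal incidence: sin(theta) = wavelength / period.
# Pixel coordinates and the viewing-zone position are in metres; all values are
# illustrative assumptions, not the paper's design parameters.
def grating_pixel(pixel_xy, view_xyz, wavelength=532e-9):
    """Return (period, orientation) steering normally incident light of
    `wavelength` from the pixel toward the viewing point."""
    px, py = pixel_xy
    vx, vy, vz = view_xyz
    dx, dy = vx - px, vy - py                   # in-plane offset to the viewing zone
    theta = np.arctan2(np.hypot(dx, dy), vz)    # diffraction (polar) angle
    period = wavelength / np.sin(theta)         # first-order grating equation
    orientation = np.arctan2(dy, dx)            # grating-vector azimuth
    return period, orientation

# A pixel at the origin steering toward a viewer 25 cm away and 10 cm off-axis:
period, orientation = grating_pixel((0.0, 0.0), (0.10, 0.0, 0.25))
```

Repeating this per pixel yields the pixelated nano-grating array; the groove depth and duty-cycle optimization for diffraction efficiency on glass is a separate step not sketched here.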

  10. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  11. TU-CD-207-09: Analysis of the 3-D Shape of Patients’ Breast for Breast Imaging and Surgery Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agasthya, G; Sechopoulos, I

    2015-06-15

    Purpose: Develop a method to accurately capture the 3-D shape of patients’ external breast surface before and during breast compression for mammography/tomosynthesis. Methods: During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3-D breast surface imaging during breast compression and imaging for the cranio-caudal (CC) view on a digital mammography/breast tomosynthesis system. Digital projectors and cameras mounted on tripods were used to acquire 3-D surface images of the breast in three conditions: (a) positioned on the support paddle before compression, (b) during compression by the compression paddle and (c) the anterior-posterior view with the breast in its natural, unsupported position. The breast was compressed to standard full compression with the compression paddle and a tomosynthesis image was acquired simultaneously with the 3-D surface. The 3-D surface curvature and deformation with respect to the uncompressed surface was analyzed using contours. The 3-D surfaces were voxelized to capture breast shape in a format that can be manipulated for further analysis. Results: A protocol was developed to accurately capture the 3-D shape of patients’ breasts before and during compression for mammography. Using a pair of 3-D scanners, the 50 patient breasts were scanned in three conditions, resulting in accurate representations of the breast surfaces. The surfaces were post-processed, analyzed using contours and voxelized, with 1 mm³ voxels, converting the breast shape into a format that can be easily modified as required. Conclusion: Accurate characterization of the breast curvature and shape for the generation of 3-D models is possible. These models can be used for various applications such as improving breast dosimetry, accurate scatter estimation, conducting virtual clinical trials and validating compression algorithms. Ioannis Sechopoulos is a consultant for Fuji Medical Systems USA.
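The voxelization step lends itself to a compact sketch. The code below is a minimal illustration (array layout and names are assumptions, not the study's software): surface points in millimetres are binned onto a boolean occupancy grid with 1 mm³ voxels.

```python
import numpy as np

# Minimal voxelization sketch: map an (N, 3) array of surface points (mm)
# onto a boolean occupancy grid with 1 mm^3 voxels.
def voxelize(points_mm, voxel_mm=1.0):
    origin = points_mm.min(axis=0)                       # grid origin in mm
    idx = np.floor((points_mm - origin) / voxel_mm).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True         # mark occupied voxels
    return grid, origin

pts = np.array([[0.2, 0.2, 0.2], [1.6, 0.4, 0.3], [2.9, 2.1, 0.8]])
grid, origin = voxelize(pts)
```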

  12. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  13. WE-G-BRD-01: A Data-Driven 4D-MRI Motion Model to Estimate Full Field-Of-View Abdominal Motion From 2D Image Navigators During MR-Linac Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stemkens, B; Tijssen, RHN; Denis de Senneville, B

    2015-06-15

    Purpose: To estimate full field-of-view abdominal respiratory motion from fast 2D image navigators using a 4D-MRI based motion model. This will allow for radiation dose accumulation mapping during MR-Linac treatment. Methods: Experiments were conducted on a Philips Ingenia 1.5T MRI. First, a retrospectively ordered 4D-MRI was constructed using 3D transient-bSSFP with radial in-plane sampling. Motion fields were calculated through 3D non-rigid registration. From these motion fields a PCA-based abdominal motion model was constructed and used to warp a 3D reference volume to fast 2D cine-MR image navigators that can be used for real-time tracking. To test this procedure, a time-series consisting of two interleaved orthogonal slices (sagittal and coronal), positioned on the pancreas or kidneys, was acquired for 1m38s (dynamic scan time = 0.196 s) during normal, shallow, or deep breathing. The coronal slices were used to update the optimal weights for the first two PCA components, in order to warp the 3D reference image and construct a dynamic 4D-MRI time-series. The interleaved sagittal slices served as an independent measure to test the model’s accuracy and fit. Spatial maps of the root-mean-squared error (RMSE) and histograms of the motion differences within the pancreas and kidneys were used to evaluate the method. Results: Cranio-caudal motion was accurately calculated within the pancreas using the model for normal and shallow breathing, with an RMSE of 1.6 mm and 1.5 mm and a histogram median and standard deviation below 0.2 and 1.7 mm, respectively. For deep breathing, an underestimation of the inhale amplitude was observed (RMSE = 4.1 mm). Respiratory-induced antero-posterior and lateral motion were correctly mapped (RMSE = 0.6/0.5 mm). Kidney motion demonstrated good motion estimation, with RMSE values of 0.95 and 2.4 mm for the right and left kidney, respectively. Conclusion: We have demonstrated a method that can calculate dynamic 3D abdominal motion in a large volume while acquiring real-time cine-MR images for MR-guided radiotherapy.
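The PCA motion model admits a compact numerical sketch. The code below is a hedged illustration of the idea only (shapes, names, and the least-squares weight update are assumptions, not the authors' implementation): principal components are learned from the 4D-MRI motion fields, weights are fitted to the entries visible in a 2D navigator, and the full 3-D field is reconstructed from those weights.

```python
import numpy as np

# Hedged sketch of the PCA motion-model idea (not the authors' code).
def fit_pca(motion_fields, n_components=2):
    """motion_fields: (T, M) flattened 3-D displacement fields over T phases."""
    mean = motion_fields.mean(axis=0)
    _, _, Vt = np.linalg.svd(motion_fields - mean, full_matrices=False)
    return mean, Vt[:n_components]            # mean field + principal components

def estimate_full_motion(mean, comps, nav_obs, nav_idx):
    """Least-squares weights from navigator-visible entries, then full field."""
    A = comps[:, nav_idx].T                   # components restricted to the slice
    w, *_ = np.linalg.lstsq(A, nav_obs - mean[nav_idx], rcond=None)
    return mean + w @ comps                   # full-field motion estimate

# Synthetic check: rank-2 motion observed on 3 of 6 entries is recovered exactly.
rng = np.random.default_rng(0)
comps_true = rng.standard_normal((2, 6))
fields = rng.standard_normal((10, 2)) @ comps_true
mean, comps = fit_pca(fields)
nav_idx = np.array([0, 1, 2])
estimate = estimate_full_motion(mean, comps, fields[3, nav_idx], nav_idx)
```

In the synthetic check, rank-2 "motion" observed on only half the entries is reconstructed exactly, mirroring how two PCA components fitted to a coronal navigator can predict full-volume motion.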

  14. Optimization of Transmit Parameters in Cardiac Strain Imaging With Full and Partial Aperture Coherent Compounding.

    PubMed

    Sayseng, Vincent; Grondin, Julien; Konofagou, Elisa E

    2018-05-01

    Coherent compounding methods using the full or partial transmit aperture have been investigated as a possible means of increasing strain measurement accuracy in cardiac strain imaging; however, the optimal transmit parameters in either compounding approach have yet to be determined. The relationship between strain estimation accuracy and transmit parameters-specifically the subaperture, angular aperture, tilt angle, number of virtual sources, and frame rate-in partial aperture (subaperture compounding) and full aperture (steered compounding) fundamental mode cardiac imaging was thus investigated and compared. Field II simulation of a 3-D cylindrical annulus undergoing deformation and twist was developed to evaluate accuracy of 2-D strain estimation in cross-sectional views. The tradeoff between frame rate and number of virtual sources was then investigated via transthoracic imaging in the parasternal short-axis view of five healthy human subjects, using the strain filter to quantify estimation precision. Finally, the optimized subaperture compounding sequence (25-element subaperture, 90° angular aperture, 10 virtual sources, 300-Hz frame rate) was compared to the optimized steered compounding sequence (60° angular aperture, 15° tilt, 10 virtual sources, 300-Hz frame rate) via transthoracic imaging of five healthy subjects. Both approaches were determined to estimate cumulative radial strain with statistically equivalent precision (subaperture compounding E(SNRe %) = 3.56, and steered compounding E(SNRe %) = 4.26).

  15. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard cinematic storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ across individual stereoscopic displays, and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real-time computer games.
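The fixed "mapping algorithm" the study compares against can be stated in a few lines. This is a minimal sketch under assumed ranges (the values below are placeholders, not the study's settings); the paper's dynamic variant adjusts the scene range per shot rather than keeping it fixed.

```python
# Linearly remap scene depth into a limited display depth budget.
# scene_range and display_range are illustrative placeholder values (metres).
def map_depth(z_scene, scene_range=(1.0, 50.0), display_range=(-0.05, 0.10)):
    """Map scene depth to perceived depth relative to the screen plane."""
    s0, s1 = scene_range
    d0, d1 = display_range
    frac = (z_scene - s0) / (s1 - s0)   # 0 at the nearest scene depth, 1 at the farthest
    return d0 + frac * (d1 - d0)
```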

  16. Gestalt-like constraints produce veridical (Euclidean) percepts of 3D indoor scenes

    PubMed Central

    Kwon, TaeKyu; Li, Yunfeng; Sawada, Tadamasa; Pizlo, Zygmunt

    2015-01-01

    This study, which was strongly influenced by Gestalt ideas, extends our prior work on the role of a priori constraints in the veridical perception of 3D shapes to the perception of 3D scenes. Our experiments tested how human subjects perceive the layout of a naturally-illuminated indoor scene that contains common symmetrical 3D objects standing on a horizontal floor. In one task, the subject was asked to draw a top view of a scene that was viewed either monocularly or binocularly. The top views the subjects reconstructed were configured accurately except for their overall size. These size errors varied from trial to trial, and were shown most likely to result from the presence of a response bias. There was little, if any, evidence of systematic distortions of the subjects’ perceived visual space, the kind of distortions that have been reported in numerous experiments run under very unnatural conditions. Having shown this, we proceeded to use Foley’s (Vision Research 12 (1972) 323–332) isosceles right triangle experiment to test the intrinsic geometry of visual space directly. This was done with natural viewing, with the impoverished viewing conditions Foley had used, as well as with a number of intermediate viewing conditions. Our subjects produced very accurate triangles when the viewing conditions were natural, but their performance deteriorated systematically as the viewing conditions were progressively impoverished. Their perception of visual space became more compressed as their natural visual environment was degraded. Once this was shown, we developed a computational model that emulated the most salient features of our psychophysical results. We concluded that human observers see 3D scenes veridically when they view natural 3D objects within natural 3D environments. PMID:26525845

  17. LG-ANALYST: linguistic geometry for master air attack planning

    NASA Astrophysics Data System (ADS)

    Stilman, Boris; Yakhnis, Vladimir; Umanskiy, Oleg

    2003-09-01

    We investigate the technical feasibility of implementing LG-ANALYST, a new software tool based on the Linguistic Geometry (LG) approach. The tool will be capable of modeling and providing solutions to Air Force related battlefield problems and of conducting multiple experiments to verify the quality of the solutions it generates. LG-ANALYST will support generation of the Fast Master Air Attack Plan (MAAP) with subsequent conversion into Air Tasking Order (ATO). An Air Force mission is modeled employing abstract board games (ABG). Such a mission may include, for example, an aircraft strike package moving to a target area with the opposing side having ground-to-air missiles, anti-aircraft batteries, fighter wings, and radars. The corresponding abstract board captures 3D air space, terrain, the aircraft trajectories, positions of the batteries, strategic features of the terrain, such as bridges, and their status, radars and illuminated space, etc. Various animated views are provided by LG-ANALYST, including a 3D view for realistic representation of the battlespace and a 2D view for ease of analysis and control. LG-ANALYST will allow a user to model a full-scale intelligent enemy, plan in advance, and re-plan and control Blue and Red forces in real time by generating optimal (or near-optimal) strategies for all sides of a conflict.

  18. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  19. Numerical investigation on the viewing angle of a lenticular three-dimensional display with a triplet lens array.

    PubMed

    Kim, Hwi; Hahn, Joonku; Choi, Hee-Jin

    2011-04-10

    We investigate the viewing angle enhancement of a lenticular three-dimensional (3D) display with a triplet lens array. The theoretical limitations of the viewing angle and view number of the lenticular 3D display with the triplet lens array are analyzed numerically. For this, the genetic-algorithm-based design method of the triplet lens is developed. We show that a lenticular 3D display with viewing angle of 120° and 144 views without interview cross talk can be realized with the use of an optimally designed triplet lens array. © 2011 Optical Society of America

  20. The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.

    PubMed

    Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L

    2016-11-01

    Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned across different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. double vision images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.

  1. View subspaces for indexing and retrieval of 3D models

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel

    2010-02-01

    View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. The previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independent of others. In this work, we explore data driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with the competitor view-based 3D shape retrieval algorithms.
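As a small illustration of the subspace idea (PCA only; the paper also explores ICA and NMF, and all shapes here are placeholders), flattened depth-image views train a basis and each view is then described by its low-dimensional projection:

```python
import numpy as np

# Illustrative PCA-subspace sketch for view-based retrieval (not the paper's code).
def train_view_subspace(views, k=8):
    """views: (N, H*W) matrix of flattened depth images from the view sphere."""
    mean = views.mean(axis=0)
    _, _, Vt = np.linalg.svd(views - mean, full_matrices=False)
    return mean, Vt[:k]                       # mean view + k orthonormal basis vectors

def describe(view, mean, basis):
    return basis @ (view - mean)              # k-dimensional view descriptor

rng = np.random.default_rng(42)
views = rng.standard_normal((20, 64))         # 20 synthetic 8x8 depth views
mean, basis = train_view_subspace(views, k=5)
d = describe(views[0], mean, basis)
```

Retrieval then reduces to comparing descriptors, for example by Euclidean distance between the k-dimensional projections of query and database views.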

  2. Exploring point-cloud features from partial body views for gender classification

    NASA Astrophysics Data System (ADS)

    Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga

    2012-06-01

    In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success from our previous investigation was based on extracting features from full body coverage, which required integration of multiple camera images. With the full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty for gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. 
Our initial results provide the basis for further investigation of more complex partial body viewing problems and new methods for estimating the two position coordinates for the axis location and the unknown body orientation angle.
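The concentric-cylinder histogram feature itself is simple to sketch, assuming (as in the feasibility experiments above) that the cylindrical axis is known. Radii and names below are illustrative, not the paper's parameters.

```python
import numpy as np

# Count 3-D points in concentric annular shells about a known vertical axis.
def cylinder_histogram(points, axis_xy=(0.0, 0.0), radii=(0.1, 0.2, 0.3, 0.4)):
    """points: (N, 3) array; returns per-shell point counts."""
    r = np.hypot(points[:, 0] - axis_xy[0], points[:, 1] - axis_xy[1])
    edges = np.concatenate(([0.0], radii))         # shell boundaries
    counts, _ = np.histogram(r, bins=edges)
    return counts

pts = np.array([[0.05, 0.0, 0.0], [0.15, 0.0, 0.1],
                [0.25, 0.0, 0.2], [0.35, 0.0, 0.3]])
counts = cylinder_histogram(pts)
```

Displacing `axis_xy` changes the radial distances and hence the counts, which is exactly the axis-sensitivity the full-body experiments probe.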

  3. Full toroidal imaging of non-axisymmetric plasma material interaction in the National Spherical Torus Experiment divertor.

    PubMed

    Scotti, Filippo; Roquemore, A L; Soukhanovskii, V A

    2012-10-01

    A pair of two dimensional fast cameras with a wide angle view (allowing a full radial and toroidal coverage of the lower divertor) was installed in the National Spherical Torus Experiment in order to monitor non-axisymmetric effects. A custom polar remapping procedure and an absolute photometric calibration enabled the easier visualization and quantitative analysis of non-axisymmetric plasma material interaction (e.g., strike point splitting due to application of 3D fields and effects of toroidally asymmetric plasma facing components).
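A polar remap of the kind described can be sketched with nearest-neighbour sampling; the centre, radii, and grid sizes below are placeholders, not the NSTX calibration values.

```python
import numpy as np

# Nearest-neighbour polar remap: resample a camera image of an annular region
# onto a regular (radius, toroidal angle) grid. All parameters are examples.
def polar_remap(img, center, r_range, n_r=64, n_th=256):
    cy, cx = center
    r = np.linspace(r_range[0], r_range[1], n_r)
    th = np.linspace(0.0, 2.0 * np.pi, n_th, endpoint=False)
    R, TH = np.meshgrid(r, th, indexing="ij")
    yy = np.clip(np.round(cy + R * np.sin(TH)).astype(int), 0, img.shape[0] - 1)
    xx = np.clip(np.round(cx + R * np.cos(TH)).astype(int), 0, img.shape[1] - 1)
    return img[yy, xx]                             # (n_r, n_th) remapped image

img = np.zeros((101, 101))
img[50, 80] = 1.0                                  # bright spot at r=30, theta=0
remapped = polar_remap(img, center=(50, 50), r_range=(25.0, 35.0), n_r=11, n_th=8)
```

In such a remap, toroidally symmetric emission becomes horizontal bands, so non-axisymmetric features like strike-point splitting stand out as departures from those bands.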

  4. Impact of 2D and 3D vision on performance of novice subjects using da Vinci robotic system.

    PubMed

    Blavier, A; Gaudissart, Q; Cadière, G B; Nyssen, A S

    2006-01-01

    The aim of this study was to evaluate the impact of 3D and 2D vision on the performance of novice subjects using the da Vinci robotic system. A total of 224 nurses without any surgical experience were divided into two groups; one group executed a motor task with the robotic system in 2D and the other group in 3D. Time to perform the task was recorded. Our data showed significantly better time performance in the 3D view (24.67 +/- 11.2) than in the 2D view (40.26 +/- 17.49, P < 0.001). Our findings emphasize the advantage of 3D over 2D vision in performing a surgical task, encouraging the development of efficient and less expensive 3D systems to improve the accuracy of surgical gestures, resident training, and operating time.

  5. Dynamic accommodative response to different visual stimuli (2D vs 3D) while watching television and while playing Nintendo 3DS console.

    PubMed

    Oliveira, Sílvia; Jorge, Jorge; González-Méijome, José M

    2012-09-01

    The aim of the present study was to compare the accommodative response to the same visual content presented in two dimensions (2D) and stereoscopically in three dimensions (3D) while participants were either watching a television (TV) or Nintendo 3DS console. Twenty-two university students, with a mean age of 20.3 ± 2.0 years (mean ± S.D.), were recruited to participate in the TV experiment and fifteen, with a mean age of 20.1 ± 1.5 years took part in the Nintendo 3DS console study. The accommodative response was measured using a Grand Seiko WAM 5500 autorefractor. In the TV experiment, three conditions were used initially: the film was viewed in 2D mode (TV2D without glasses), the same sequence was watched in 2D whilst shutter-glasses were worn (TV2D with glasses) and the sequence was viewed in 3D mode (TV3D). Measurements were taken for 5 min in each condition, and these sections were sub-divided into ten 30-s segments to examine changes within the film. In addition, the accommodative response to three points of different disparity of one 3D frame was assessed for 30 s. In the Nintendo experiment, two conditions were employed - 2D viewing and stereoscopic 3D viewing. In the TV experiment no statistically significant differences were found between the accommodative response with TV2D without glasses (-0.38 ± 0.32D, mean ± S.D.) and TV3D (-0.37 ± 0.34D). Also, no differences were found between the various segments of the film, or between the accommodative response to different points of one frame (p > 0.05). A significant difference (p = 0.015) was found, however, between the TV2D with (-0.32 ± 0.32D) and without glasses (-0.38 ± 0.32D). In the Nintendo experiment the accommodative responses obtained in modes 2D (-2.57 ± 0.30D) and 3D (-2.49 ± 0.28D) were significantly different (paired t-test p = 0.03). 
The need to use shutter-glasses may affect the accommodative response during the viewing of displays, and the accommodative response when playing Nintendo 3DS in 3D mode is lower than when it is viewed in 2D. © 2012 The College of Optometrists.

  6. Super long viewing distance light homogeneous emitting three-dimensional display

    NASA Astrophysics Data System (ADS)

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image is free of aberration and has high-definition spatial resolution, making this display the first to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and real-time image updating.

  7. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers with high-resolution and full-color images being presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, implementation of such multi-projector design often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64 projectors), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, multiple views for the 3D display are generated in a time-multiplexed fashion by the single high speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. Therefore, the single projector is able to generate an equivalent number of multiview images from multiple viewing directions, thus fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.

  8. Vitamin D and gene networks in human osteoblasts

    PubMed Central

    van de Peppel, Jeroen; van Leeuwen, Johannes P. T. M.

    2014-01-01

    Bone formation is indirectly influenced by 1,25-dihydroxyvitamin D3 (1,25D3) through the stimulation of calcium uptake in the intestine and re-absorption in the kidneys. Direct effects on osteoblasts and bone formation have also been established. The vitamin D receptor (VDR) is expressed in osteoblasts and 1,25D3 modifies gene expression of various osteoblast differentiation and mineralization-related genes, such as alkaline phosphatase (ALPL), osteocalcin (BGLAP), and osteopontin (SPP1). 1,25D3 is known to stimulate mineralization of human osteoblasts in vitro, and recently it was shown that 1,25D3 induces mineralization via effects in the period preceding mineralization during the pre-mineralization period. For a full understanding of the action of 1,25D3 in osteoblasts it is important to get an integrated network view of the 1,25D3-regulated genes during osteoblast differentiation and mineralization. The current data will be presented and discussed, alluding to future studies to fully delineate the 1,25D3 action in osteoblasts. Describing and understanding the vitamin D regulatory networks and identifying the dominant players in these networks may help develop novel (personalized) vitamin D-based treatments. The following topics will be discussed in this overview: (1) Bone metabolism and osteoblasts, (2) Vitamin D, bone metabolism and osteoblast function, (3) Vitamin D induced transcriptional networks in the context of osteoblast differentiation and bone formation. PMID:24782782

  9. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capability, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid speed, and suitability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are applicable to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach based on machine vision is proposed for the 3-D profile measurement of tiny, complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated after a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of the proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
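The registration step described above aligns point clouds with a rigid transform estimated from matched features. A minimal sketch of the standard least-squares rigid-alignment step (the Kabsch/SVD solution) is shown below; feature detection and matching are assumed already done, the data are synthetic, and `rigid_align` is a hypothetical helper name, not code from the paper:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    given matched 3-D feature correspondences (Kabsch/SVD method)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = cd - R @ cs
    return R, t

# demo: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```

In practice each rotated scan would contribute one such pairwise alignment, and the aligned clouds are then merged into the complete inner-surface reconstruction.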

  10. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capability, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid speed, and suitability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are applicable to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach based on machine vision is proposed for the 3-D profile measurement of tiny, complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated after a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of the proposed method for future robotically driven industrial 3-D inspection.

  11. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience, partly because the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity between the left and right views on a flat screen instead of by a real object at the corresponding location. It may therefore be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators of both low-level perception and task performance are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on performance of the task used in the experiment.

  12. Dual-view inverted selective plane illumination microscopy (diSPIM) with improved background rejection for accurate 3D digital pathology

    NASA Astrophysics Data System (ADS)

    Hu, Bihe; Bolus, Daniel; Brown, J. Quincy

    2018-02-01

    Current gold-standard histopathology for cancerous biopsies is destructive, time consuming, and limited to 2D slices, which do not faithfully represent true 3D tumor micro-morphology. Light sheet microscopy has emerged as a powerful tool for 3D imaging of cancer biospecimens. Here, we utilize the versatile dual-view inverted selective plane illumination microscope (diSPIM) to render digital histological images of cancer biopsies. The dual-view architecture enables more isotropic resolution in X, Y, and Z, and different imaging modes, such as electronic confocal slit detection (eCSD) or structured illumination (SI), can be used to improve image quality degraded by the background signal of large, scattering samples. To obtain traditional H&E-like images, we used DRAQ5 and eosin (D&E) staining, with 488 nm and 647 nm laser illumination and multi-band filter sets. Phantom beads and a D&E-stained buccal cell sample were used to verify our dual-view method. We also show that, via dual-view imaging and deconvolution, more isotropic resolution is achieved for an optically cleared human prostate sample, providing more accurate quantitation of 3D tumor architecture than was possible with single-view SPIM methods. We demonstrate that the optimized diSPIM delivers more precise analysis of 3D cancer microarchitecture in human prostate biopsy than simpler light sheet microscopy arrangements.

  13. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    PubMed

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis through dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of acquired images, which usually contain missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. Multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. Image-driven methods were chosen to fully exploit the image quality and anatomical information present in the images, purposely excluding high-level constraints such as prior shape or motion knowledge from the analysis. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
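The effect of fusing aligned single-view volumes can be illustrated with a toy sketch: voxels covered by several views are averaged, and the valid region grows, extending the effective FOV. This is a simplified stand-in for the fusion used in the paper; the function name and the plain-averaging rule are assumptions:

```python
import numpy as np

def fuse_views(volumes, masks):
    """Fuse aligned single-view volumes by averaging wherever at least
    one view has valid data (mask == True); extends the effective FOV."""
    stack = np.stack(volumes).astype(float)
    valid = np.stack(masks)
    counts = valid.sum(0)
    fused = np.where(counts > 0,
                     (stack * valid).sum(0) / np.maximum(counts, 1),
                     0.0)
    return fused, counts > 0

# two toy "views" with partially overlapping fields of view
v1 = np.array([1.0, 1.0, 1.0, 0.0]); m1 = np.array([True, True, True, False])
v2 = np.array([0.0, 3.0, 3.0, 3.0]); m2 = np.array([False, True, True, True])
fused, fov = fuse_views([v1, v2], [m1, m2])
print(fused.tolist(), fov.all())   # [1.0, 2.0, 2.0, 3.0] True
```

The overlap region is averaged (reducing speckle-like noise), while voxels seen by only one view are kept as-is, so the fused FOV is the union of the single-view FOVs.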

  14. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
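The gradient information similarity metric that CMA-ES maximizes can be sketched as follows; this follows the commonly used formulation (after Pluim et al.) in which the smaller of the two gradient magnitudes is weighted by how well the gradient orientations align. The images here are synthetic stand-ins for the DRR and the fluoroscopic projection, and the function name is illustrative:

```python
import numpy as np

def gradient_information(a, b, eps=1e-12):
    """Gradient-information similarity: sum of min(|grad A|, |grad B|)
    weighted by the alignment of the two gradient orientations."""
    ay, ax = np.gradient(a.astype(float))    # np.gradient returns d/dy, d/dx
    by, bx = np.gradient(b.astype(float))
    ma = np.hypot(ax, ay)
    mb = np.hypot(bx, by)
    cos_t = (ax * bx + ay * by) / (ma * mb + eps)
    angle = np.arccos(np.clip(cos_t, -1.0, 1.0))
    w = (np.cos(2 * angle) + 1) / 2          # 1 for aligned, 0 for orthogonal
    return float(np.sum(w * np.minimum(ma, mb)))

img = np.add.outer(np.arange(32.0), np.arange(32.0))   # diagonal intensity ramp
print(gradient_information(img, img) > gradient_information(img, np.fliplr(img)))
```

In the registration loop, CMA-ES would perturb the six 3D pose parameters, regenerate the DRR, and keep the pose samples that increase this similarity.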

  15. 3. View from behind (D) fourroom cabin, showing relationship between ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. View from behind (D) four-room cabin, showing relationship between it and (A) mansion. View looking north-northeast. - Fort Hill Farm, Four-Room Cabin, West of Staunton (Roanoke) River between Turkey & Caesar's Runs, Clover, Halifax County, VA

  16. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two views overlapped on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly for evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
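The color-domain separation of the two overlapped views can be illustrated with a toy example: if the two optical paths tint the left view red and the right view blue, each full-resolution view is recovered from its own channel of the single color image. This sketch ignores the real system's reflection geometry and sensor crosstalk, and all names are illustrative:

```python
import numpy as np

# hypothetical left/right views reaching the sensor via the two optical paths
left  = np.random.default_rng(1).random((120, 160))
right = np.roll(left, 5, axis=1)      # stand-in for the second perspective

# the two views overlap on the sensor, but each occupies its own color
# channel: left in red, right in blue
color = np.zeros((120, 160, 3))
color[..., 0] = left
color[..., 2] = right

# separation in the color domain recovers both full-resolution views
left_rec, right_rec = color[..., 0], color[..., 2]
print(np.array_equal(left_rec, left) and np.array_equal(right_rec, right))
```

The recovered channel pair can then be fed to a standard 3D-DIC stereo matcher exactly as if it came from two cameras.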

  17. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph.

    PubMed

    Munbodh, Reshma; Knisely, Jonathan Ps; Jaffray, David A; Moseley, Douglas J

    2018-05-01

    We present and evaluate a fully automated 2D-3D intensity-based registration framework using a single limited field-of-view (FOV) 2D kV radiograph and a 3D kV CBCT for 3D estimation of patient setup errors during brain radiotherapy. We evaluated two similarity measures: the Pearson correlation coefficient on image intensity values (ICC), and a maximum likelihood measure with Gaussian noise (MLG) derived from the statistics of transmission images. Pose determination experiments were conducted on 2D kV radiographs in the anterior-posterior (AP) and left lateral (LL) views and 3D kV CBCTs of an anthropomorphic head phantom. In order to minimize radiation exposure and exclude nonrigid structures from the registration, limited-FOV 2D kV radiographs were employed. A spatial frequency band useful for the 2D-3D registration was identified from the bone-to-no-bone spectral ratio (BNBSR) of digitally reconstructed radiographs (DRRs) computed from the 3D kV planning CT of the phantom. The images being registered were filtered accordingly prior to computation of the similarity measures. We evaluated the registration accuracy achievable with a single 2D kV radiograph and with the registration results from the AP and LL views combined. We also compared the performance of the proposed 2D-3D registration solutions to that of a commercial 3D-3D registration algorithm, which used the entire skull for the registration. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The accuracy of the 2D-3D registration solutions, as quantified by the root mean squared value of the target registration error (TRE) calculated over a radius of 3 cm for all poses tested, was ICC(AP): 0.56 mm, MLG(AP): 0.74 mm, ICC(LL): 0.57 mm, MLG(LL): 0.54 mm, ICC(AP and LL combined): 0.19 mm, and MLG(AP and LL combined): 0.21 mm. The accuracy of the 3D-3D registration algorithm was 0.27 mm.
There was no significant difference in mean TRE between the single-radiograph 2D-3D registrations across similarity measures and image views. There was no significant difference in mean TRE between ICC(LL), MLG(LL), ICC(AP and LL combined), MLG(AP and LL combined), and the 3D-3D registration algorithm, despite the smaller FOV used for the 2D-3D registration. While submillimeter registration accuracy was obtained with both ICC and MLG using a single 2D kV radiograph, combining the results from the two projection views resulted in a significantly smaller (P ≤ 0.05) mean TRE. Our results indicate that it is possible to achieve submillimeter registration accuracy with both ICC and MLG using either single or dual limited-FOV 2D kV radiographs of the head in the AP and LL views. The registration accuracy suggests that the 2D-3D registration solutions presented are suitable for the estimation of patient setup errors not only during conventional brain radiation therapy, but also during stereotactic procedures and proton radiation therapy, where tighter setup margins are required. © 2018 American Association of Physicists in Medicine.
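The ICC similarity measure above is simply the Pearson correlation coefficient computed over the (band-pass filtered) image intensities of the DRR and the radiograph. A minimal sketch with synthetic data; the function name is illustrative:

```python
import numpy as np

def icc(fixed, moving):
    """Pearson correlation coefficient on image intensity values (ICC)."""
    f = fixed.ravel() - fixed.mean()
    m = moving.ravel() - moving.mean()
    return float(f @ m / (np.linalg.norm(f) * np.linalg.norm(m)))

a = np.random.default_rng(2).random((64, 64))
print(round(icc(a, 2.5 * a + 1.0), 6))   # affine intensity change -> 1.0
print(round(icc(a, -a), 6))              # inverted contrast -> -1.0
```

Because ICC is invariant to affine intensity changes, the DRR need not match the radiograph's absolute exposure level, only its structure within the selected frequency band.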

  18. Determination of depth-viewing volumes for stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.

    1990-01-01

    Real-world, 3-D, pictorial displays incorporating true depth cues via stereopsis techniques offer a potential means of displaying complex information in a natural way to prevent loss of situational awareness and provide increases in pilot/vehicle performance in advanced flight display concepts. Optimal use of stereopsis requires an understanding of the depth viewing volume available to the display designer. Suggested guidelines are presented for the depth viewing volume from an empirical determination of the effective region of stereopsis cueing (at several viewer-CRT screen distances) for a time multiplexed stereopsis display system. The results provide the display designer with information that will allow more effective placement of depth information to enable the full exploitation of stereopsis cueing. Increasing viewer-CRT screen distances provides increasing amounts of usable depth, but with decreasing fields-of-view. A stereopsis hardware system that permits an increased viewer-screen distance by incorporating larger screen sizes or collimation optics to maintain the field-of-view at required levels would provide a much larger stereo depth-viewing volume.
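The depth-viewing volume follows from the stereoscopic viewing geometry: for interocular distance e, viewer-to-screen distance D, and on-screen parallax p (positive for uncrossed disparity), similar triangles give the perceived distance Z = eD/(e - p). The sketch below, with assumed typical values, illustrates how perceived depth is placed behind or in front of the screen, and why the depth range for a given parallax budget grows with viewer-screen distance:

```python
def perceived_depth(e_mm, d_mm, p_mm):
    """Perceived distance of a fused point from the viewer, from
    similar triangles: Z = e*D / (e - p). p > 0 (uncrossed) places the
    point behind the screen, p < 0 (crossed) in front, p = 0 on it."""
    return e_mm * d_mm / (e_mm - p_mm)

e, d = 65.0, 600.0   # assumed interocular distance and screen distance (mm)
print(perceived_depth(e, d, 0.0))                                  # 600.0
print(perceived_depth(e, d, 10.0) > d, perceived_depth(e, d, -10.0) < d)
```

For a fixed parallax p, Z scales linearly with D, consistent with the finding above that larger viewer-CRT distances provide more usable depth.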

  19. 3D-printed eagle eye: Compound microlens system for foveated imaging

    PubMed Central

    Thiele, Simon; Arzenbacher, Kathrin; Gissibl, Timo; Giessen, Harald; Herkommer, Alois M.

    2017-01-01

    We present a highly miniaturized camera, mimicking the natural vision of predators, by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an increasing angular resolution of up to 2 cycles/deg field of view in the center of the image. The footprint of the optics on the chip is below 300 μm × 300 μm, whereas their height is <200 μm. Because the four lenses are printed in one single step without the necessity for any further assembling or alignment, this approach allows for fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, or security. PMID:28246646

  20. Fisheye camera around view monitoring system

    NASA Astrophysics Data System (ADS)

    Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen

    2018-04-01

    The 360-degree around-view monitoring system is a key technology in advanced driver assistance systems, used to help the driver cover blind areas, and it has high application value. In this paper, we study the transformation relationships between multiple coordinate systems in order to generate a panoramic image in a unified car coordinate system. First, the panoramic image is divided into four regions. Using the parameters obtained by calibration, the pixels of the four fisheye images corresponding to the four sub-regions are mapped into the constructed panoramic image. On the basis of the 2D around-view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around-view schemes in the unified coordinate system; the 3D scheme overcomes the shortcomings of the traditional 2D scheme, such as a small visual field and prominent deformation of objects on the ground. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image, giving the system very high application value.
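The calibration-driven mapping of fisheye pixels into a panorama sub-region amounts to a precomputed per-pixel lookup table: for every panorama pixel, calibration yields the source coordinate in one of the four fisheye images. A toy sketch, with illustrative array names and sizes:

```python
import numpy as np

def warp_to_panorama(fisheye, map_y, map_x):
    """Fill a panorama sub-region by sampling the fisheye image at the
    calibrated source coordinates (nearest-neighbor lookup table)."""
    return fisheye[map_y, map_x]

fish = np.arange(16).reshape(4, 4)
# toy LUT: a 2x2 panorama region sampling the fisheye's central pixels
my = np.array([[1, 1], [2, 2]])
mx = np.array([[1, 2], [1, 2]])
print(warp_to_panorama(fish, my, mx))
```

The 3D variant changes only how the LUT is built (projection onto a bowl-shaped surface instead of the ground plane); the per-pixel remapping at runtime is the same.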

  1. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    technique to present multiple 2D views was devised by D. Asimov. He assembled multiple two-dimensional scatter plot views of the hyper dimensional...Viewing Multidimensional Data", D. Asimov, SIAM Journal on Scientific and Statistical Computing, vol. 6, pp. 128-143, 1985. [2] "High-Dimensional

  2. Report for Task 8.4: Development of Control Room Layout Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, Robert

    Idaho National Laboratory (INL) has contracted Institutt for Energiteknikk (IFE) to support the development of an end-state vision for the US nuclear industry, in particular for a utility that is currently moving forward with a control room modernization project. This support includes the development of an Overview display and technical support in conducting an operational study, including the development of operational scenarios to be conducted using a full-scope simulator at the INL HSSL. Additionally, IFE will use the CREATE modelling tool to provide 3-D views of the possible end state after completion of the digital upgrade project.

  3. First responder tracking and visualization for command and control toolkit

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Petrov, Plamen; Meisinger, Roger

    2010-04-01

    In order for First Responder Command and Control personnel to visualize incidents at urban building locations, DHS sponsored a small business research program to develop a tool to visualize 3D building interiors and the movement of First Responders on site. 21st Century Systems, Inc. (21CSI) has developed a toolkit called Hierarchical Grid Referenced Normalized Display (HiGRND). HiGRND utilizes three components to provide a full spectrum of visualization tools to the First Responder. First, HiGRND visualizes the structure in 3D. Utilities in the 3D environment allow the user to switch between views (2D floor plans, 3D spatial, evacuation routes, etc.) and manually edit fast-changing environments. HiGRND accepts CAD drawings and 3D digital objects and renders these in the 3D space. Second, HiGRND has a First Responder tracker that uses the transponder signals from First Responders to locate them in the virtual space. We use the movements of the First Responders to map the interior of structures. Finally, HiGRND can turn 2D blueprints into 3D objects. The 3D extruder extracts walls, symbols, and text from scanned blueprints to create the 3D mesh of the building. HiGRND increases the situational awareness of First Responders and allows them to make better, faster decisions in critical urban situations.

  4. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited].

    PubMed

    Xiao, Xiao; Javidi, Bahram; Martinez-Corral, Manuel; Stern, Adrian

    2013-02-01

    Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
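Computational reconstruction in integral imaging is commonly done by shifting each elemental image according to its lens position and averaging, which brings one depth plane into focus. A minimal 1-D sketch with a synthetic point object; the function name and pixel pitch are assumptions for illustration, not specifics from the review:

```python
import numpy as np

def shift_and_sum(elemental, pitch_px):
    """Reconstruct one depth plane: shift each elemental image by its
    lens offset (pitch_px per lens) and average the stack."""
    n = elemental.shape[0]
    acc = np.zeros(elemental.shape[1:])
    for k in range(n):
        acc += np.roll(elemental[k], -k * pitch_px, axis=1)
    return acc / n

# elemental images of a point object seen through a 1-D lens array:
# the point's image shifts by 3 px from one lens to the next
imgs = np.zeros((5, 1, 40))
for k in range(5):
    imgs[k, 0, 10 + 3 * k] = 1.0

recon = shift_and_sum(imgs, 3)
print(int(recon.argmax()), round(float(recon.max()), 2))   # 10 1.0
```

Objects at other depths have a different per-lens shift, so their contributions do not align and are averaged away, which is how the same principle enables the 3D imaging of partially occluded objects mentioned above.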

  5. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  6. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    PubMed

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head-position tracking, stereoscopic depth perception, focal-point convergence, and a 3D cursor; a joystick-enabled fly-through allowed visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.

  7. Role of stereoscopic imaging in the astronomical study of nearby stars and planetary systems

    NASA Astrophysics Data System (ADS)

    Mark, David S.; Waste, Corby

    1997-05-01

    The development of stereoscopic imaging as a 3D spatial mapping tool for planetary science is now beginning to find greater usefulness in the study of stellar atmospheres and planetary systems in general. For the first time, telescopes and accompanying spectrometers have demonstrated the capacity to depict the gyrating motion of nearby stars so precisely as to derive the existence of closely orbiting Jovian-type planets, which are gravitationally influencing the motion of the parent star. Also for the first time, remote spaceborne telescopes, unhindered by atmospheric effects, are recording and tracking the rotational characteristics of our nearby star, the sun, so accurately as to reveal and identify in great detail the heightened turbulence of the sun's corona. In order to perform new forms of stereo imaging and 3D reconstruction with such large-scale objects as stars and planets within solar systems, a set of geometrical parameters must be observed, and these are illustrated here. The behavior of nearby stars can be studied over time using an astrometric approach, making use of the earth's orbital path as a semi-yearly stereo base for the viewing telescope. As is often the case in this method, the resulting stereo angle becomes too narrow to afford a beneficial stereo view, given the star's distance and the general level of detected noise in the signal. With the advent, though, of new earth-based and spaceborne interferometers, operating within various wavelengths including IR, the capability of detecting and assembling the full 3-dimensional axes of motion of nearby gyrating stars can be achieved. In addition, the coupling of large interferometers with combined data sets can provide large stereo bases and low signal noise to produce converging 3-dimensional stereo views of nearby planetary systems.
Several groups of new astronomical stereo imaging data sets are presented, including 3D views of the sun taken by the Solar and Heliospheric Observatory, coincident stereo views of the planet Jupiter during impact of comet Shoemaker-Levy 9, taken by the Galileo spacecraft and the Hubble Space Telescope, as well as views of nearby stars. Spatial ambiguities arising in singular 2-dimensional viewpoints are shown to be resolvable in twin perspective, 3-dimensional stereo views. Stereo imaging of this nature, therefore, occupies a complementary role in astronomical observing, provided the proper fields of view correspond with the path of the orbital geometry of the observing telescope.

  8. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
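The "create views locally at the display" approach typically relies on depth-image-based rendering: each pixel of a reference view is shifted by a disparity proportional to its inverse depth to synthesize a neighboring view. A deliberately naive sketch (no hole filling or blending; names and the disparity model are illustrative):

```python
import numpy as np

def synthesize_view(texture, depth, baseline_px):
    """Naive depth-image-based rendering: forward-warp each pixel by a
    disparity ~ baseline / depth; nearer pixels win occlusion conflicts."""
    h, w = texture.shape
    out = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            d = int(round(baseline_px / depth[y, x]))  # disparity ~ 1/depth
            nx = x + d
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                out[y, nx] = texture[y, x]
                zbuf[y, nx] = depth[y, x]
    return out

tex = np.arange(8.0).reshape(1, 8)
dep = np.full((1, 8), 4.0)               # fronto-parallel plane at depth 4
print(synthesize_view(tex, dep, 8.0))    # whole plane shifts by 2 px
```

Production systems add inpainting of disoccluded holes and blend contributions from several reference views, which is where most of the design challenges noted in the paper arise.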

  9. Stereo Viewing Modulates Three-Dimensional Shape Processing During Object Recognition: A High-Density ERP Study

    PubMed Central

    2017-01-01

    The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128-channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and by global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts, for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations, and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part structure and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape. PMID:29022728

  10. Effect of mental fatigue caused by mobile 3D viewing on selective attention: an ERP study.

    PubMed

    Mun, Sungchul; Kim, Eun-Soo; Park, Min-Chul

    2014-12-01

    This study investigated behavioral responses to, and auditory event-related potential (ERP) correlates of, mental fatigue caused by mobile three-dimensional (3D) viewing. Twenty-six participants (14 women) performed a selective attention task in which they were asked to respond to sounds presented at the attended side while ignoring sounds at the ignored side, before and after mobile 3D viewing. Given different individual susceptibilities to 3D, participants' subjective fatigue data were used to categorize them into two groups: fatigued and unfatigued. The amplitudes of d-ERP components were defined as the differences in amplitude between the time-locked brain oscillations for attended and ignored sounds, and these values were used to quantify the degree to which spatial selective attention was impaired by 3D mental fatigue. The fatigued group showed significantly longer response times after mobile 3D viewing compared to before the viewing. However, response accuracy did not change significantly between the two conditions, implying that the participants used a behavioral strategy of increasing their response times to counter a decrement in performance accuracy. No significant differences were observed for the unfatigued group. Analysis of covariance revealed group differences, with significant decreases, and trends toward significant decreases, in the d-P200 and d-late positive potential (d-LPP) amplitudes at the occipital electrodes of the fatigued and unfatigued groups. Our findings indicate that mentally fatigued participants did not effectively block out distractors in their information processing, supporting the hypothesis that 3D mental fatigue impairs spatial selective attention and is characterized by changes in d-P200 and d-LPP amplitudes. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Power and Thermal Technology for Air and Space-Scientific Research Program Delivery Order 0003: Electrical Technology Component Development

    DTIC Science & Technology

    2007-03-01

    Excerpts: specific contact resistivity of Ti/AlNi/Au … the full-view 3D model of the IGBT … 2D temperature distribution of the SiC … comprised of multiple materials. The representative geometry of a Si insulated-gate bipolar transistor (IGBT) was chosen for the initial simulation … samples annealed at 650°C for 30 minutes, in either the tube furnace with an oxygen-gettering system or the vacuum chamber, represented the superior …

  12. Stereo View of Phoenix Test Sample Site

    NASA Image and Video Library

    2008-06-02

    This anaglyph image, acquired by NASA’s Phoenix Lander’s Surface Stereo Imager on June 1, 2008, shows a stereoscopic 3D view of the so-called Knave of Hearts first-dig test area to the north of the lander. 3D glasses are necessary to view this image.

  13. Three-dimensional/two-dimensional multiplanar stereotactic planning system: hardware and software configuration

    NASA Astrophysics Data System (ADS)

    Zamorano, Lucia J.; Dujovny, Manuel; Ausman, James I.

    1990-01-01

    "Real time" surgical treatment planning utilizing multimodality imaging (CT, MRI, DA) has been developed to provide the neurosurgeon with 2D multiplanar and 3D views of a patient's lesion for stereotactic planning. Both diagnostic and therapeutic stereotactic procedures have been implemented utilizing workstation (SUN 1/10) and specially developed software and hardware (developed in collaboration with TOMO Medical Imaging Technology, Southfield, MI). This provides complete 3D and 2D free-tilt views as part of the system instrumentation. The 2D Multiplanar includes reformatted sagittal, coronal, paraaxial and free tilt oblique vectors at any arbitrary plane of the patient's lesion. The 3D includes features for extracting a view of the target volume localized by a process including steps of automatic segmentation, thresholding, and/or boundary detection with 3D display of the volumes of interest. The system also includes the capability of interactive playback of reconstructed 3D movies, which can be viewed at any hospital network having compatible software on strategical locations or at remote sites through data transmission and record documentation by image printers. Both 2D and 3D menus include real time stereotactic coordinate measurements and trajectory definition capabilities as well as statistical functions for computing distances, angles, areas, and volumes. A combined interactive 3D-2D multiplanar menu allows simultaneous display of selected trajectory, final optimization, and multiformat 2D display of free-tilt reformatted images perpendicular to selected trajectory of the entire target volume.

  14. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach to the stereo matching and view interpolation problems, based on triangular tessellations and suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and to increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage adjusts the disparity at the triangle vertices, generating a piecewise-linear disparity map. A simple post-processing procedure connects triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to synthesize new views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly on GPUs. Furthermore, the generated views are hole-free, unlike those of most point-based view interpolation schemes, which require some kind of post-processing to fill holes.
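    A piecewise-linear disparity map of the kind described above interpolates vertex disparities with barycentric weights inside each triangle. A toy single-triangle sketch (coordinates and disparities are made up, and this is only the interpolation step, not the paper's matching algorithm):

```python
import numpy as np

def barycentric_disparity(tri_xy, tri_disp, p):
    """Piecewise-linear disparity at point p inside one triangle.
    tri_xy: 3x2 vertex coordinates; tri_disp: disparity at each vertex."""
    # Solve for barycentric weights w with w @ tri_xy = p and w.sum() = 1
    A = np.vstack([tri_xy.T, np.ones(3)])
    w = np.linalg.solve(A, np.array([p[0], p[1], 1.0]))
    return float(w @ tri_disp)

tri  = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
disp = np.array([2.0, 4.0, 6.0])
# Midpoint of the edge between the second and third vertices
print(barycentric_disparity(tri, disp, (5.0, 5.0)))  # 5.0
```

    Rendering the full mesh then amounts to doing this interpolation per pixel, which is exactly what GPU rasterizers do for free.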

  15. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  16. 3D medical collaboration technology to enhance emergency healthcare.

    PubMed

    Welch, Gregory F; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj K; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E

    2009-04-19

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.

  17. Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang

    2014-08-01

    We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for three-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie equation of motion; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a 9-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionalities of an ELC lens.
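    The Jones matrix formalism mentioned above composes 2×2 complex matrices for each optical element. A minimal single-layer sketch, an idealized uniform retarder between crossed polarizers rather than the paper's full extended multilayer model:

```python
import numpy as np

def lc_transmission(theta, delta):
    """Intensity through: x polarizer -> LC retarder (fast axis at angle
    theta, retardance delta) -> crossed (y) analyzer, via Jones matrices."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Retarder in its own frame, rotated into the lab frame
    W = R @ np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)]) @ R.T
    E_in = np.array([1.0, 0.0])            # light after the x polarizer
    analyzer = np.array([[0, 0], [0, 1]])  # crossed (y) analyzer
    E_out = analyzer @ W @ E_in
    return float((np.abs(E_out) ** 2).sum())

# Half-wave retardance at 45 deg rotates the polarization by 90 deg:
print(round(lc_transmission(np.pi / 4, np.pi), 6))  # 1.0 (full transmission)
```

    The analytic result for this geometry is sin²(2θ)·sin²(δ/2); an extended Jones treatment generalizes this to oblique incidence and stacks of thin LC slabs with varying director orientation.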

  18. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step toward uncovering how they propel themselves through the water. Previous approaches have relied on profile capture or sparse manual tracking of 3D feature points. This research uses an array of 30 cameras to automatically track hundreds of points on fish as they swim in 3D using multi-view stereo. Blacktip sharks, stingrays, puffer fish, turtles, and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland, using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown, and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.
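    Multi-view stereo reconstruction ultimately rests on triangulating matched points from calibrated cameras. A minimal two-view linear (DLT) triangulation sketch, with hypothetical projection matrices standing in for two of the 30 cameras:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one offset 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / (P1 @ np.append(X_true, 1))[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / (P2 @ np.append(X_true, 1))[2]
print(triangulate(P1, P2, x1, x2))  # recovers [0.2, 0.1, 2.0]
```

    With 30 cameras, each tracked point contributes two such rows per view, and the same SVD solve over the taller stacked system gives a least-squares 3D position.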

  19. Acceleration techniques and their impact on arterial input function sampling: Non-accelerated versus view-sharing and compressed sensing sequences.

    PubMed

    Benz, Matthias R; Bongartz, Georg; Froehlich, Johannes M; Winkel, David; Boll, Daniel T; Heye, Tobias

    2018-07-01

    The aim was to investigate the variation of the arterial input function (AIF) within and between various DCE MRI sequences. A dynamic flow-phantom and steady signal reference were scanned on a 3T MRI using fast low angle shot (FLASH) 2d, FLASH3d (parallel imaging factor (P) = P0, P2, P4), volumetric interpolated breath-hold examination (VIBE) (P = P0, P3, P2 × 2, P2 × 3, P3 × 2), golden-angle radial sparse parallel imaging (GRASP), and time-resolved imaging with stochastic trajectories (TWIST). Signal over time curves were normalized and quantitatively analyzed by full width half maximum (FWHM) measurements to assess variation within and between sequences. The coefficient of variation (CV) for the steady signal reference ranged from 0.07-0.8%. The non-accelerated gradient echo FLASH2d, FLASH3d, and VIBE sequences showed low within sequence variation with 2.1%, 1.0%, and 1.6%. The maximum FWHM CV was 3.2% for parallel imaging acceleration (VIBE P2 × 3), 2.7% for GRASP and 9.1% for TWIST. The FWHM CV between sequences ranged from 8.5-14.4% for most non-accelerated/accelerated gradient echo sequences except 6.2% for FLASH3d P0 and 0.3% for FLASH3d P2; GRASP FWHM CV was 9.9% versus 28% for TWIST. MRI acceleration techniques vary in reproducibility and quantification of the AIF. Incomplete coverage of the k-space with TWIST as a representative of view-sharing techniques showed the highest variation within sequences and might be less suited for reproducible quantification of the AIF. Copyright © 2018 Elsevier B.V. All rights reserved.
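    The FWHM-based variation analysis described above can be sketched as follows; synthetic Gaussian curves stand in for the measured AIF signals, and the widths and CV are illustrative only:

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a single-peaked curve whose peak is
    interior to the sampling window, via linear interpolation of the
    half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    tl = np.interp(half, [y[i - 1], y[i]], [t[i - 1], t[i]])  # left crossing
    tr = np.interp(half, [y[j + 1], y[j]], [t[j + 1], t[j]])  # right crossing
    return tr - tl

t = np.linspace(0, 60, 6001)
# Three repeated "measurements" with slightly different widths (sigma)
curves = [np.exp(-(t - 30) ** 2 / (2 * s ** 2)) for s in (4.0, 4.2, 4.4)]
widths = np.array([fwhm(t, c) for c in curves])     # ~2.355 * sigma each
cv = widths.std(ddof=1) / widths.mean() * 100       # coefficient of variation, %
print(widths, cv)
```

    A sequence that reproduces the bolus shape well yields tightly clustered FWHM values (low CV); view-sharing artifacts that smear the peak inflate both the widths and their spread.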

  20. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected, and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector that leverages the expressive power of 3D object representations while at the same time being robustly matchable to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  1. Further advances in autostereoscopic technology at Dimension Technologies Inc.

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Jesse B.

    1992-06-01

    Dimension Technologies is currently one of three companies offering autostereoscopic displays for sale and one of several actively pursuing advances in the technology. We have devised a new autostereoscopic imaging technique which possesses several advantages over previously explored methods. We are currently manufacturing autostereoscopic displays based on this technology, as well as vigorously pursuing research and development toward more advanced displays. During the past year, DTI has made major strides in advancing its LCD-based autostereoscopic display technology. DTI has developed a color product: a stand-alone 640 x 480 flat-panel LCD-based 3-D display capable of accepting input from IBM PC and Apple Macintosh computers or TV cameras, and capable of changing from 3-D mode to 2-D mode with the flip of a switch. DTI is working on a prototype second-generation color product that will provide autostereoscopic 3-D while allowing each eye to see the full resolution of the liquid crystal display. Development is also underway on a proof-of-concept display that produces hologram-like look-around images visible from a wide viewing angle, again while allowing the observer to see the full resolution of the display from all locations. Development of a high-resolution prototype display of this type has begun.

  2. [Temporal Analysis of Body Sway during Reciprocator Motion Movie Viewing].

    PubMed

    Sugiura, Akihiro; Tanaka, Kunihiko; Wakatabe, Shun; Matsumoto, Chika; Miyao, Masaru

    2016-01-01

    We aimed to investigate the effect of stereoscopic viewing and the degree of awareness of motion sickness on posture by measuring body sway during motion movie viewing. Nineteen students (12 men and 7 women; age range, 21-24 years) participated in this study. The movie, which showed several balls randomly positioned, was projected on a white wall 2 m in front of the subjects through a two-dimensional (2-D)/three-dimensional (3-D) convertible projector. To measure body sway during movie viewing, the subjects stood statically erect on a Wii balance board, with a toe opening of 18 degrees. The study protocol was as follows: the subjects watched (1) a nonmoving movie for 1 minute as the pretest and then (2) a movie moving sinusoidally back and forth in the depth direction for 3 minutes. (3) The initial static movie was shown again for 1 minute. Steps (2) and (3) were treated as one trial, after which two trials (2-D and 3-D movies) were performed in random sequence. In this study, we found that posture changed according to the motion in the movie and that the longer the viewing time, the higher the synchronization accuracy. These tendencies depended on the level of awareness of motion sickness and on whether the 3-D movie was viewed. The mechanism of postural change during movie viewing was not vection but a self-protective response resolving the sensory conflict between visual information (spatial swing) and the sense of equilibrium (motionlessness).

  3. NASA Spacecraft Captures 3-D View of Massive Australian Wildfire

    NASA Image and Video Library

    2013-02-05

    This 3-D view was created from data acquired Feb. 4, 2013, by NASA's Terra spacecraft, showing a massive wildfire that damaged Australia's largest optical astronomy facility, the Siding Spring Observatory.

  4. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors.
The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object should be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand-specimen photographs in a much more fashionable 3D way for future publications or conference posters.
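    The red-cyan anaglyph construction described above reduces to a channel combination: the red channel comes from the left image, green and blue from the right. A minimal sketch with tiny synthetic images (real photographs would be loaded with an imaging library instead):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left image,
    green and blue channels from the right image."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

left  = np.zeros((2, 2, 3), dtype=np.uint8); left[..., 0]  = 200  # reddish
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1:] = 100  # cyanish
ana = make_anaglyph(left, right)
print(ana[0, 0])  # [200 100 100]
```

    Viewed through red-cyan glasses, the red filter passes only the left image and the cyan filter only the right image, which is exactly the eye separation the text describes.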

  5. Wide field-of-view bifocal eyeglasses

    NASA Astrophysics Data System (ADS)

    Barbero, Sergio; Rubinstein, Jacob

    2015-09-01

    When vision is affected simultaneously by presbyopia and myopia or hyperopia, an eyeglass-based solution implies a surface with either segmented focal regions (e.g., bifocal lenses) or a progressive addition profile (PALs). However, both options have the drawback of reducing the field of view for each power position, which restricts the natural eye-head movements of the wearer. To avoid this serious limitation we propose a new solution, essentially a bifocal power-adjustable optical design ensuring a wide field of view for every viewing distance. The optical system is based on the Alvarez principle. Spherical refraction correction is considered for different eccentric gaze directions covering a field of view of up to 45 degrees. Eye movements during convergence on near objects are included. We designed three bifocal systems. The first provides -3 D for far vision (myopic eye) and -1 D for near vision (+2 D addition). The second provides a +3 D addition with -3 D for far vision. The last system is an example of reading glasses with a +1 D power addition.
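    For reference, the addition of a bifocal is simply the near-vision power minus the far-vision power, so a myopic eye with a +2 D addition and a -1 D near zone implies -3 D for far vision:

```latex
\mathrm{Add} = P_{\text{near}} - P_{\text{far}},
\qquad
(-1\,\mathrm{D}) - (-3\,\mathrm{D}) = +2\,\mathrm{D}.
```
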

  6. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization, which is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° range of viewpoints without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined so that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen used in previous approaches, so there is no image jitter, and it provides an inherently parallel mechanism for 3D voxel addressing. High spatial resolution is possible, and a full-color display is easy to implement. The system is low-cost and low-maintenance.

  7. Optical mapping of conduction in early embryonic quail hearts with light-sheet microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ma, Pei; Gu, Shi; Wang, Yves T.; Jenkins, Michael W.; Rollins, Andrew M.

    2016-03-01

    Optical mapping (OM) using fluorescent voltage-sensitive dyes (VSD) to measure membrane potential is currently the most effective method for electrophysiology studies in early embryonic hearts due to its noninvasiveness and large field of view. Conventional OM acquires bright-field images, collecting signals that are integrated in depth and projected onto a 2D plane, and thus does not capture the 3D structure of the sample. Early embryonic hearts, especially at looping stages, have a complicated, tubular geometry. Therefore, conventional OM cannot provide a full picture of the electrical conduction circumferentially around the heart, and may result in incomplete and inaccurate measurements. Here, we demonstrate OM of Hamburger and Hamilton stage 14 embryonic quail hearts using a new commercially available VSD, Fluovolt, and depth sectioning using a custom-built light-sheet microscopy system. The axial and lateral resolution of the system is 14µm and 8µm, respectively. For OM imaging, the field of view was set to 900µm×900µm to cover the entire heart. Time-resolved 2D OM image sets at multiple cross-sections through the looping-stage heart were recorded. The shapes of both atrial and ventricular action potentials acquired were consistent with previous reports using a conventional VSD (di-4-ANEPPS). With Fluovolt, the signal-to-noise ratio (SNR) is improved significantly, by a factor of 2-10 compared with di-4-ANEPPS, enabling light-sheet OM, which intrinsically has lower SNR due to smaller sampling volumes. Electrophysiologic parameters are rate dependent; optical pacing was successfully integrated into the system to ensure heart-rate consistency. This will also enable accurately gated reconstruction of full four-dimensional conduction maps and 3D conduction velocity measurements.

  8. Opportunity View on Sol 397 3-D

    NASA Image and Video Library

    2005-03-17

    On Feb. 19, 2005, NASA's Mars Exploration Rover Opportunity completed a drive of 124 meters (407 feet) across the rippled flatland of the Meridiani Planum region. 3D glasses are necessary to view this image.

  9. Interior building details of Building D, Room DM5: mezzanine hallway, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior building details of Building D, Room D-M5: mezzanine hallway, intact historic asphalt surface flooring, full-height partition wall with hoppers, and east brick retaining wall with voids from the original ceiling joists; southerly view - San Quentin State Prison, Building 22, Point San Quentin, San Quentin, Marin County, CA

  10. Human perception considerations for 3D content creation

    NASA Astrophysics Data System (ADS)

    Green, G. Almont

    2011-03-01

    Observation of and interviews with people viewing autostereoscopic 3D imagery provide evidence that 3D content creation requires many human perception considerations. A study was undertaken in which certain test autostereoscopic imagery was seen to elicit a highly emotional response and engagement, while other test imagery was given only a passing glance. That an image can be viewed with a certain level of stereopsis does not make it compelling. By taking into consideration the manner in which humans perceive depth and the space between objects, 3D content can achieve a level of familiarity and realness that is not possible with single-perspective imagery. When human perception issues are ignored, 3D imagery can be undesirable to viewers, and a negative bias against 3D imagery can develop. The preparation of 3D content is more important than the display technology. Where human perception, as it is used to interpret reality, is not mimicked in the creation of 3D content, the general public typically expresses a negative bias against that imagery (where choices are provided). For some viewers, 3D content that could not exist naturally induces physical discomfort.

  11. Scanning 3D full human bodies using Kinects.

    PubMed

    Tong, Jing; Zhou, Jin; Liu, Ligang; Pan, Zhigeng; Yan, Hao

    2012-04-01

    Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices and thus can easily be acquired by everyday users. However, the depth data captured by a Kinect beyond a certain distance is of extremely low quality. In this paper, we present a novel scanning system for capturing 3D full-body human models using multiple Kinects. To avoid interference phenomena, we use two Kinects to capture the upper and lower parts of a human body, respectively, without an overlapping region. A third Kinect captures the middle part of the body from the opposite direction. We propose a practical approach for registering the various body parts from different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwise. Second, global alignment is performed to distribute errors in the deformation space, which solves the loop-closure problem efficiently. Misalignment caused by complex occlusion is also handled reasonably by our global alignment algorithm. The experimental results show the efficiency and applicability of our system, which obtains impressive results in a few minutes with low-priced devices and is thus practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try-on, and can further facilitate a range of home-oriented virtual reality (VR) applications.
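    Rigid alignment of corresponding point sets is a standard building block underlying registration pipelines like the one described above; a minimal Kabsch/SVD sketch on synthetic points (the paper's actual method additionally handles non-rigid deformation and global loop closure):

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t with dst ≈ src @ R.T + t,
    computed in closed form via SVD of the cross-covariance matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))                  # synthetic scan fragment
theta = 0.3                                     # known rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true                   # transformed fragment
R, t = kabsch(src, dst)
print(np.allclose(R, R_true))  # True
```

    In practice the correspondences are unknown, so this solve is iterated inside an ICP-style loop; non-rigid registration replaces the single (R, t) with per-region deformations.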

  12. 3D Viewing: Odd Perception - Illusion? reality? or both?

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Iizasa, K.

    2008-12-01

    We live in three-dimensional space, don't we? It could be at least four dimensions, but that is another story. In either case, our capability for 3D viewing is constrained by our intrinsically 2D perception. I carried out a few visual experiments using topographic data to show our intrinsic (or biological) shortcomings in 3D recognition of our world. The results of the experiments suggest: (1) a 3D surface model displayed on a 2D computer screen (or paper) always has two interpretations of the 3D surface geometry; if we choose one of the two (in other words, if we are hooked by one perception), we maintain that perception even as the 3D model changes its viewing perspective over time on the screen. (2) More interesting is that a real 3D solid object (e.g., made of clay) also admits the two interpretations of its geometry if we observe the object with one eye. The most famous example of this viewing illusion comes from a magician who died in 2007, Jerry Andrus, who made a super-cool paper-crafted dragon that causes a visual illusion for a one-eyed viewer. Through these experiments, I confirmed this phenomenon in another perceptually persuasive (deceptive?) way. My conclusion is that this illusion is intrinsic, i.e., reality for humans: even though we live in 3D space, our perceptual tools (eyes) are composed of 2D sensors whose information is reconstructed into 3D by our experience-based brain. So, (3) when we observe a 3D surface model on the computer screen, we are always one eye short, even if we use both eyes. One last suggestion from my experiments is that recent, highly sophisticated 3D models might contain more information than human perception can handle properly; i.e., we might not be understanding the 3D world (geospace) at all, but merely experiencing an illusion.
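    The two-interpretation ambiguity described above has a simple numerical counterpart: under an orthographic (single, distant eye) projection model, a surface and its depth-mirrored counterpart produce exactly the same 2D image. A toy demonstration, not the author's experiment:

```python
import numpy as np

# A toy 3D "surface" and its depth-inverted counterpart
rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
mirrored = pts * np.array([1.0, 1.0, -1.0])  # flip depth about the z=0 plane

def orthographic(p):
    """Drop the depth axis: what a single distant eye receives."""
    return p[:, :2]

# Both surfaces yield an identical 2D image, hence two valid 3D readings
print(np.array_equal(orthographic(pts), orthographic(mirrored)))  # True
```

    Binocular disparity or motion parallax breaks this tie, which is why the illusion is strongest with one eye or a static monocular image.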

  13. View-invariant gait recognition method by three-dimensional convolutional neural network

    NASA Astrophysics Data System (ADS)

    Xing, Weiwei; Li, Ying; Zhang, Shunli

    2018-01-01

    Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, a 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously from normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to address the problem of limited gait training samples. We choose C3D as the base model, pretrained on Sports-1M, and then fine-tune it to adapt it to gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and Euclidean distance to measure the similarity of gait sequences. Extensive experiments are carried out on the CASIA-B dataset, and the experimental results demonstrate that our method outperforms many other methods.
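    The recognition stage described above, feature extraction followed by Euclidean nearest-neighbour matching, can be sketched with random stand-in vectors; no actual 3DCNN is involved, and the IDs and dimensions are hypothetical:

```python
import numpy as np

def identify(probe_feat, gallery_feats, gallery_ids):
    """Nearest-neighbour identification: return the gallery ID whose
    feature vector has the smallest Euclidean distance to the probe."""
    d = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return gallery_ids[int(np.argmin(d))]

rng = np.random.default_rng(2)
gallery = rng.normal(size=(5, 128))        # one 128-d feature per subject
ids = np.array([10, 11, 12, 13, 14])
probe = gallery[3] + 0.01 * rng.normal(size=128)  # noisy view of subject 13
print(identify(probe, gallery, ids))  # 13
```

    In the paper's setting, `gallery` and `probe` would be features extracted by the fine-tuned C3D model from silhouette sequences captured at different viewpoints.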

  14. 3D-shape recognition and size measurement of irregular rough particles using multi-views interferometric out-of-focus imaging.

    PubMed

    Ouldarbi, L; Talbi, M; Coëtmellec, S; Lebrun, D; Gréhan, G; Perret, G; Brunel, M

    2016-11-10

    We realize simplified-tomography experiments on irregular rough particles using interferometric out-of-focus imaging. Using two angles of view, we determine the global 3D-shape, the dimensions, and the 3D-orientation of irregular rough particles whose morphologies belong to families such as sticks, plates, and crosses.

  15. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    ERIC Educational Resources Information Center

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experiment results as an educational reference for instructors to help student obtain a better way to learn orthographic views in graphical course. A visual experiment was held to explore the comprehensive differences between 2D static and 3D animation object features; the goal was to reduce the possible misunderstanding…

  16. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
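
    Programmatic access can be illustrated with a short sketch. Only the base URL, the five-range limit, and the JSON output format are stated in the abstract; the query-parameter names below are hypothetical, not the documented API:

```python
import json
from urllib.parse import urlencode

BASE_URL = "http://rna.bgsu.edu/r3d-2-msa"  # from the abstract; endpoint layout assumed

def build_query(pdb_id, chain, ranges):
    # Assemble a query URL for up to five nucleotide ranges.
    # 'pdb', 'units', and 'format' are illustrative parameter names.
    assert len(ranges) <= 5, "the server accepts at most five ranges"
    units = ",".join(f"{chain}:{lo}-{hi}" for lo, hi in ranges)
    return BASE_URL + "?" + urlencode({"pdb": pdb_id, "units": units, "format": "json"})

def count_distinct_variants(json_text):
    # Count distinct sequence variants in a (mock) JSON response body.
    data = json.loads(json_text)
    return len(set(data.get("variants", [])))
```

    Resubmitting the returned URL re-runs the search, matching the abstract's note that the output URL encodes the query.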

  17. 3-Dimensional shear wave elastography of breast lesions

    PubMed Central

    Chen, Ya-ling; Chang, Cai; Zeng, Wei; Wang, Fen; Chen, Jia-jian; Qu, Ning

    2016-01-01

    Color patterns of 3-dimensional (3D) shear wave elastography (SWE) have recently shown promise for differentiating tumoral nodules. This study evaluated the diagnostic accuracy of 3D SWE color patterns in breast lesions, with special emphasis on coronal planes. A total of 198 consecutive women with 198 breast lesions (125 malignant and 73 benign) were included, who underwent conventional ultrasound (US), 3D B-mode, and 3D SWE before surgical excision. SWE color patterns of Views A (transverse), T (sagittal), and C (coronal) were determined. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were calculated. The distribution of SWE color patterns was significantly different between malignant and benign lesions (P = 0.001). In malignant lesions, "Stiff Rim" was significantly more frequent in View C (crater sign, 60.8%) than in View A (51.2%, P = 0.013) and View T (54.1%, P = 0.035). The AUC for the combination of "Crater Sign" and conventional US was significantly higher than for View A (0.929 vs 0.902, P = 0.004) and View T (0.929 vs 0.907, P = 0.009), and specificity significantly increased (90.4% vs 78.1%, P = 0.013) without significant change in sensitivity (85.6% vs 88.0%, P = 0.664) as compared with conventional US. In conclusion, combining conventional US with 3D SWE color patterns significantly increased diagnostic accuracy, with the "Crater Sign" in the coronal plane of the highest value.
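
    The sensitivity and specificity figures above follow directly from confusion counts. A minimal sketch; the counts in the usage note are back-calculated from the reported percentages (125 malignant, 73 benign) for illustration, since the paper's raw counts are not given here:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)
    return tp / (tp + fn), tn / (tn + fp)
```

    For the combined method, 107/125 true positives gives a sensitivity of 85.6%, and 66/73 true negatives gives a specificity of about 90.4%.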

  18. Mars Odyssey Seen by Mars Global Surveyor 3-D

    NASA Image and Video Library

    2005-05-19

    This stereoscopic picture of NASA Mars Odyssey spacecraft was created from two views of that spacecraft taken by the Mars Orbiter Camera on NASA Mars Global Surveyor. 3D glasses are necessary to view this image.

  19. Evaluation of the relationship between mandibular third molar and mandibular canal by different algorithms of cone-beam computed tomography.

    PubMed

    Mehdizadeh, Mojdeh; Ahmadi, Navid; Jamshidi, Mahsa

    2014-11-01

    The exact location of the inferior alveolar nerve (IAN) bundle is very important. The aim of this study was to evaluate the relationship between the mandibular third molar and the mandibular canal by cone-beam computed tomography (CBCT). This was a cross-sectional study with convenience sampling: 94 mandibular CBCT scans performed with a CSANEX 3D machine (Soredex, Finland) were chosen. The vertical and horizontal relationships between the mandibular canal and the third molar were depicted in the 3D view, the panoramic reformat view, and the cross-sectional view of CBCT. The cross-sectional view served as the gold standard against which the other views were evaluated. There were significant differences in the vertical and horizontal relation of nerve and tooth among the views (p < 0.001). The results showed differences in the apparent position of the inferior alveolar nerve across CBCT views, so CBCT images are not entirely reliable and carry a possibility of error.

  20. WPC Medium-Range Forecasts (Days 3-7)

    Science.gov Websites

    Final Day 3 through Day 7 Fronts and Pressures forecasts for the CONUS are provided in black-and-white and full-color versions; the Northern Hemispheric view is updated once daily at 1900Z.

  1. Glasses-free large size high-resolution three-dimensional display based on the projector array

    NASA Astrophysics Data System (ADS)

    Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong

    2014-11-01

    To realize natural three-dimensional (3D) video display without eyewear, similar to real life, a huge amount of 3D spatial information is normally required to increase the number of views and provide smooth motion parallax. However, the minimum 3D information needed by the eyes should be used, to reduce the requirements on display devices and processing time. For a 3D display with smooth motion parallax, similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the eye's pupil at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems (rear-projection and front-projection) are presented based on space multiplexing, with a micro-projector array and specially designed 3D diffuser screens larger than 1.8 m × 1.2 m. The displayed clear depth exceeds 1.5 m. The flexibility of digitized recording and reconstruction based on the 3D diffuser screen relieves the limitations of conventional 3D display technologies and can realize fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.

  2. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present optical characteristics of the view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light from the display panel pixels is numerically simulated through the PB slits to the viewing zone. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that the Fresnel number can be used as the main parameter for view-image quality evaluation, determining the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 maximizes the brightness of the view images, while values of 0.4∼0.5 minimize image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
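
    The role of the Fresnel number can be made concrete with a small sketch. We assume the common half-aperture convention N_F = a²/(λL), with a the slit half-aperture, λ the wavelength, and L the propagation distance; the numbers in the usage note are illustrative, not the paper's parameters:

```python
import math

def fresnel_number(half_aperture, wavelength, distance):
    # N_F = a^2 / (lambda * L); all lengths in metres
    return half_aperture ** 2 / (wavelength * distance)

def aperture_for_fresnel(n_f, wavelength, distance):
    # Invert the relation: slit half-aperture that yields a target N_F
    return math.sqrt(n_f * wavelength * distance)
```

    For example, at λ = 550 nm and L = 0.6 m, N_F ≈ 0.7 corresponds to a half-aperture of roughly 0.5 mm.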

  3. Disparity profiles in 3DV applications: overcoming the issue of heterogeneous viewing conditions in stereoscopic delivery

    NASA Astrophysics Data System (ADS)

    Boisson, Guillaume; Chamaret, Christel

    2012-03-01

    More and more 3D movies are released each year, and thanks to the current spread of 3D-TV displays, this 3D video (3DV) content is about to enter homes massively. Yet viewing conditions determine the stereoscopic features achievable for 3DV material. Because the conditions at home (screen size and distance to screen) differ significantly from a theater, 3D cinema movies need to be repurposed before broadcast or replication on 3D Blu-ray Discs to be fully enjoyed at home. In this paper we tackle the issue of how to handle the variety of viewing conditions in stereoscopic content delivery. To that end, we first investigate what is at stake in granting stereoscopic viewers' comfort, through the well-known, and sometimes dispraised, vergence-accommodation conflict. We thereby define a set of basic rules that can serve as guidelines for 3DV creation, and we propose disparity profiles as new requirements for 3DV production and repurposing. Meeting the proposed background and foreground constraints prevents visual fatigue, while occupying the whole available depth budget grants optimal 3D effects. We present an efficient algorithm for automatic disparity-based 3DV retargeting depending on the viewing conditions, with variants depending on the input format (stereoscopic binocular content or a depth-based format) and the achievable level of complexity.

  4. 3D Spatial and Spectral Fusion of Terrestrial Hyperspectral Imagery and Lidar for Hyperspectral Image Shadow Restoration Applied to a Geologic Outcrop

    NASA Astrophysics Data System (ADS)

    Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.

    2016-12-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.
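
    The single-scale-factor restoration mentioned above can be sketched as follows. This is one plausible reading of the ratio-based approach, not the paper's exact procedure: the lit reference pixel is assumed to be of similar material (chosen, for instance, by matching the sun-independent lidar intensity), and the whole shadowed spectrum is scaled to agree with it at the band overlapping the lidar wavelength:

```python
def restore_shadow(shadow_spec, lit_ref_spec, lidar_band):
    # Scale factor from the band that overlaps the lidar wavelength,
    # then apply it across the full shadowed spectrum.
    k = lit_ref_spec[lidar_band] / shadow_spec[lidar_band]
    return [v * k for v in shadow_spec]
```

    A spectrum shadowed by a uniform attenuation is recovered exactly under this model; real shadows are only approximately uniform across wavelength, which is why the paper quantifies performance via classification consistency.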

  5. Java Radar Analysis Tool

    NASA Technical Reports Server (NTRS)

    Zaczek, Mariusz P.

    2005-01-01

    Java Radar Analysis Tool (JRAT) is a computer program for analyzing two-dimensional (2D) scatter plots derived from radar returns showing pieces of the disintegrating Space Shuttle Columbia. JRAT can also be applied to similar plots representing radar returns showing aviation accidents, and to scatter plots in general. The 2D scatter plots include overhead map views and side altitude views. The superposition of points in these views makes searching difficult. JRAT enables three-dimensional (3D) viewing: by use of a mouse and keyboard, the user can rotate to any desired viewing angle. The 3D view can include overlaid trajectories and search footprints to enhance situational awareness in searching for pieces. JRAT also enables playback: time-tagged radar-return data can be displayed in time order and an animated 3D model can be moved through the scene to show the locations of the Columbia (or other vehicle) at the times of the corresponding radar events. The combination of overlays and playback enables the user to correlate a radar return with a position of the vehicle to determine whether the return is valid. JRAT can optionally filter single radar returns, enabling the user to selectively hide or highlight a desired radar return.

  6. Higher Dimensional Spacetimes for Visualizing and Modeling Subluminal, Luminal and Superluminal Flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froning, H. David; Meholic, Gregory V.

    2010-01-28

    This paper briefly explores higher dimensional spacetimes that extend Meholic's visualizable, fluidic views of subluminal-luminal-superluminal flight, gravity, inertia, light quanta, and electromagnetism from 2-D to 3-D representations. Although 3-D representations have the potential to better model features of Meholic's most fundamental entities (the Transluminal Energy Quantum) and of the zero-point quantum vacuum that pervades all space, the more complex 3-D representations lose some of the clarity of Meholic's 2-D representations of the subluminal and superluminal realms. Much new work would therefore be needed to replace Meholic's 2-D views of reality with 3-D ones.

  7. Spirit Lookout Panorama in 3-D

    NASA Image and Video Library

    2005-05-23

    This is a stereoscopic version of NASA Mars Exploration Rover Spirit Lookout panorama, acquired on Feb. 27 to Mar. 2, 2005. The view is from a position known informally as Larry Lookout. 3D glasses are necessary to view this image.

  8. 7. VIEW TO NORTH. FROM WEST PLATFORM. SAME AS IL1D3, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VIEW TO NORTH. FROM WEST PLATFORM. SAME AS IL-1D-3, AFTER TRAIN HAS DEPARTED EAST PLATFORM. - Union Elevated Railroad, Randolph-Wabash Avenue Station, Randolph Street & Wabash Avenue, Chicago, Cook County, IL

  9. Development of scanning holographic display using MEMS SLM

    NASA Astrophysics Data System (ADS)

    Takaki, Yasuhiro

    2016-10-01

    Holography is an ideal three-dimensional (3D) display technique, because it produces 3D images that naturally satisfy human 3D perception, including its physiological and psychological factors. However, its electronic implementation is quite challenging, because ultra-high resolution is required of the display devices to provide sufficient screen size and viewing zone. We have developed holographic display techniques that enlarge the screen size and the viewing zone by use of microelectromechanical-systems spatial light modulators (MEMS-SLMs). Because MEMS-SLMs can generate hologram patterns at a high frame rate, time multiplexing is utilized to virtually increase the resolution. Three kinds of scanning systems have been combined with MEMS-SLMs: the screen scanning system, the viewing-zone scanning system, and the 360-degree scanning system. The screen scanning system reduces the hologram size to enlarge the viewing zone, and the reduced hologram patterns are scanned over the screen to increase the screen size: a color display system with a screen size of 6.2 in. and a viewing-zone angle of 11° was demonstrated. The viewing-zone scanning system increases the screen size, and the reduced viewing zone is scanned to enlarge the viewing zone: a screen size of 2.0 in. and a viewing-zone angle of 40° were achieved, and a two-channel system increased the screen size to 7.4 in. The 360-degree scanning system increases the screen size and scans the reduced viewing zone circularly: a display system with a flat screen 100 mm in diameter was demonstrated, which generates 3D images viewable from any direction around the flat screen.

  10. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size, and internode distance.

  11. Rapid 3D Refractive‐Index Imaging of Live Cells in Suspension without Labeling Using Dielectrophoretic Cell Rotation

    PubMed Central

    Habaza, Mor; Kirschbaum, Michael; Guernth‐Marschner, Christian; Dardikman, Gili; Barnea, Itay; Korenstein, Rafi; Duschl, Claus

    2016-01-01

    A major challenge in the field of optical imaging of live cells is achieving rapid, 3D, and noninvasive imaging of isolated cells without labeling. If successful, many clinical procedures involving analysis and sorting of cells drawn from body fluids, including blood, can be significantly improved. A new label‐free tomographic interferometry approach is presented. This approach provides rapid capturing of the 3D refractive‐index distribution of single cells in suspension. The cells flow in a microfluidic channel, are trapped, and then rapidly rotated by dielectrophoretic forces in a noninvasive and precise manner. Interferometric projections of the rotated cell are acquired and processed into the cellular 3D refractive‐index map. Uniquely, this approach provides full (360°) coverage of the rotation angular range around any axis, and knowledge on the viewing angle. The experimental demonstrations presented include 3D, label‐free imaging of cancer cells and three types of white blood cells. This approach is expected to be useful for label‐free cell sorting, as well as for detection and monitoring of pathological conditions resulting in cellular morphology changes or occurrence of specific cell types in blood or other body fluids. PMID:28251046

  12. [Evaluation of echocardiographic left ventricular wall motion analysis supported by internet picture viewing system].

    PubMed

    Hirano, Yutaka; Ikuta, Shin-Ichiro; Nakano, Manabu; Akiyama, Seita; Nakamura, Hajime; Nasu, Masataka; Saito, Futoshi; Nakagawa, Junichi; Matsuzaki, Masashi; Miyazaki, Shunichi

    2007-02-01

    Assessment of deterioration of regional wall motion by echocardiography is not only subjective but also suffers from poor interobserver agreement. Progress in digital communication technology has made it possible to send video images from a distant location via the Internet. The possibility of evaluating left ventricular wall motion using video images sent via the Internet to distant institutions was evaluated. Twenty-two subjects were randomly selected. Four sets of video images (parasternal long-axis view, parasternal short-axis view, apical four-chamber view, and apical two-chamber view) were taken for one cardiac cycle. The images were sent via the Internet to two institutions (observer C in facility A and observers D and E in facility B) for evaluation; great care was taken to prevent disclosure of patient information to these observers. Parasternal long-axis images were divided into four segments, and the parasternal short-axis, apical four-chamber, and apical two-chamber views were divided into six segments each. One of the following assessments, normokinesis, hypokinesis, akinesis, or dyskinesis, was assigned to each segment. The interobserver agreement rates between observers C and D, D and E, and C and E, and the intraobserver agreement rate (for observer D), were calculated. The rate of interobserver agreement was 85.7% (394/460 segments; Kappa = 0.65) between observers C and D, 76.7% (353/460 segments; Kappa = 0.39) between observers D and E, and 76.3% (351/460 segments; Kappa = 0.36) between observers C and E, and intraobserver agreement was 94.3% (434/460; Kappa = 0.86). Disagreements between observers C and D were distributed as normokinesis-hypokinesis, 62.1%; hypokinesis-akinesis, 33.3%; akinesis-dyskinesis, 3.0%; and normokinesis-akinesis, 1.5%. Wall motion can thus be evaluated at remote institutions via the Internet.
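
    The kappa statistics reported above follow Cohen's standard formula κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal sketch over two raters' per-segment labels (the labels in the usage note are illustrative):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    # kappa = (p_o - p_e) / (1 - p_e)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n       # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n ** 2        # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

    For instance, two raters labelling four segments as ["N", "N", "H", "A"] and ["N", "H", "H", "A"] agree on 3/4 segments, giving κ = 7/11 ≈ 0.64.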

  13. Real-time free-viewpoint DIBR for large-size 3DLED

    NASA Astrophysics Data System (ADS)

    Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru

    2017-10-01

    Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology, the most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen that influences the rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on GPU, and a vivid 3D experience is perceived.
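
    The paper's modified DIBR is not reproduced here, but the generic core of any DIBR step, forward-warping texture pixels by a disparity derived from per-pixel depth, can be sketched for a single scanline. The linear disparity model and its constants are illustrative assumptions:

```python
def dibr_scanline(texture, depth, k=1.0, z0=0):
    # Forward-warp one scanline of texture by a disparity proportional
    # to (depth - z0). Holes (disoccluded pixels) are left as None for
    # later inpainting; later writes overwrite earlier ones, where a
    # real renderer would use a z-buffer to resolve occlusions.
    w = len(texture)
    out = [None] * w
    for x in range(w):
        nx = x + int(round(k * (depth[x] - z0)))
        if 0 <= nx < w:
            out[nx] = texture[x]
    return out
```

    With a constant depth map the view is unchanged; a depth step shifts only the foreground pixels and opens a hole behind them.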

  14. The impact of crosstalk on three-dimensional laparoscopic performance and workload.

    PubMed

    Sakata, Shinichiro; Grove, Philip M; Watson, Marcus O; Stevenson, Andrew R L

    2017-10-01

    This is the first study to explore the effects of crosstalk from 3D laparoscopic displays on technical performance and workload. We studied crosstalk at magnitudes that may have been tolerated during laparoscopic surgery. Participants were 36 voluntary doctors. To minimize floor effects, participants completed their surgery rotations, and a laparoscopic suturing course for surgical trainees. We used a counterbalanced, within-subjects design in which participants were randomly assigned to complete laparoscopic tasks in one of six unique testing sequences. In a simulation laboratory, participants were randomly assigned to complete laparoscopic 'navigation in space' and suturing tasks in three viewing conditions: 2D, 3D without ghosting and 3D with ghosting. Participants calibrated their exposure to crosstalk as the maximum level of ghosting that they could tolerate without discomfort. The Randot® Stereotest was used to verify stereoacuity. The study performance metric was time to completion. The NASA TLX was used to measure workload. Normal threshold stereoacuity (40-20 second of arc) was verified in all participants. Comparing optimal 3D with 2D viewing conditions, mean performance times were 2.8 and 1.6 times faster in laparoscopic navigation in space and suturing tasks respectively (p< .001). Comparing optimal 3D with suboptimal 3D viewing conditions, mean performance times were 2.9 times faster in both tasks (p< .001). Mean workload in 2D was 1.5 and 1.3 times greater than in optimal 3D viewing, for navigation in space and suturing tasks respectively (p< .001). Mean workload associated with suboptimal 3D was 1.3 times greater than optimal 3D in both laparoscopic tasks (p< .001). There was no significant relationship between the magnitude of ghosting score, laparoscopic performance and workload. Our findings highlight the advantages of 3D displays when used optimally, and their shortcomings when used sub-optimally, on both laparoscopic performance and workload.

  15. Aberration analyses for improving the frontal projection three-dimensional display.

    PubMed

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Wang, Peng; Cao, Xuemei; Sun, Lei; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua

    2014-09-22

    Crosstalk severely affects the viewing experience of auto-stereoscopic 3D displays based on a frontal-projection lenticular sheet. Unclear stereo vision and ghosting are observed in the marginal viewing zones (MVZs); to suppress them, the aberration of the lenticular sheet combined with the frontal projector is analyzed and the design optimized. Theoretical and experimental results show that increasing the radius of curvature (ROC) or decreasing the aperture of the lenticular sheet can suppress the aberration and reduce the crosstalk. A projector array with 20 micro-projectors is used to frontally project 20 parallax images onto a lenticular sheet with an ROC of 10 mm and a size of 1.9 m × 1.2 m. A high-quality 3D image is experimentally demonstrated in both the mid-viewing zone and the MVZs of the optimal viewing plane, and a clear 3D depth of 1.2 m can be perceived. To provide an excellent 3D image and enlarge the field of view at the same time, a novel lenticular sheet structure is presented to reduce aberration, and the crosstalk is well suppressed.

  16. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ˜0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ˜10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
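
    Target registration error as used above is the 3D distance between target points mapped by the estimated pose and by the ground-truth pose. A minimal sketch, using a 3×4 rigid-transform representation chosen here for illustration:

```python
import math

def apply_rigid(T, p):
    # T: 3x4 matrix (rotation | translation) applied to 3-vector p
    return [T[i][0] * p[0] + T[i][1] * p[1] + T[i][2] * p[2] + T[i][3]
            for i in range(3)]

def target_registration_error(T_est, T_true, targets):
    # Mean 3D distance between targets mapped by estimated vs. true pose
    dists = [math.dist(apply_rigid(T_est, p), apply_rigid(T_true, p))
             for p in targets]
    return sum(dists) / len(dists)
```

    A pure 1 mm translation error between the two poses, for example, yields a TRE of exactly 1 mm regardless of the target positions.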

  17. Photoacoustic endoscopy probe using a coherent fibre-optic bundle and Fabry-Pérot ultrasound sensor (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ansari, Rehman; Beard, Paul C.; Zhang, Edward Z.; Desjardins, Adrien E.

    2016-03-01

    There is considerable interest in the development of photoacoustic endoscopy (PAE) probes for the clinical assessment of pathologies in the gastrointestinal (GI) tract, guiding minimally invasive laparoscopic surgeries and applications in foetal medicine. However, most previous PAE probes integrate mechanical scanners and piezoelectric transducers at the distal end which can be technically complex, expensive and pose challenges in achieving the necessary level of miniaturisation. We present two novel all-optical forward-viewing endoscopic probes operating in widefield tomography mode that have the potential to overcome these limitations. In one configuration, the probe comprises a transparent 40 MHz Fabry-Pérot ultrasound sensor deposited at the tip of a rigid, 3 mm diameter coherent fibre-optic bundle. In this way, the distal end of coherent fibre bundle acts as a 2D array of wideband ultrasound detectors. In another configuration, an optical relay is used between the distal end face of flexible fibre bundle and the Fabry-Pérot sensor to enlarge the lateral field of view to 6 mm x 6 mm. In both configurations, the pulsed excitation laser beam is full-field coupled into the fibre bundle at the proximal end for uniform backward-mode illumination of the tissue at the probe tip. In order to record the photoacoustic waves arriving at the probe tip, the proximal end of the fibre bundle is optically scanned in 2D with a CW wavelength-tunable interrogation laser beam thereby interrogating different spatial points on the sensor. A time-reversal image reconstruction algorithm was used to reconstruct a 3D image from the detected signals. The 3D field of view of the flexible PAE probe is 6 mm x 6 mm x 6 mm and the axial and lateral spatial resolution is 30 µm and 90 µm, respectively. 3D imaging capability is demonstrated using tissue phantoms, ex vivo tissues and in vivo. 
To the best of our knowledge, this is the first forward-viewing implementation of a photoacoustic endoscopy probe, and it offers several advantages over previous distal-end scanning probes. These include a high degree of miniaturisation, no moving parts at the distal end and simple and inexpensive fabrication with the potential to realise disposable probes for clinical imaging of the GI tract and other minimally invasive applications.

  18. Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1985-01-01

    A technique for the recognition of complex three-dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants, algebraic expressions that remain invariant under changes in the 3-D objects' orientation and location in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we extend this method to the representation of more complex objects. Two complex objects are represented digitally; their 3-D moment invariants are calculated, and then the invariance of these 3-D moment expressions is verified by changing the orientation and the location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
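The invariance property described above can be illustrated with a minimal sketch (not the paper's code, and a point-cloud object model assumed for simplicity): the simplest second-order 3-D moment invariant is J1 = μ200 + μ020 + μ002, the trace of the central second-moment tensor, which is unchanged by rotation and translation of the object.

```python
import numpy as np

def central_moments_2nd(points):
    """3x3 central second-moment tensor of an Nx3 point cloud."""
    centered = points - points.mean(axis=0)   # translation invariance via centering
    return centered.T @ centered / len(points)

rng = np.random.default_rng(0)
obj = rng.normal(size=(500, 3)) * [3.0, 1.0, 0.5]   # anisotropic toy "object"

# Apply a random rotation (orthogonal Q from QR decomposition) plus a translation.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
moved = obj @ q.T + [10.0, -4.0, 2.0]

# The trace of the moment tensor is a rotation invariant: tr(Q M Q^T) = tr(M).
j1_orig = np.trace(central_moments_2nd(obj))
j1_moved = np.trace(central_moments_2nd(moved))
print(abs(j1_orig - j1_moved) < 1e-9)   # invariant under the pose change
```

Higher-order invariants used for discriminating complex shapes are built analogously from third-order moments, but the verification procedure (recompute after re-posing, compare) is the same.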

  19. Integrating Instrumental Data Provides the Full Science in 3D

    NASA Astrophysics Data System (ADS)

    Turrin, M.; Boghosian, A.; Bell, R. E.; Frearson, N.

    2017-12-01

    Looking at data sparks questions, discussion and insights. By integrating multiple data sets we deepen our understanding of how cryosphere processes operate. Field collected data provide measurements from multiple instruments supporting rapid insights. Icepod provides a platform focused on the integration of multiple instruments. Over the last three seasons, the ROSETTA-Ice project has deployed Icepod to comprehensively map the Ross Ice Shelf, Antarctica. This integrative data collection along with new methods of data visualization allows us to answer questions about ice shelf structure and evolution that arise during data processing and review. While data are vetted and archived in the field to confirm instruments are operating, upon return to the lab data are again reviewed for accuracy before full analysis. Recent review of shallow ice radar data from the Beardmore Glacier, an outlet glacier into the Ross Ice Shelf, presented an abrupt discontinuity in the ice surface. This sharp 8m surface elevation drop was originally interpreted as a processing error. Data were reexamined, integrating the simultaneously collected shallow and deep ice radar with lidar data. All the data sources showed the surface discontinuity, confirming the abrupt 8m drop in surface elevation. Examining high resolution WorldView satellite imagery revealed a persistent source for these elevation drops. The satellite imagery showed that this tear in the ice surface was only one piece of a larger pattern of "chatter marks" in ice that flows at a rate of 300 m/yr. The markings are buried over a distance of 30 km or after 100 years of travel down Beardmore Glacier towards the front of the Ross Ice Shelf. Using Icepod's lidar and cameras we map this chatter mark feature in 3D to reveal its full structure. We use digital elevation models from WorldView to map the other along flow chatter marks. 
To investigate the relationship between these surface features and basal crevasses, we use the deep ice radar to build a 3D model of the base of the ice shelf. The high resolution imagery and radar echograms, along with a VR experience of our 3D models, allow viewers to fully explore the dataset and gain insight into the processes producing these features.

  20. Effective count rates for PET scanners with reduced and extended axial field of view

    NASA Astrophysics Data System (ADS)

    MacDonald, L. R.; Harrison, R. L.; Alessio, A. M.; Hunter, W. C. J.; Lewellen, T. K.; Kinahan, P. E.

    2011-06-01

    We investigated the relationship between noise equivalent count (NEC) and axial field of view (AFOV) for PET scanners with AFOVs ranging from one-half to twice those of current clinical scanners. PET scanners with longer or shorter AFOVs could fulfill different clinical needs depending on exam volumes and site economics. Using previously validated Monte Carlo simulations, we modeled true, scattered and random coincidence counting rates for a PET ring diameter of 88 cm with 2, 4, 6, and 8 rings of detector blocks (AFOV 7.8, 15.5, 23.3, and 31.0 cm). Fully 3D acquisition mode was compared to full collimation (2D) and partial collimation (2.5D) modes. Counting rates were estimated for a 200 cm long version of the 20 cm diameter NEMA count-rate phantom and for an anthropomorphic object based on a patient scan. We estimated the live-time characteristics of the scanner from measured count-rate data and applied that estimate to the simulated results to obtain NEC as a function of object activity. We found that NEC increased quadratically with AFOV in 3D mode and linearly in 2D mode. Partial collimation provided the highest overall NEC on the 2-block system and fully 3D mode provided the highest NEC on the 8-block system for clinically relevant activities. On the 4- and 6-block systems, 3D-mode NEC was highest up to ~300 MBq in the anthropomorphic phantom, above which 3D NEC dropped rapidly, and 2.5D NEC was highest. Projected total scan time to achieve NEC-density that matches current clinical practice in a typical oncology exam averaged 9, 15, 24, and 61 min for the 8-, 6-, 4-, and 2-block ring systems when using optimal collimation. Increasing the AFOV should provide a greater than proportional increase in NEC, potentially benefiting patient throughput-to-cost ratio. 
Conversely, by using appropriate collimation, a two-ring (7.8 cm AFOV) system could acquire whole-body scans achieving NEC-density levels comparable to current standards within long, but feasible, scan times.
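The quadratic-versus-linear scaling reported above can be illustrated with a toy counting-geometry sketch (an illustration of the scaling argument only, not the validated Monte Carlo model of the study): with N detector rings, fully 3D mode accepts coincidences between all N x N ring pairs, while 2D collimation restricts acceptance to roughly the N in-plane combinations.

```python
# Toy model: accepted detector-ring pair combinations per acquisition mode.
# Sensitivity per unit activity is roughly proportional to accepted pairs,
# hence quadratic (3D) vs linear (2D) growth of trues with axial extent.
def ring_pairs(n_rings, mode):
    if mode == "3D":
        return n_rings * n_rings   # every ring may pair with every ring
    if mode == "2D":
        return n_rings             # direct, in-plane planes only
    raise ValueError(mode)

for n in (2, 4, 6, 8):             # block-ring counts from the study
    print(n, ring_pairs(n, "2D"), ring_pairs(n, "3D"))
```

Doubling the AFOV in this toy model doubles the 2D acceptance but quadruples the 3D acceptance, matching the direction of the reported NEC trends (the real behavior also depends on randoms, scatter and dead time, which the toy model ignores).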

  1. Sci-Vis Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur Bleeker, PNNL

    2015-03-11

    SVF is a full-featured OpenGL 3D framework that allows for rapid creation of complex visualizations. The SVF framework handles much of the lifecycle and complex tasks required for a 3D visualization. Unlike a game framework, SVF was designed to use fewer resources, work well in a windowed environment, and only render when necessary. The scene also takes advantage of multiple threads to free up the UI thread as much as possible. Shapes (actors) in the scene are created by adding or removing functionality (through support objects) during runtime. This allows a highly flexible and dynamic means of creating highly complex actors without the code complexity (it also helps overcome the lack of multiple inheritance in Java). All classes are highly customizable and there are abstract classes which are intended to be subclassed to allow a developer to create more complex and highly performant actors. There are multiple demos included in the framework to help the developer get started; they show off nearly all of the functionality. Some simple shapes (actors) are already created for you, such as text, bordered text, radial text, text area, complex paths, NURBS paths, cube, disk, grid, plane, geometric shapes, and volumetric area. It also comes with various camera types for viewing that can be dragged, zoomed, and rotated. Picking or selecting items in the scene can be accomplished in various ways depending on your needs (raycasting or color picking). The framework currently has functionality for tooltips, animation, actor pools, color gradients, 2D physics, text, 1D/2D/3D textures, children, blending, clipping planes, view frustum culling, custom shaders, and custom actor states.

  2. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system presented exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve uniformity of image brightness at a viewing zone. The eye tracking that monitors positions of a viewer's eyes enables pixel data control software to turn on only pixels for view images near the viewer's eyes (the other pixels turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the correct images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiviews for motion parallax under eye tracking. More importantly, we demonstrate substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that make it difficult for previous technologies for a multiview autostereoscopic 3D display to replace an eyewear-assisted counterpart.
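The pixel-size/slit-aperture ratio tuned above operates on standard parallax-barrier geometry. As context, a minimal sketch of the generic textbook design equations (not the engineered barrier of this paper; the parameter values are illustrative): with sub-pixel pitch p, n views, design viewing distance L and eye separation e, similar triangles give the barrier-to-pixel gap and the slit pitch.

```python
# Generic parallax-barrier design (textbook geometry, hedged illustration):
#   gap between barrier and pixel plane:  g  = p * L / e
#   barrier slit pitch:                   Pb = n * p * L / (L + g)
# Pb is slightly smaller than n*p so that all n viewing zones converge
# at the design distance L rather than diverging.
def barrier_design(p_mm, n_views, L_mm, e_mm=65.0):
    g = p_mm * L_mm / e_mm
    pb = n_views * p_mm * L_mm / (L_mm + g)
    return g, pb

# Illustrative numbers: 0.1 mm sub-pixels, 4 views, 600 mm viewing distance.
g, pb = barrier_design(p_mm=0.1, n_views=4, L_mm=600.0)
print(round(g, 3), round(pb, 4))
```

Varying the slit aperture relative to p (as the paper does) trades brightness uniformity against crosstalk on top of this fixed pitch geometry.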

  3. Dual-view-zone tabletop 3D display system based on integral imaging.

    PubMed

    He, Min-Yang; Zhang, Han-Le; Deng, Huan; Li, Xiao-Wei; Li, Da-Hai; Wang, Qiong-Hua

    2018-02-01

    In this paper, we propose a dual-view-zone tabletop 3D display system based on integral imaging by using a multiplexed holographic optical element (MHOE) that has the optical properties of two sets of microlens arrays. The MHOE is recorded by a reference beam using the single-exposure method. The reference beam records the wavefronts of a microlens array from two different directions. Thus, when the display beam is projected on the MHOE, two wavefronts with the different directions will be rebuilt and the 3D virtual images can be reconstructed in two viewing zones. The MHOE has angle and wavelength selectivity. Under the conditions of the matched wavelength and the angle of the display beam, the diffraction efficiency of the MHOE is greatest. Because the unmatched light just passes through the MHOE, the MHOE has the advantage of a see-through display. The experimental results confirm the feasibility of the dual-view-zone tabletop 3D display system.

  4. 3D cinema to 3DTV content adaptation

    NASA Astrophysics Data System (ADS)

    Yasakethu, L.; Blondé, L.; Doyen, D.; Huynh-Thu, Q.

    2012-03-01

    3D cinema and 3DTV have grown in popularity in recent years. Filmmakers have a significant opportunity in front of them given the recent success of 3D films. In this paper we investigate whether this opportunity could be extended to the home in a meaningful way. "3D" perceived from viewing stereoscopic content depends on the viewing geometry. This implies that the stereoscopic-3D content should be captured for a specific viewing geometry in order to provide a satisfactory 3D experience. However, although it would be possible, it is clearly not viable to produce and transmit multiple streams of the same content for different screen sizes. In this study, to solve the above problem, we analyze the performance of six different disparity-based transformation techniques, which could be used for cinema-to-3DTV content conversion. Subjective tests are performed to evaluate the effectiveness of the algorithms in terms of depth effect, visual comfort and overall 3D quality. The resultant 3DTV experience is also compared to that of cinema. We show that by applying the proper transformation technique on the content originally captured for cinema, it is possible to enhance the 3DTV experience. The selection of the appropriate transformation is highly dependent on the content characteristics.

  5. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server.

    PubMed

    Cannone, Jamie J; Sweeney, Blake A; Petrov, Anton I; Gutell, Robin R; Zirbel, Craig L; Leontis, Neocles

    2015-07-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
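The service URL above is real, but its endpoint path and query-parameter names are not given in the abstract; the sketch below therefore assembles a hypothetical query (the `pdb`, `chain`, `ranges` and `format` names are illustrative assumptions, not the documented API) to show the shape of programmatic access with up to five nucleotide ranges.

```python
from urllib.parse import urlencode

def build_query_url(pdb_id, chain, ranges):
    """Assemble a hypothetical R3D-2-MSA query for up to five nucleotide ranges."""
    assert len(ranges) <= 5, "the server accepts at most five ranges (per abstract)"
    params = {
        "pdb": pdb_id,                                    # hypothetical parameter name
        "chain": chain,                                   # hypothetical parameter name
        "ranges": ",".join(f"{a}-{b}" for a, b in ranges),  # hypothetical encoding
        "format": "json",   # JSON output for programmatic use, per the abstract
    }
    return "http://rna.bgsu.edu/r3d-2-msa?" + urlencode(params)

url = build_query_url("4Y4O", "1A", [(100, 110), (200, 205)])
print(url)
```

In real use, the output URL returned by the browser front end encodes the search (per the abstract), so the authoritative parameter names can be read off a saved query URL.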

  6. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.

  7. Viewpoint dependence in the recognition of non-elongated familiar objects: testing the effects of symmetry, front-back axis, and familiarity.

    PubMed

    Niimi, Ryosuke; Yokosawa, Kazuhiko

    2009-01-01

    Visual recognition of three-dimensional (3-D) objects is relatively impaired for some particular views, called accidental views. For most familiar objects, the front and top views are considered to be accidental views. Previous studies have shown that foreshortening of the axes of elongation of objects in these views impairs recognition, but the influence of other possible factors is largely unknown. Using familiar objects without a salient axis of elongation, we found that a foreshortened symmetry plane of the object and low familiarity of the viewpoint accounted for the relatively worse recognition for front views and top views, independently of the effect of a foreshortened axis of elongation. We found no evidence that foreshortened front-back axes impaired recognition in front views. These results suggest that the viewpoint dependence of familiar object recognition is not a unitary phenomenon. The possible role of symmetry (either 2-D or 3-D) in familiar object recognition is also discussed.

  8. High-resolution velocity measurements using dual-view tomographic digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Agarwal, Karuna; Katz, Joseph

    2017-11-01

    A recently developed two-view tomographic digital holographic microscopy (DHM) system is used for measuring the flow around a pair of cubes with a height of 90 wall units immersed in the inner layer of a turbulent channel flow at Reτ = 2500. Matching of the two views at 1-μm precision is achieved by implementing a self-calibration procedure that determines the three-dimensional, three-component (3D3C) distortion function, which corrects the geometric mapping. The procedure has been tested using distorted synthetic particle fields, and then implemented on experimental data. The two views are used to overcome the reduced accuracy of DHM in the axial direction of the reference beam due to elongation of the reconstructed traces. Multiplying the two precisely-matched 3D intensity fields is used for truncating the elongated traces. The velocity distributions are obtained by 3D particle tracking guided by 3D cross-correlation of the truncated intensity fields along with other size/shape/smoothness constraints. As demonstrated by the degree to which the data are divergence-free, the resulting 3D3C velocity field is substantially more accurate than results obtained from single-view DHM. Results show that the cube is surrounded by a vorticity "canopy" that extends from upstream of its front surface to the separated region in its near wake. Nearly axial necklace vortices remain confined to the near-wall region between the cubes, but expand rapidly behind them. Funded by NSF and ONR.
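The trace-truncation step has a simple geometric intuition that can be sketched numerically (a toy illustration of the idea, not the authors' reconstruction code): each view renders a particle as a trace elongated along its own optical axis, so the product of two registered intensity fields keeps only the compact overlap.

```python
import numpy as np

# Build a 3D grid and two synthetic "reconstructed traces" of one particle:
# each is a Gaussian elongated along the optical axis of its own view.
n = 64
z, y, x = np.meshgrid(*([np.arange(n) - n // 2] * 3), indexing="ij")

def gaussian(sig_z, sig_y, sig_x):
    return np.exp(-(z**2 / (2 * sig_z**2) + y**2 / (2 * sig_y**2) + x**2 / (2 * sig_x**2)))

view_a = gaussian(12.0, 2.0, 2.0)   # elongated along z (axis of view A)
view_b = gaussian(2.0, 2.0, 12.0)   # elongated along x (axis of view B)
product = view_a * view_b           # overlap is compact along every axis

def extent(field, axis):
    """Voxels above half-maximum along the max-projected profile of one axis."""
    profile = field.max(axis=tuple(i for i in range(3) if i != axis))
    return int((profile > 0.5 * profile.max()).sum())

print(extent(view_a, 0), extent(product, 0))   # axial extent shrinks after multiplication
```

This is why precise (here assumed perfect, in the experiment 1-μm) registration of the two fields matters: any mismatch shifts the overlap and biases the localized particle position.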

  9. Dual-view integral imaging three-dimensional display using polarized glasses.

    PubMed

    Wu, Fei; Lv, Guo-Jiao; Deng, Huan; Zhao, Bai-Chuan; Wang, Qiong-Hua

    2018-02-20

    We propose a dual-view integral imaging (DVII) three-dimensional (3D) display using polarized glasses. The DVII 3D display consists of a display panel, a polarized parallax barrier, a microlens array, and two pairs of polarized glasses. Two kinds of elemental images, which are captured from two different 3D scenes, are alternately arranged on the display panel. The polarized parallax barrier is attached to the display panel and composed of two kinds of units that are also alternately arranged. The polarization directions between adjacent units are perpendicular. The polarization directions of the two pairs of polarized glasses are the same as those of the two kinds of units of the polarized parallax barrier, respectively. The lights emitted from the two kinds of elemental images are modulated by the corresponding polarizer units and microlenses, respectively. Two different 3D images are reconstructed in the viewing zone and separated by using two pairs of polarized glasses. A prototype of the DVII 3D display is developed and two 3D images can be presented simultaneously, verifying the hypothesis.

  10. Regular three-dimensional presentations improve in the identification of surgical liver anatomy - a randomized study.

    PubMed

    Müller-Stich, Beat P; Löb, Nicole; Wald, Diana; Bruckner, Thomas; Meinzer, Hans-Peter; Kadmon, Martina; Büchler, Markus W; Fischer, Lars

    2013-09-25

    Three-dimensional (3D) presentations enhance the understanding of complex anatomical structures. However, it has been shown that two dimensional (2D) "key views" of anatomical structures may suffice in order to improve spatial understanding. The impact of real 3D images (3Dr) visible only with 3D glasses has not been examined yet. Contrary to 3Dr, regular 3D images apply techniques such as shadows and different grades of transparency to create the impression of 3D. This randomized study aimed to define the impact of both the addition of key views to CT images (2D+) and the use of 3Dr on the identification of liver anatomy in comparison with regular 3D presentations (3D). A computer-based teaching module (TM) was used. Medical students were randomized to three groups (2D+ or 3Dr or 3D) and asked to answer 11 anatomical questions and 4 evaluative questions. Both 3D groups had animated models of the human liver available to them which could be moved in all directions. 156 medical students (57.7% female) participated in this randomized trial. Students exposed to 3Dr and 3D performed significantly better than those exposed to 2D+ (p < 0.01, ANOVA). There were no significant differences between 3D and 3Dr and no significant gender differences (p > 0.1, t-test). Students randomized to 3D and 3Dr not only had significantly better results, but they also were significantly faster in answering the 11 anatomical questions when compared to students randomized to 2D+ (p < 0.03, ANOVA). Whether or not "key views" were used had no significant impact on the number of correct answers (p > 0.3, t-test). This randomized trial confirms that regular 3D visualization improves the identification of liver anatomy.

  11. Perspective View with Landsat Overlay, Salt Lake City Olympics Venues, Utah

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The 2002 Winter Olympics are hosted by Salt Lake City at several venues within the city, in nearby cities, and within the adjacent Wasatch Mountains. This computer generated perspective image provides a northward looking 'view from space' that includes all of these Olympic sites. In the south, next to Utah Lake, Provo hosts the ice hockey competition. In the north, northeast of the Great Salt Lake, Ogden hosts curling, and the nearby Snow Basin ski area hosts the downhill events. In between, southeast of the Great Salt Lake, Salt Lake City hosts the Olympic Village and the various skating events. Further east, across the Wasatch Mountains, the Park City area ski resorts host the bobsled, ski jumping, and snowboarding events. The Winter Olympics are always hosted in mountainous terrain. This view shows the dramatic landscape that makes the Salt Lake City region a world-class center for winter sports.

    This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and a Landsat 5 satellite image mosaic. Topographic expression is exaggerated four times.

    For a full-resolution, annotated version of this image, please select Figure 1, below: [figure removed for brevity, see original site]

    Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS).

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: View width 48.8 kilometers (30.2 miles), view distance 177 kilometers (110 miles)
    Location: 41 deg. North lat., 112.0 deg. West lon.
    Orientation: View North, 20 degrees below horizontal
    Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively
    Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet)
    Date Acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)

  12. Applications of Hard X-ray Full-Field Transmission X-ray Microscopy at SSRL

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Andrews, J. C.; Meirer, F.; Mehta, A.; Gil, S. Carrasco; Sciau, P.; Mester, Z.; Pianetta, P.

    2011-09-01

    State-of-the-art hard x-ray full-field transmission x-ray microscopy (TXM) at beamline 6-2C of the Stanford Synchrotron Radiation Lightsource has been applied to various research fields including biological, environmental, and material studies. With the capability of imaging a 32-micron field-of-view at 30-nm resolution using both absorption mode and Zernike phase contrast, the 3D morphology of yeast cells grown in gold-rich media was investigated. Quantitative evaluation of the absorption coefficient was performed for mercury nanoparticles in alfalfa roots exposed to mercury. Combining XANES and TXM, we also performed XANES imaging on an ancient pottery sample from the Roman pottery workshop at La Graufesenque (Aveyron).
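The quantitative absorption evaluation mentioned above rests on the Beer-Lambert law; a hedged one-liner sketch of the generic relation (not the beamline's reconstruction code, and with illustrative numbers): the linear attenuation coefficient μ follows from the incident and transmitted intensities and the path length t through the sample.

```python
import numpy as np

# Beer-Lambert:  I = I0 * exp(-mu * t)   =>   mu = ln(I0 / I) / t
def attenuation_coefficient(i0, i, thickness_um):
    """Linear attenuation coefficient per micron from a transmission measurement."""
    return np.log(i0 / i) / thickness_um

# Illustrative values: ~e^-1 transmission through a 2-um-thick feature.
mu = attenuation_coefficient(i0=1000.0, i=368.0, thickness_um=2.0)
print(round(mu, 3))
```

In a full-field TXM measurement this is applied pixel-by-pixel to flat-field-corrected images, with the thickness taken from the tomographic reconstruction.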

  13. Visual discomfort while watching stereoscopic three-dimensional movies at the cinema.

    PubMed

    Zeri, Fabrizio; Livi, Stefano

    2015-05-01

    This study investigates discomfort symptoms while watching stereoscopic three-dimensional (S3D) movies in the 'real' condition of a cinema. In particular, it had two main objectives: to evaluate the presence and nature of visual discomfort while watching S3D movies, and to compare visual symptoms during S3D and 2D viewing. Cinema spectators of S3D or 2D films were interviewed by questionnaire at the theatre exit of different multiplex cinemas immediately after viewing a movie. A total of 854 subjects were interviewed (mean age 23.7 ± 10.9 years; range 8-81 years; 392 females and 462 males). Five hundred and ninety-nine of them viewed different S3D movies, and 255 subjects viewed a 2D version of a film seen in S3D by 251 subjects from the S3D group, for a between-subjects design for that comparison. Exploratory factor analysis revealed two factors underlying symptoms: External Symptoms Factors (ESF) with a mean ± S.D. symptom score of 1.51 ± 0.58, comprising eye burning, eye ache, eye strain, eye irritation and tearing; and Internal Symptoms Factors (ISF) with a mean ± S.D. symptom score of 1.38 ± 0.51, comprising blur, double vision, headache, dizziness and nausea. ISF and ESF were significantly correlated (Spearman r = 0.55; p = 0.001) but with external symptoms significantly higher than internal ones (Wilcoxon Signed-ranks test; p = 0.001). The age of participants did not significantly affect symptoms. However, females had higher scores than males for both ESF and ISF, and myopes had higher ISF scores than hyperopes. Newly released movies provided lower ESF scores than older movies, while the seat position of spectators had minimal effect. Symptoms while viewing S3D movies were significantly and negatively correlated with the duration of wearing S3D glasses. Kruskal-Wallis results showed that symptoms were significantly greater for S3D compared to those of 2D movies, both for ISF (p = 0.001) and for ESF (p = 0.001). 
In short, the analysis of the symptoms experienced by S3D movie spectators based on retrospective visual comfort assessments, showed a higher level of external symptoms (eye burning, eye ache, tearing, etc.) when compared to the internal ones that are typically more perceptual (blurred vision, double vision, headache, etc.). Furthermore, spectators of S3D movies reported statistically higher symptoms when compared to 2D spectators. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  14. Structures Within the South Polar Cap of Mars from Three-dimensional Radar Imaging

    NASA Astrophysics Data System (ADS)

    Putzig, N. E.; Foss, F. J., II; Campbell, B. A.; Phillips, R. J.; Smith, I. B.

    2016-12-01

    We used Shallow Radar (SHARAD) observations on 2093 orbital passes by the Mars Reconnaissance Orbiter over Planum Australe to construct a 3-D data volume encompassing the entirety of the Martian south polar layered deposits (SPLD) and their surroundings. Efforts are underway to apply 3-D migration processing, an imaging process that will correct off-nadir returns (clutter) and properly position internal structures while improving the overall signal-to-noise ratio (SNR). Clutter mitigation and the structural corrections that migration provides have been particularly effective for a 3-D SHARAD volume over Planum Boreum, notably supporting the mapping of a shallow unconformity linked to the most recent retreat of mid-latitude glaciation (Smith et al., 2016, Science 352) and revealing what appear to be impact craters fully buried within the ice (Putzig et al., 2015, AGU Fall Meeting, Abs. P53G-05). In the preliminary Planum Australe volume, many crater-like structures are also present, adding to the evidence from surface age dating that the SPLD may be an order of magnitude or more older than the 4-Ma-old north polar layered deposits. Migration processing will sharpen this view, and the expected improvement in SNR is likely to reveal structures that are missing or very faint in single-orbit 2-D profiles, such as the deeper sequences within the layered deposits that are often obfuscated by shallow or internal scattering. The clarified views of the polar-cap interiors emerging from each SHARAD 3-D volume advance our ability to map out the interior structures and infer the history of their emplacement. A full assessment of likely buried craters may provide a means to date the deposits that is independent of climate models and goes beyond estimating a surface age. Achieving these objectives would be a major advancement toward the overarching goal of linking the geologic history of the polar layered deposits to climate processes and their history. 
The figure provides a cut-away view into the SHARAD 3-D volume over Planum Australe, looking toward 315°E and showing radar return power (blue high, red low). No migration processing has been applied. The circular no-data zone above 87°S is 310 km across, and the vertical dimension shows 17 µs of delay time (~2.5 km).

  15. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  16. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  17. Quantum search algorithms on a regular lattice

    NASA Astrophysics Data System (ADS)

    Hein, Birgit; Tanner, Gregor

    2010-07-01

    Quantum algorithms for searching for one or more marked items on a d-dimensional lattice provide an extension of Grover’s search algorithm including a spatial component. We demonstrate that these lattice search algorithms can be viewed in terms of the level dynamics near an avoided crossing of a one-parameter family of quantum random walks. We give approximations for both the level splitting at the avoided crossing and the effectively two-dimensional subspace of the full Hilbert space spanning the level crossing. This makes it possible to give the leading order behavior for the search time and the localization probability in the limit of large lattice size including the leading order coefficients. For d=2 and d=3, these coefficients are calculated explicitly. Closed form expressions are given for higher dimensions.
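
    Since the lattice algorithms extend Grover's search, the underlying amplitude dynamics can be illustrated with a plain state-vector simulation of Grover's algorithm itself (not the lattice walk); the ~(π/4)√N search time quoted as the leading-order behavior appears directly:

```python
import numpy as np

def grover_success_probability(n_items, marked, n_iters):
    """Simulate Grover's search by direct state-vector iteration:
    the oracle (phase flip on the marked item) followed by the
    diffusion operator 2|s><s| - I (inversion about the mean)."""
    state = np.full(n_items, 1.0 / np.sqrt(n_items))  # uniform superposition |s>
    for _ in range(n_iters):
        state[marked] *= -1.0          # oracle: flip sign of the marked amplitude
        mean = state.mean()
        state = 2.0 * mean - state     # diffusion: inversion about the mean
    return state[marked] ** 2          # probability of measuring the marked item

N = 256
t_opt = int(np.pi / 4 * np.sqrt(N))    # optimal iteration count ~ (pi/4)*sqrt(N)
print(t_opt, grover_success_probability(N, marked=7, n_iters=t_opt))
```

    The spatial-search versions analyzed in the abstract replace the global diffusion step with a local quantum-walk step on the lattice, which changes the leading-order coefficients.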

  18. Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen.

    PubMed

    Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie

    2010-09-10

    We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.

  19. Easy and Fast Reconstruction of a 3D Avatar with an RGB-D Sensor.

    PubMed

    Mao, Aihua; Zhang, Hong; Liu, Yuxin; Zheng, Yinglong; Li, Guiqing; Han, Guoqiang

    2017-05-12

    This paper proposes a new easy and fast 3D avatar reconstruction method using an RGB-D sensor. Users can easily implement human body scanning and modeling with just a personal computer and a single RGB-D sensor, such as a Microsoft Kinect, within a small workspace in their home or office. To make the reconstruction of 3D avatars easy and fast, a new data capture strategy is proposed for efficient human body scanning, which captures only 18 frames from six views at a close scanning distance to fully cover the body; meanwhile, efficient alignment algorithms are presented to locally align the data frames within a single view and then globally align them across multiple views based on pairwise correspondence. In this method, we do not adopt shape priors or subdivision tools to synthesize the model, which helps to reduce modeling complexity. Experimental results indicate that this method can obtain accurate reconstructed 3D avatar models, and the running performance is faster than that of similar work. This research offers a useful tool for manufacturers to quickly and economically create 3D avatars for product design, entertainment, and online shopping.

  20. Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.

    PubMed

    Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry

    2012-12-01

    Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
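
    Image-based stereo reconstruction of the kind described rests on per-pixel disparity estimation between the two camera views. A textbook block-matching sketch (the actual system uses a GPU-accelerated algorithm; the function and parameters here are illustrative):

```python
import numpy as np

def block_match_disparity(left, right, x, y, block=3, max_disp=8):
    """Estimate the disparity at (y, x) by sliding a block from the left
    image along the same row of the right image and taking the shift with
    the smallest sum of squared differences -- a textbook stand-in for the
    GPU-based matching used in real-time stereo reconstruction systems."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    errors = []
    for d in range(max_disp + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        errors.append(np.sum((ref - cand) ** 2))
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
right = rng.uniform(size=(20, 20))
left = np.roll(right, 5, axis=1)     # left view = right view shifted 5 px
print(block_match_disparity(left, right, x=12, y=10))   # 5
```

    Disparity then converts to depth via the camera baseline and focal length, which is how the dense 3D model of the operating field is obtained.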

  1. ODTbrain: a Python library for full-view, dense diffraction tomography.

    PubMed

    Müller, Paul; Schürmann, Mirjam; Guck, Jochen

    2015-11-04

    Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. A dense, full-view tomographic data set is a set of images of a cell acquired for multiple rotational positions, densely distributed from 0 to 360 degrees. The reconstruction is commonly realized by projection tomography, which is based on the inversion of the Radon transform. The reconstruction quality of projection tomography is greatly improved when first order scattering, which becomes relevant when the imaging wavelength is comparable to the characteristic object size, is taken into account. This advanced reconstruction technique is called diffraction tomography. While many implementations of projection tomography are available today, there is no publicly available implementation of diffraction tomography so far. We present a Python library that implements the backpropagation algorithm for diffraction tomography in 3D. By establishing benchmarks based on finite-difference time-domain (FDTD) simulations, we showcase the superiority of the backpropagation algorithm over the backprojection algorithm. Furthermore, we discuss how measurement parameters influence the reconstructed refractive index distribution and we also give insights into the applicability of diffraction tomography to biological cells. The present software library contains a robust implementation of the backpropagation algorithm. The algorithm is ideally suited for the application to biological cells. Furthermore, the implementation is a drop-in replacement for the classical backprojection algorithm and is made available to the large user community of the Python programming language.
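
    The classical backprojection step that the library's backpropagation algorithm generalizes can be illustrated with a toy example, independent of ODTbrain's actual API: backprojecting the (analytic) sinogram of a single point source produces a reconstruction that peaks at the source location.

```python
import numpy as np

def backproject_point(nx, ny, x0, y0, n_angles=180, sigma=1.0):
    """Unfiltered backprojection of the analytic, Gaussian-blurred
    sinogram of a single point source -- a toy stand-in for the
    classical backprojection reconstruction discussed above."""
    xs = np.arange(nx) - nx / 2.0
    ys = np.arange(ny) - ny / 2.0
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    recon = np.zeros((nx, ny))
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        s0 = (x0 - nx / 2.0) * c + (y0 - ny / 2.0) * s   # detector position of source
        S = X * c + Y * s                                # detector coordinate per pixel
        # Smear each projection back across the grid and accumulate.
        recon += np.exp(-((S - s0) ** 2) / (2.0 * sigma ** 2))
    return recon

r = backproject_point(64, 64, x0=20, y0=40)
print(np.unravel_index(np.argmax(r), r.shape))   # peaks at the source pixel (20, 40)
```

    Diffraction tomography replaces these straight-line backprojections with backpropagated wave fields, which is what improves the reconstruction when the wavelength is comparable to the object size.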

  2. Interactive 3D Visualization: An Important Element in Dealing with Increasing Data Volumes and Decreasing Resources

    NASA Astrophysics Data System (ADS)

    Gee, L.; Reed, B.; Mayer, L.

    2002-12-01

    Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, the systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to abandon the requirement that every data point be viewed individually. This was achieved by replacing the traditional line-by-line editing approach with an automated cleaning module and an area-based editor. The approach includes a unique data structure that enables direct access to the full-resolution data from the area-based view, including a direct interface to target files and imagery snippets from mosaic and full-resolution imagery. The increased data volumes to be processed also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area-based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components, each at its inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3-D rendering are used with digital bathymetric data to form natural-looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis.
Color can be used to represent depth or other parameters (like backscatter, quality factors or sediment properties), which can be draped over the DTM, or high resolution imagery can be texture mapped on bathymetric data. The presentation will demonstrate the new approach of the integrated area based processing and 3D visualization with a number of data sets from recent surveys.
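
    The artificial sun-illumination mentioned above amounts to Lambertian hillshading of the bathymetric grid. A minimal sketch (the light azimuth/altitude defaults and axis conventions are illustrative assumptions):

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian sun-illumination shading of a gridded depth/elevation
    surface: brightness is the dot product of the surface normal with
    the light direction (a minimal sketch of artificial sun-illumination)."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    # Light direction as a unit vector.
    light = np.array([np.cos(alt) * np.sin(az),
                      np.cos(alt) * np.cos(az),
                      np.sin(alt)])
    dzd0, dzd1 = np.gradient(dem)          # surface slopes along the two grid axes
    norm = np.sqrt(dzd0**2 + dzd1**2 + 1.0)
    # Unnormalized surface normal is (-dz/d0, -dz/d1, 1); shade = normal . light.
    shade = (-dzd0 * light[0] - dzd1 * light[1] + light[2]) / norm
    return np.clip(shade, 0.0, 1.0)

x = np.arange(32.0)
dem = np.repeat(x[:, None], 32, axis=1)    # uniform slope
print(hillshade(dem).mean() > hillshade(-dem).mean())   # True: slope facing the light is brighter
```

    Draping color (depth, backscatter, or sediment properties) over this shaded relief gives the quantitative-yet-natural landscapes the abstract describes.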

  3. Genre Matters: A Comparative Study on the Entertainment Effects of 3D in Cinematic Contexts

    NASA Astrophysics Data System (ADS)

    Ji, Qihao; Lee, Young Sun

    2014-09-01

    Built upon prior comparative studies of 3D and 2D films, the current project investigates the effects of 2D and 3D on viewers' perception of enjoyment, narrative engagement, presence, involvement, and flow across three movie genres (action/fantasy vs. drama vs. documentary). Through a 2 by 3 mixed factorial design, participants (n = 102) were separated into two viewing conditions (2D and 3D) and watched three 15-min film segments. Results suggested that both visual production methods are equally efficient at eliciting enjoyment, narrative engagement, involvement, flow, and presence; no effect of visual production method was found. In addition, by examining genre effects in both the 3D and 2D conditions, we found that 3D works better for action movies than for documentaries in eliciting viewers' perception of enjoyment and presence; similarly, it substantially improves viewers' narrative engagement for documentaries relative to dramas. Implications and limitations are discussed in detail.

  4. Pixel-level tunable liquid crystal lenses for auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Li, Kun; Robertson, Brian; Pivnenko, Mike; Chu, Daping; Zhou, Jiong; Yao, Jun

    2014-02-01

    Mobile video and gaming are now widely used, and delivery of a glass-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle and reduced brightness, mainly because of the need of multiple-pixels for each object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eye sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display will have the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly on existing mobile displays, and can deliver 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser patterned to achieve the single pixel lens resolution, and a high birefringent LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results.

  5. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  6. Annular dynamics of memo3D annuloplasty ring evaluated by 3D transesophageal echocardiography.

    PubMed

    Nishi, Hiroyuki; Toda, Koichi; Miyagawa, Shigeru; Yoshikawa, Yasushi; Fukushima, Satsuki; Yoshioka, Daisuke; Sawa, Yoshiki

    2018-04-01

    We assessed mitral annular motion after mitral valve repair with the Sorin Memo 3D® (Sorin Group Italia S.r.L., Saluggia, Italy), a unique complete semirigid annuloplasty ring intended to restore the systolic profile of the mitral annulus while adapting to the physiologic dynamism of the annulus, using transesophageal real-time three-dimensional echocardiography. Seventeen patients (12 male; mean age 60.4 ± 14.9 years) who underwent mitral annuloplasty using the Memo 3D ring were investigated. Mitral annular motion was assessed using QLAB® version 8, allowing for a full evaluation of mitral annulus dynamics. The mitral annular dimensions were measured throughout the cardiac cycle using 4D MV assessment 2®, while saddle shape was assessed through sequential measurements by RealView®. The saddle-shape configuration of the mitral annulus and posterior and anterior leaflet motion could be observed during systole and diastole. The mitral annular area changed during the cardiac cycle by 5.7 ± 1.8%. The circumference length and diameter also changed throughout the cardiac cycle. The annular height was significantly higher in mid-systole than in mid-diastole (p < 0.05). The Memo 3D ring maintained a physiological saddle-shape configuration throughout the cardiac cycle. Real-time three-dimensional echocardiography analysis confirmed the motion and flexibility of the Memo 3D ring upon implantation.

  7. An efficient hole-filling method based on depth map in 3D view generation

    NASA Astrophysics Data System (ADS)

    Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    A new virtual view is synthesized through depth image based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. Firstly, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, using the "z-buffer" algorithm to solve the overlap problem. Then, building on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the depth map information to handle the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of searching. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
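
    The depth term added to the patch priority can be sketched as follows; the specific normalization and product form here are hypothetical choices for illustration, not the paper's exact formula:

```python
import numpy as np

def patch_priority(confidence, data_term, depth, p, eps=1e-6):
    """Priority of filling a patch centred at pixel p, extending the
    Criminisi confidence*data product with a depth term that favours
    background (larger depth values), so inpainting proceeds from the
    background side first. The weighting is a hypothetical choice."""
    c = confidence[p]
    d = data_term[p]
    # Depth term: normalise so that far (background) pixels score higher.
    z = depth[p] / (depth.max() + eps)
    return c * d * z

conf = np.ones((8, 8))      # confidence term (uniform for the demo)
data = np.ones((8, 8))      # data term (uniform for the demo)
depth = np.zeros((8, 8))
depth[:, :4] = 10.0         # background half (far from the camera)
depth[:, 4:] = 2.0          # foreground half (near)
print(patch_priority(conf, data, depth, (3, 1)) >
      patch_priority(conf, data, depth, (3, 6)))   # True: background filled first
```

    With equal confidence and data terms, the background patch wins, which matches the paper's stated strategy of starting inpainting from the background side.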

  8. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  9. Overview of FTV (free-viewpoint television)

    NASA Astrophysics Data System (ADS)

    Tanimoto, Masayuki

    2010-07-01

    We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method, which represents one ray in real space with one point in ray-space. We have developed ray capture, processing and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. FDU can compensate for errors in the synthesized views caused by depth error.

  10. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera, which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, making it available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained from the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques are required for the final results, e.g. noise reduction, surface simplification and reconstruction. The reconstructed 3D models can be provided for public access via websites, DVDs, and printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over time, natural disasters, etc.

  11. 3. GENERAL VIEW OF SETTING OF TANK 0745C (ON LEFT). ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. GENERAL VIEW OF SETTING OF TANK 0745C (ON LEFT). TANK 0745B IS ON RIGHT. VIEW TO SOUTHWEST. - Rocky Mountain Arsenal, Gasoline Storage Tank, December Seventh Avenue & D Street, Commerce City, Adams County, CO

  12. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    PubMed Central

    2010-01-01

    Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks. PMID:21070623
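
    The validation against 1000 random network permutations follows the standard permutation-test pattern, sketched here on synthetic data (the variable names and the correlation statistic are illustrative, not the study's exact network measure):

```python
import numpy as np

def permutation_p_value(x, y, n_perm=1000, seed=0):
    """Permutation test for an association between gene specificity (x)
    and regulation (y): compare the observed correlation against the
    distribution obtained by shuffling y -- a sketch of validating a
    pattern against 1000 random permutations."""
    rng = np.random.default_rng(seed)
    observed = abs(np.corrcoef(x, y)[0, 1])
    permuted = np.array([abs(np.corrcoef(x, rng.permutation(y))[0, 1])
                         for _ in range(n_perm)])
    # Fraction of shuffles at least as extreme as the observed statistic
    # (with the +1 correction so the p-value is never exactly zero).
    return (1 + np.sum(permuted >= observed)) / (1 + n_perm)

rng = np.random.default_rng(42)
specificity = rng.uniform(0, 1, 200)
regulation = 2.0 * specificity + rng.normal(0, 0.5, 200)  # strong built-in relation
print(permutation_p_value(specificity, regulation) < 0.01)  # True
```

    A small p-value indicates the specificity-regulation relationship is unlikely to arise in random networks of the same size and distribution, which is the logic of the study's validation.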

  13. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  14. Design of the computerized 3D endoscopic imaging system for delicate endoscopic surgery.

    PubMed

    Song, Chul-Gyu; Kang, Jin U

    2011-02-01

    This paper describes a 3D endoscopic video system designed to improve visualization and enhance the ability of the surgeon to perform delicate endoscopic surgery. In a comparison of the polarized and conventional electric shutter-type stereo imaging systems, the former was found to be superior in terms of both accuracy and speed for suturing and for the loop pass test. Among the groups performing loop passing and suturing, there was no significant difference in task performance between the 2D and 3D modes; however, suturing was performed 15% faster (p < 0.05) in 3D mode by both groups. The results of our experiments show that the proposed 3D endoscopic system has a sufficiently wide viewing angle and zone for multi-viewing.

  15. Volumetric full-range magnetomotive optical coherence tomography

    PubMed Central

    Ahmad, Adeel; Kim, Jongsik; Shemonski, Nathan D.; Marjanovic, Marina; Boppart, Stephen A.

    2014-01-01

    Magnetomotive optical coherence tomography (MM-OCT) can be utilized to spatially localize the presence of magnetic particles within tissues or organs. These magnetic particle-containing regions are detected by using the capability of OCT to measure small-scale displacements induced by the activation of an external electromagnet coil typically driven by a harmonic excitation signal. The constraints imposed by the scanning schemes employed and tissue viscoelastic properties limit the speed at which conventional MM-OCT data can be acquired. Realizing that electromagnet coils can be designed to exert MM force on relatively large tissue volumes (comparable or larger than typical OCT imaging fields of view), we show that an order-of-magnitude improvement in three-dimensional (3-D) MM-OCT imaging speed can be achieved by rapid acquisition of a volumetric scan during the activation of the coil. Furthermore, we show volumetric (3-D) MM-OCT imaging over a large imaging depth range by combining this volumetric scan scheme with full-range OCT. Results with tissue equivalent phantoms and a biological tissue are shown to demonstrate this technique. PMID:25472770

  16. MATHEMATICS OF SENSING, EXPLOITATION, AND EXECUTION (MSEE) Sensing, Exploitation, and Execution (SEE) on a Foundation for Representation, Inference, and Learning

    DTIC Science & Technology

    2016-07-01

    Keywords: reconstruction, video synchronization, multi-view tracking, action recognition, reasoning with uncertainty. Recoverable section titles from the report's table of contents include "Human action recognition across multi-views" and "Multi-view multi-object tracking with 3D cues".

  17. A Dynamic Multi-Projection-Contour Approximating Framework for the 3D Reconstruction of Buildings by Super-Generalized Optical Stereo-Pairs.

    PubMed

    Yan, Yiming; Su, Nan; Zhao, Chunhui; Wang, Liguo

    2017-09-19

    In this paper, a novel framework for the 3D reconstruction of buildings is proposed, focusing on remote sensing super-generalized stereo-pairs (SGSPs). 3D reconstruction cannot be performed well using nonstandard stereo pairs, since reliable stereo matching cannot be achieved when the image pairs are collected from widely differing views; dense 3D points then cannot be obtained for building regions, and further 3D shape reconstruction fails. We defined SGSPs as two or more optical images collected from less constrained views but covering the same buildings. It is even more difficult to reconstruct the 3D shape of a building from SGSPs using traditional frameworks. As a result, a dynamic multi-projection-contour approximating (DMPCA) framework was introduced for SGSP-based 3D reconstruction. The key idea is an optimization that finds a group of parameters for a simulated 3D model and a binary feature-image that minimize the total differences between the projection-contours of the building in the SGSPs and those in the simulated 3D model. The simulated 3D model, defined by the group of parameters, then approximates the actual 3D shape of the building. Certain parameterized 3D basic-unit-models of typical buildings were designed, and a simulated projection system was established to obtain simulated projection-contours in different views. Moreover, the artificial bee colony algorithm was employed to solve the optimization. With SGSPs collected by satellite and our unmanned aerial vehicle, the DMPCA framework was verified by a group of experiments, which demonstrated the reliability and advantages of this work.
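
    The core of the DMPCA idea, choosing model parameters that minimize the projection-contour difference, can be reduced to a toy example; here a brute-force grid search stands in for the artificial bee colony algorithm, and the rectangular "building" is a hypothetical basic-unit model:

```python
import numpy as np

def render_mask(w, h, size=40):
    """Toy 'simulated projection': binary mask of a w x h rectangular
    building footprint centred in the image (a hypothetical basic-unit model)."""
    mask = np.zeros((size, size), dtype=bool)
    r0, c0 = (size - h) // 2, (size - w) // 2
    mask[r0:r0 + h, c0:c0 + w] = True
    return mask

def fit_by_contour(target, candidates):
    """Pick the model parameters whose projection mask differs least
    from the observed mask (XOR pixel count) -- a brute-force stand-in
    for the paper's artificial-bee-colony optimisation."""
    return min(candidates,
               key=lambda wh: np.count_nonzero(render_mask(*wh) ^ target))

target = render_mask(12, 20)                       # "observed" building contour
grid = [(w, h) for w in range(5, 25) for h in range(5, 25)]
print(fit_by_contour(target, grid))                # recovers (12, 20)
```

    The real framework scores the same kind of contour mismatch simultaneously over several views with known camera geometry, so the optimum is constrained in 3D rather than in a single projection.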

  18. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric

    2018-05-01

    Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and based on the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.

  19. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real-valued precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. Since no approximation is made on the position of the samples, geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem: our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generating the views needed for viewing on SynthaGram™ auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
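
The two-stage idea (forward-map each pixel to a real-valued position, then resample the irregular result back onto a regular grid) can be sketched in 1D. This is a hypothetical toy: the paper reconstructs with bi-cubic splines and handles occlusions, while this sketch uses piecewise-linear interpolation and ignores holes:

```python
def forward_warp_1d(row, disparity):
    """Forward-map one scanline: each source pixel x lands at the real-valued
    position x + d(x) in the new view, so the samples become irregular."""
    return [(x + d, v) for x, (v, d) in enumerate(zip(row, disparity))]

def resample_linear(samples, width):
    """Rebuild a regular pixel grid from irregularly placed samples.
    The paper uses a bi-cubic spline function space; piecewise-linear
    interpolation keeps this sketch short. No occlusion/hole handling."""
    samples = sorted(samples)
    out = []
    for x in range(width):
        # bracketing samples around the integer grid position x
        left = max((s for s in samples if s[0] <= x), default=samples[0])
        right = min((s for s in samples if s[0] >= x), default=samples[-1])
        if right[0] == left[0]:
            out.append(left[1])
        else:
            t = (x - left[0]) / (right[0] - left[0])
            out.append((1 - t) * left[1] + t * right[1])
    return out

row, disp = [10.0, 20.0, 30.0, 40.0], [0.5, 0.5, 0.5, 0.5]
print(resample_linear(forward_warp_1d(row, disp), 4))   # [10.0, 15.0, 25.0, 35.0]
```

Because the warped positions are never rounded before interpolation, no geometric error is introduced by sample-position approximation, which is the point the abstract makes.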

  20. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
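
Volumetric completeness and correctness reduce to set overlap once both the model and the ground truth are voxelized. The sketch below is a toy voxel-set version under that assumption; the released pipeline operates on full 3D models and lidar, not Python sets:

```python
def completeness_correctness(truth_voxels, model_voxels):
    """Volumetric completeness: fraction of ground-truth voxels the model covers.
    Volumetric correctness: fraction of model voxels that agree with ground truth."""
    truth, model = set(truth_voxels), set(model_voxels)
    inter = truth & model
    return len(inter) / len(truth), len(inter) / len(model)

# Hypothetical example: a 5x5x4 ground-truth block vs. a model shifted up by one voxel.
truth = {(x, y, z) for x in range(5) for y in range(5) for z in range(4)}
model = {(x, y, z) for x in range(5) for y in range(5) for z in range(1, 5)}
comp, corr = completeness_correctness(truth, model)
print(comp, corr)   # 0.75 0.75
```

A perfect model scores (1.0, 1.0); a model that covers everything but also hallucinates extra volume keeps completeness high while correctness drops.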

  1. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    NASA Astrophysics Data System (ADS)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

    The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT is providing pre-operative 3D information, there is a need for 3D imaging of low contrast soft tissue during interventions in a number of areas including neurology, cardiac electro-physiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm: the goal is to generate a 3D reconstruction representative of the imaged object with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
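
The MLTR family of algorithms iteratively maximizes the Poisson transmission likelihood. A minimal, untruncated toy version (not the authors' implementation, which must additionally cope with truncation, scatter, and gating) can be written with the standard additive update mu_j += sum_i a_ij (yhat_i - y_i) / sum_i a_ij (sum_k a_ik) yhat_i, where yhat_i = b_i exp(-[A mu]_i) is the expected count along ray i:

```python
import math

def mltr(A, y, b, n_iter=2000):
    """Toy MLTR on a tiny dense system: A is the ray/pixel intersection matrix,
    y the measured counts, b the blank-scan counts. No truncation handling."""
    n_pix = len(A[0])
    mu = [0.0] * n_pix
    row_sums = [sum(row) for row in A]            # sum_k a_ik per ray
    for _ in range(n_iter):
        yhat = [bi * math.exp(-sum(a * m for a, m in zip(row, mu)))
                for row, bi in zip(A, b)]         # expected counts
        for j in range(n_pix):
            num = sum(A[i][j] * (yhat[i] - y[i]) for i in range(len(A)))
            den = sum(A[i][j] * row_sums[i] * yhat[i] for i in range(len(A)))
            mu[j] += num / den                    # surrogate-based update
    return mu

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]          # 3 rays through 2 "pixels"
mu_true = [0.3, 0.5]
b = [1000.0, 1000.0, 1000.0]                      # blank-scan counts
y = [bi * math.exp(-sum(a * m for a, m in zip(row, mu_true)))
     for row, bi in zip(A, b)]                    # noiseless measurements
print([round(m, 3) for m in mltr(A, y, b)])       # approaches [0.3, 0.5]
```

With noiseless data and a full-rank system the iteration recovers the true attenuation; truncation removes rays, which is precisely what makes the interior problem non-unique.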

  2. New Dimensions of GIS Data: Exploring Virtual Reality (VR) Technology for Earth Science

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2016-12-01

    NASA's Science Mission Directorate (SMD) Earth Science Division (ESD) Earth Science Technology Office (ESTO) and Navteca are exploring virtual reality (VR) technology as an approach and technique related to the next generation of Earth science technology information systems. Having demonstrated the value of VR for viewing pre-visualized science data encapsulated in a movie representation of a time series, the team investigated the additional capability of permitting the observer to interact with the data, make selections, and view volumetric data in an innovative way. The primary objective of this project has been to investigate the use of commercially available VR hardware, the Oculus Rift and the Samsung Gear VR, for scientific analysis through an interface to ArcGIS to enable the end user to order and view data from the NASA Discover-AQ mission. A virtual console is presented through the VR interface that allows the user to select various layers of data from the server in 2D, 3D, and full 4π-steradian views. By demonstrating the utility of VR in interacting with Discover-AQ flight mission measurements, and building on previous work done at the Atmospheric Science Data Center (ASDC) at NASA Langley supporting analysis of sources of CO2 during the Discover-AQ mission, the investigation team has shown the potential for VR as a science tool beyond simple visualization.

  3. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It provides highly realistic 3D experiences and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. Densely spaced viewing points provide smooth motion parallax and greater image depth without blurring. The basic principle of the stereoscopic display is described first. Then, the design architecture, including hardware and software, is described. The system consists of a frontal projection lenticular screen, an optimally designed projector array and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one serves as the main controller, while the other five client PCs process 30 channels of signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, distortion correction, etc. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.

  4. Crime event 3D reconstruction based on incomplete or fragmentary evidence material--case report.

    PubMed

    Maksymowicz, Krzysztof; Tunikowski, Wojciech; Kościuk, Jacek

    2014-09-01

    Using their own experience in 3D analysis, the authors demonstrate the possibilities of 3D crime scene and event reconstruction in cases where the originally collected material evidence is largely insufficient. The necessity to repeat a forensic evaluation often stems from the emergence of new facts in the course of case proceedings. Even in cases when a crime scene and its surroundings have undergone partial or complete transformation with regard to elements significant to the case, or when the scene was not satisfactorily secured, it is still possible to reconstruct it in a 3D environment based on the originally collected, even incomplete, material evidence. In particular cases when no image of the crime scene is available, its partial or even full reconstruction is still potentially feasible, and the credibility of evidence for such a reconstruction can still satisfy the evidence requirements in court. Reconstruction of the missing elements of the crime scene is also possible with the use of information obtained from current publicly available databases; in the study, we demonstrate that these can include Google Maps®, Google Street View® and available construction and architecture archives. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Large holographic 3D display for real-time computer-generated holography

    NASA Astrophysics Data System (ADS)

    Häussler, R.; Leister, N.; Stolle, H.

    2017-06-01

    SeeReal's concept of real-time holography is based on Sub-Hologram encoding and tracked Viewing Windows. This solution leads to a significant reduction in pixel count and computation effort compared to conventional holography concepts. Since the first presentation of the concept, improved full-color holographic displays have been built with dedicated components. The hologram is encoded on a spatial light modulator that is a sandwich of a phase-modulating and an amplitude-modulating liquid-crystal display, and that modulates both the amplitude and phase of light. Further components are based on holographic optical elements for light collimation and focusing, which are exposed in photopolymer films. Camera photographs show that only the depth region on which the camera lens is focused appears sharp, while the other depth regions are out of focus. These photographs demonstrate that the 3D scene is reconstructed in depth and that accommodation of the eye lenses is supported. Hence, the display offers a solution to the accommodation-convergence conflict that is inherent to stereoscopic 3D displays. The main components, progress and results of the holographic display with a 300 mm x 200 mm active area are described. Furthermore, photographs of holographically reconstructed 3D scenes are shown.

  6. Construction of Extended 3D Field of Views of the Internal Bladder Wall Surface: A Proof of Concept

    NASA Astrophysics Data System (ADS)

    Ben-Hamadou, Achraf; Daul, Christian; Soussen, Charles

    2016-09-01

    Extended 3D fields of view (FOVs) of the internal bladder wall facilitate lesion diagnosis, patient follow-up and treatment traceability. In this paper, we propose a 3D image mosaicing algorithm guided by 2D cystoscopic video-image registration for obtaining textured FOV mosaics. In this feasibility study, the registration makes use of data from a 3D cystoscope prototype providing, in addition to each small FOV image, some 3D points located on the surface. This proof of concept shows that textured surfaces can be constructed with minimally modified cystoscopes. The potential of the method is demonstrated on numerical and real phantoms reproducing various surface shapes. Pig and human bladder textures are superimposed on phantoms with known shape and dimensions. These data allow for quantitative assessment of the 3D mosaicing algorithm based on the registration of images simulating bladder textures.

  7. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    PubMed

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space. Copyright © 2016 John Wiley & Sons, Ltd.
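
Why stride-segmented interleaves tolerate gating can be shown with a 2D toy (the paper's trajectory is 3D and its interleaf design more elaborate): if a uniformly ordered set of projection angles is split into interleaves by stride, then discarding whole interleaves still leaves a near-uniform angular coverage. The angle counts and accepted-interleaf choice below are hypothetical:

```python
import math

def interleaves(n_views, n_il):
    """Uniformly spaced 2D radial angles split into interleaves by stride --
    a toy stand-in for the paper's segmented 3D projection-view trajectory."""
    delta = math.pi / n_views                 # uniform azimuthal increment
    angles = [i * delta for i in range(n_views)]
    return [angles[s::n_il] for s in range(n_il)]

def max_gap(angles):
    """Largest angular gap: a crude uniformity measure for the accepted views."""
    a = sorted(angles)
    gaps = [nxt - cur for nxt, cur in zip(a[1:], a)]
    gaps.append(a[0] + math.pi - a[-1])       # wrap-around gap (period pi)
    return max(gaps)

ils = interleaves(n_views=360, n_il=8)
accepted = ils[0] + ils[3] + ils[5]           # pretend gating kept 3 of 8 interleaves
print(round(max_gap(accepted), 4))            # 0.0262 rad: 3x the single-shot spacing
```

Keeping any subset of whole interleaves bounds the largest gap by the stride times the single-shot spacing, whereas accepting an arbitrary contiguous run of views (as naive gating would) can leave an enormous angular hole.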

  8. Quantification of functional mitral regurgitation by real-time 3D echocardiography: comparison with 3D velocity-encoded cardiac magnetic resonance.

    PubMed

    Marsan, Nina Ajmone; Westenberg, Jos J M; Ypenburg, Claudia; Delgado, Victoria; van Bommel, Rutger J; Roes, Stijntje D; Nucifora, Gaetano; van der Geest, Rob J; de Roos, Albert; Reiber, Johan C; Schalij, Martin J; Bax, Jeroen J

    2009-11-01

    The aim of this study was to evaluate feasibility and accuracy of real-time 3-dimensional (3D) echocardiography for quantification of mitral regurgitation (MR), in a head-to-head comparison with velocity-encoded cardiac magnetic resonance (VE-CMR). Accurate grading of MR severity is crucial for appropriate patient management but remains challenging. VE-CMR with 3D three-directional acquisition has been recently proposed as the reference method. A total of 64 patients with functional MR were included. A VE-CMR acquisition was applied to quantify mitral regurgitant volume (Rvol). Color Doppler 3D echocardiography was applied for direct measurement, in "en face" view, of mitral effective regurgitant orifice area (EROA); Rvol was subsequently calculated as EROA multiplied by the velocity-time integral of the regurgitant jet on the continuous-wave Doppler. To assess the relative potential error of the conventional approach, color Doppler 2-dimensional (2D) echocardiography was performed: vena contracta width was measured in the 4-chamber view and EROA calculated as circular (EROA-4CH); EROA was also calculated as elliptical (EROA-elliptical), measuring vena contracta also in the 2-chamber view. From these 2D measurements of EROA, the Rvols were also calculated. The EROA measured by 3D echocardiography was significantly higher than EROA-4CH (p < 0.001) and EROA-elliptical (p < 0.001), with a significant bias between these measurements (0.10 cm(2) and 0.06 cm(2), respectively). Rvol measured by 3D echocardiography showed excellent correlation with Rvol measured by CMR (r = 0.94), without a significant difference between these techniques (mean difference = -0.08 ml/beat). Conversely, 2D echocardiographic approach from the 4-chamber view significantly underestimated Rvol (p = 0.006) as compared with CMR (mean difference = 2.9 ml/beat). The 2D elliptical approach demonstrated a better agreement with CMR (mean difference = -1.6 ml/beat, p = 0.04). 
Quantification of EROA and Rvol of functional MR with 3D echocardiography is feasible and accurate as compared with VE-CMR; the currently recommended 2D echocardiographic approach significantly underestimates both EROA and Rvol.
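
The 3D-echo quantification described above reduces to a one-line computation once EROA and the velocity-time integral (VTI) have been measured; the numeric values below are purely illustrative:

```python
def regurgitant_volume(eroa_cm2, vti_cm):
    """Rvol (ml/beat) = EROA (cm^2) x velocity-time integral of the
    regurgitant jet on continuous-wave Doppler (cm)."""
    return eroa_cm2 * vti_cm

# Hypothetical measurements: a 0.25 cm^2 orifice and a 120 cm VTI.
print(regurgitant_volume(0.25, 120))   # 30.0 ml per beat
```

The study's point is that the EROA factor differs by method: the direct "en face" 3D measurement avoids the circular or elliptical orifice-shape assumption that biases the 2D vena-contracta approaches.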

  9. Detection and 3D reconstruction of traffic signs from multiple view color images

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno

    2013-03-01

    3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. To reflect reality, the reconstruction process should meet both accuracy and precision requirements. Reaching such a valid reconstruction from calibrated multi-view images requires accurate and precise extraction of the signs in every individual view. This paper first presents an automatic pipeline for identifying and extracting the silhouettes of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimal 3D silhouette for the detected signs. The first step, called detection, applies a color-based segmentation to generate ROIs (Regions of Interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouettes in the image plane are represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account epipolar geometry and the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shape of road signs as constraints.
    The algorithm was assessed on real and synthetic images and reached an average accuracy of 3.5 cm for position and 4.5° for orientation.
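
The detection stage's accept/reject logic can be illustrated with a toy residual test: an ROI survives only if a candidate shape fits its edge points tightly. A circle here stands in for the paper's ellipse/quadrilateral/triangle fits, and the tolerance is an arbitrary assumption:

```python
import math

def circle_fit_residual(points, cx, cy, r):
    """Mean absolute distance of edge points from a candidate circle."""
    return sum(abs(math.hypot(x - cx, y - cy) - r) for x, y in points) / len(points)

def accept_roi(points, cx, cy, r, tol=0.05):
    """Reject the ROI when no shape fits the edge points sufficiently precisely."""
    return circle_fit_residual(points, cx, cy, r) < tol

on_circle = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.5, 4.0, 5.5)]
stretched = [(2 * x, y) for x, y in on_circle]    # an ellipse, not a circle
print(accept_roi(on_circle, 0.0, 0.0, 1.0))       # True
print(accept_roi(stretched, 0.0, 0.0, 1.0))       # False
```

In the actual pipeline the best of the three fitted shapes is then reused twice: to rectify the ROI for texture matching and, later, as the 2D silhouette constraining the multi-view 3D reconstruction.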

  10. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with previous research. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous viewing process than in a discrete one. The BF increased with time during the continuous viewing process. Besides, the visual fatigue also induced significant changes in VRT, CFF and PMA.
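
PERCLOS is commonly computed as the fraction of time the eye is closed beyond some closure threshold. A minimal sketch with hypothetical per-frame closure fractions follows; the 80% threshold is a common convention, and the paper's exact definition may differ:

```python
def perclos(closure_series, threshold=0.8):
    """PERCLOS: fraction of frames in which the eye is at least `threshold`
    closed (closure given as a fraction of full closure per frame)."""
    closed = sum(1 for c in closure_series if c >= threshold)
    return closed / len(closure_series)

# Hypothetical per-frame eye-closure fractions from an eye tracker.
series = [0.1, 0.9, 0.95, 0.2, 0.85, 0.1, 0.05, 1.0]
print(perclos(series))   # 0.5
```

In practice the series would be windowed (e.g., per minute of viewing) so PERCLOS can be tracked over the continuous session, which is how a rising fatigue trend becomes visible.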

  11. A View to the Future: A Novel Approach for 3D-3D Superimposition and Quantification of Differences for Identification from Next-Generation Video Surveillance Systems.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    Techniques of 2D-3D superimposition are widely used in cases of personal identification from video surveillance systems. However, the progressive improvement of 3D image acquisition technology will enable operators to perform 3D-3D facial superimposition as well. This study aims at analyzing the possible applications of 3D-3D superimposition to personal identification, albeit from a theoretical point of view. Twenty subjects underwent a facial 3D scan by stereophotogrammetry twice at different time periods. Scans were superimposed two by two according to nine landmarks, and the root-mean-square (RMS) value of point-to-point distances was calculated. When the two superimposed models belonged to the same individual, the RMS value was 2.10 mm, while it was 4.47 mm in mismatches, a statistically significant difference (p < 0.0001). This experiment shows the potential of 3D-3D superimposition: further studies are needed to ascertain the technical limits which may occur in practice and to improve methods useful in forensic practice. © 2016 American Academy of Forensic Sciences.
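
The reported figure is the root-mean-square of point-to-point distances between corresponding landmarks after superimposition. A minimal sketch follows; the alignment step itself (e.g., a Procrustes-style registration on the nine landmarks) is omitted, and the coordinates are hypothetical:

```python
import math

def rms_distance(points_a, points_b):
    """RMS of Euclidean distances between corresponding 3D landmarks
    (same ordering and count in both lists)."""
    sq = [sum((a - b) ** 2 for a, b in zip(pa, pb))
          for pa, pb in zip(points_a, points_b)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical landmark coordinates (mm) from two already-superimposed scans.
scan1 = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
scan2 = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0), (0.0, 10.0, 2.0)]
print(round(rms_distance(scan1, scan2), 3))   # 1.732
```

Identification then reduces to comparing this residual against the match/mismatch distributions the study measured (2.10 mm vs. 4.47 mm on average).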

  12. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 microlenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  13. Accommodation measurements of horizontally scanning holographic display.

    PubMed

    Takaki, Yasuhiro; Yokouchi, Masahito

    2012-02-13

    Eye accommodation is considered to function properly for three-dimensional (3D) images generated by holography. We developed a horizontally scanning holographic display technique that enlarges both the screen size and viewing zone angle. A 3D image generated by this technique can be easily seen by both eyes. In this study, we measured the accommodation responses to a 3D image generated by the horizontally scanning holographic display technique that has a horizontal viewing zone angle of 14.6° and screen size of 4.3 in. We found that the accommodation responses to a 3D image displayed within 400 mm from the display screen were similar to those of a real object.

  14. Full resolution hologram-like autostereoscopic display

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Jesse B.; Hutchins, Jamie

    1995-01-01

    Under this program, Dimension Technologies Inc. (DTI) developed a prototype display that uses a proprietary illumination technique to create autostereoscopic hologram-like full resolution images on an LCD operating at 180 fps. The resulting 3D image possesses a resolution equal to that of the LCD along with properties normally associated with holograms, including change of perspective with observer position and lack of viewing position restrictions. Furthermore, this autostereoscopic technique eliminates the need to wear special glasses to achieve the parallax effect. Under the program a prototype display was developed which demonstrates the hologram-like full resolution concept. To implement such a system, DTI explored various concept designs and enabling technologies required to support those designs. Specifically required were: a parallax illumination system with sufficient brightness and control; an LCD with rapid address and pixel response; and an interface to an image generation system for creation of computer graphics. Of the possible parallax illumination system designs, we chose a design which utilizes an array of fluorescent lamps. This system creates six sets of illumination areas to be imaged behind an LCD. This controlled illumination array is interfaced to a lenticular lens assembly which images the light segments into thin vertical light lines to achieve the parallax effect. This light line formation is the foundation of DTI's autostereoscopic technique. The David Sarnoff Research Center (Sarnoff) was subcontracted to develop an LCD that would operate with a fast scan rate and pixel response. Sarnoff chose a surface mode cell technique and produced the world's first large area pi-cell active matrix TFT LCD. The device provided adequate performance to evaluate five different perspective stereo viewing zones. 
A Silicon Graphics Iris Indigo system was used for image generation, which allowed for static and dynamic multiple-perspective image rendering. During the development of the prototype display, we identified many critical issues associated with implementing such a technology. Testing and evaluation enabled us to prove that this illumination technique provides autostereoscopic 3D multi-perspective images with a wide range of views, smooth transitions, and flicker-free operation, given suitable enabling technologies.

  15. A new AS-display as part of the MIRO lightweight robot for surgical applications

    NASA Astrophysics Data System (ADS)

    Grossmann, Christoph M.

    2010-02-01

    The DLR MIRO is the second generation of versatile robot arms for surgical applications, developed at the Institute for Robotics and Mechatronics at Deutsches Zentrum für Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. With its low weight of 10 kg and dimensions similar to those of the human arm, the MIRO robot can assist the surgeon directly at the operating table, where space is scarce. The planned scope of applications of this robot arm ranges from guiding a laser unit for the precise separation of bone tissue in orthopedics and positioning holes for bone screws, to robot-assisted endoscope guidance, and on to the multi-robot concept for endoscopic minimally invasive surgery. A stereo-endoscope delivers two full HD video streams that can even be augmented with information, e.g. vectors indicating the forces that act on the surgical tool at any given moment. SeeFront's new autostereoscopic 3D display SF 2223, as part of the MIRO assembly, will let the surgeon view the stereo video stream in excellent quality, in real time and without the need for any viewing aids. The presentation is meant to provide an insight into the principles underlying the SeeFront 3D technology and how they allow the creation of autostereoscopic display solutions ranging from the smallest "stamp-sized" displays to 30" desktop versions, all of which provide comfortable freedom of movement for the viewer along with excellent 3D image quality.

  16. Correlation between a 2D Channelized Hotelling Observer and Human Observers in a Low-contrast Detection Task with Multi-slice Reading in CT

    PubMed Central

    Yu, Lifeng; Chen, Baiyu; Kofler, James M.; Favazza, Christopher P.; Leng, Shuai; Kupinski, Matthew A.; McCollough, Cynthia H.

    2017-01-01

    Purpose Model observers have been successfully developed and used to assess the quality of static 2D CT images. However, radiologists typically read images by paging through multiple 2D slices (i.e. multi-slice reading). The purpose of this study was to correlate human and model observer performance in a low-contrast detection task performed using both 2D and multi-slice reading, and to determine if the 2D model observer still correlates well with human observer performance in multi-slice reading. Methods A phantom containing 18 low-contrast spheres (6 sizes × 3 contrast levels) was scanned on a 192-slice CT scanner at 5 dose levels (CTDIvol = 27, 13.5, 6.8, 3.4, and 1.7 mGy), each repeated 100 times. Images were reconstructed using both filtered-backprojection (FBP) and an iterative reconstruction (IR) method (ADMIRE, Siemens). A 3D volume of interest (VOI) around each sphere was extracted and placed side-by-side with a signal-absent VOI to create a 2-alternative forced choice (2AFC) trial. Sixteen 2AFC studies were generated, each with 100 trials, to evaluate the impact of radiation dose, lesion size and contrast, and reconstruction methods on object detection. In total, 1600 trials were presented to both model and human observers. Three medical physicists acted as human observers and were allowed to page through the 3D volumes to make a decision for each 2AFC trial. The human observer performance was compared with the performance of a multi-slice channelized Hotelling observer (CHO_MS), which integrates multi-slice image data, and with the performance of a previously validated CHO, which operates on static 2D images (CHO_2D). For comparison, the same 16 2AFC studies were also performed in a 2D viewing mode by the human observers and compared with the multi-slice viewing performance and the two CHO models. 
Results Human observer performance was well correlated with the CHO_2D performance in the 2D viewing mode (Pearson product-moment correlation coefficient R=0.972, 95% confidence interval (CI): 0.919 to 0.990) and with the CHO_MS performance in the multi-slice viewing mode (R=0.952, 95% CI: 0.865 to 0.984). The CHO_2D performance, calculated from the 2D viewing mode, also had a strong correlation with human observer performance in the multi-slice viewing mode (R=0.957, 95% CI: 0.879 to 0.985). Human observer performance varied between the multi-slice and 2D modes. One reader performed better in the multi-slice mode (p=0.013), whereas the other two readers showed no significant difference between the two viewing modes (p=0.057 and p=0.38). Conclusions A 2D CHO model is highly correlated with human observer performance in detecting spherical low contrast objects in multi-slice viewing of CT images. This finding provides some evidence for the use of a simpler, 2D CHO to assess image quality in clinically relevant CT tasks where multi-slice viewing is used. PMID:28555878
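
The R values reported above are plain Pearson product-moment coefficients between per-condition detection performance of the model and of the humans. A minimal sketch with hypothetical 2AFC proportion-correct values (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-condition 2AFC proportion correct (4 of the 16 studies).
human = [0.62, 0.71, 0.80, 0.93]
model = [0.60, 0.72, 0.78, 0.95]
print(round(pearson_r(human, model), 3))   # close to 1 for well-matched observers
```

In the study each series has 16 entries (one per 2AFC study), and a high R is the evidence that the simpler 2D CHO can stand in for multi-slice human reading.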

  17. Correlation between a 2D channelized Hotelling observer and human observers in a low-contrast detection task with multislice reading in CT.

    PubMed

    Yu, Lifeng; Chen, Baiyu; Kofler, James M; Favazza, Christopher P; Leng, Shuai; Kupinski, Matthew A; McCollough, Cynthia H

    2017-08-01

    Model observers have been successfully developed and used to assess the quality of static 2D CT images. However, radiologists typically read images by paging through multiple 2D slices (i.e., multislice reading). The purpose of this study was to correlate human and model observer performance in a low-contrast detection task performed using both 2D and multislice reading, and to determine if the 2D model observer still correlates well with human observer performance in multislice reading. A phantom containing 18 low-contrast spheres (6 sizes × 3 contrast levels) was scanned on a 192-slice CT scanner at five dose levels (CTDIvol = 27, 13.5, 6.8, 3.4, and 1.7 mGy), each repeated 100 times. Images were reconstructed using both filtered-backprojection (FBP) and an iterative reconstruction (IR) method (ADMIRE, Siemens). A 3D volume of interest (VOI) around each sphere was extracted and placed side-by-side with a signal-absent VOI to create a 2-alternative forced choice (2AFC) trial. Sixteen 2AFC studies were generated, each with 100 trials, to evaluate the impact of radiation dose, lesion size and contrast, and reconstruction methods on object detection. In total, 1600 trials were presented to both model and human observers. Three medical physicists acted as human observers and were allowed to page through the 3D volumes to make a decision for each 2AFC trial. The human observer performance was compared with the performance of a multislice channelized Hotelling observer (CHO_MS), which integrates multislice image data, and with the performance of a previously validated CHO, which operates on static 2D images (CHO_2D). For comparison, the same 16 2AFC studies were also performed in a 2D viewing mode by the human observers and compared with the multislice viewing performance and the two CHO models.
Human observer performance was well correlated with the CHO_2D performance in the 2D viewing mode [Pearson product-moment correlation coefficient R = 0.972, 95% confidence interval (CI): 0.919 to 0.990] and with the CHO_MS performance in the multislice viewing mode (R = 0.952, 95% CI: 0.865 to 0.984). The CHO_2D performance, calculated from the 2D viewing mode, also had a strong correlation with human observer performance in the multislice viewing mode (R = 0.957, 95% CI: 0.879 to 0.985). Human observer performance varied between the multislice and 2D modes. One reader performed better in the multislice mode (P = 0.013), whereas the other two readers showed no significant difference between the two viewing modes (P = 0.057 and P = 0.38). A 2D CHO model is highly correlated with human observer performance in detecting spherical low contrast objects in multislice viewing of CT images. This finding provides some evidence for the use of a simpler, 2D CHO to assess image quality in clinically relevant CT tasks where multislice viewing is used. © 2017 American Association of Physicists in Medicine.
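The record above scores a channelized Hotelling observer (CHO) on 2-alternative forced choice trials. The following is only a minimal numpy sketch of that pipeline, with made-up Gaussian channels, a synthetic low-contrast blob, and white noise; the paper's actual channel set, images, and training scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nchan, ntrain, ntrial = 32, 5, 200, 100

# Illustrative radial Gaussian channels of increasing width (not the paper's set)
y, x = np.mgrid[:npix, :npix] - npix / 2
r = np.hypot(x, y)
channels = np.stack([np.exp(-r**2 / (2 * (2 + 2 * k)**2)) for k in range(nchan)])
channels = channels.reshape(nchan, -1)

signal = 0.4 * np.exp(-r**2 / (2 * 3.0**2)).ravel()   # low-contrast blob in unit noise

def sample(present, n):
    noise = rng.normal(0, 1, (n, npix * npix))
    return noise + (signal if present else 0)

# Channelize training data and build the Hotelling template w = S^-1 (m1 - m0)
v1 = sample(True, ntrain) @ channels.T    # signal-present channel outputs
v0 = sample(False, ntrain) @ channels.T   # signal-absent channel outputs
S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))

# Score 2AFC trials: the observer picks the alternative with the larger statistic
t1 = sample(True, ntrial) @ channels.T @ w
t0 = sample(False, ntrial) @ channels.T @ w
pc = np.mean(t1 > t0)                     # percent correct (equals AUC in 2AFC)
```

A multi-slice CHO (CHO_MS) would additionally channelize across slices before forming the template; the structure of the computation is otherwise the same.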

  18. Analysis of 3D Scan Measurement Distribution with Application to a Multi-Beam Lidar on a Rotating Platform.

    PubMed

    Morales, Jesús; Plaza-Leiva, Victoria; Mandow, Anthony; Gomez-Ruiz, Jose Antonio; Serón, Javier; García-Cerezo, Alfonso

    2018-01-30

    Multi-beam lidar (MBL) rangefinders are becoming increasingly compact, light, and accessible 3D sensors, but they offer limited vertical resolution and field of view. The addition of a degree of freedom to build a rotating multi-beam lidar (RMBL) has the potential to become a common solution for affordable, rapid, full-3D, high-resolution scans. However, the overlapping of multiple beams caused by rotation yields scanning patterns that are more complex than in a rotating single-beam lidar (RSBL). In this paper, we propose a simulation-based methodology to analyze 3D scanning patterns, which is applied to investigate the scan measurement distribution produced by the RMBL configuration. With this purpose, novel contributions include: (i) the adaptation of a recent spherical reformulation of Ripley's K function to assess 3D sensor data distribution on a hollow sphere simulation; (ii) a comparison, both qualitative and quantitative, between scan patterns produced by an ideal RMBL based on a Velodyne VLP-16 (Puck) and those of other 3D scan alternatives (i.e., rotating 2D lidar and MBL); and (iii) a new RMBL implementation consisting of a portable tilting platform for VLP-16 scanners, which is presented as a case study for measurement distribution analysis as well as for the discussion of actual scans from representative environments. Results indicate that despite the particular sampling patterns given by an RMBL, its homogeneity even improves on that of an equivalent RSBL.
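The record above assesses measurement distribution with a spherical reformulation of Ripley's K function. Below is only a bare, unedged sketch of such an estimator on the unit sphere; the paper's exact reformulation and any weighting details are not reproduced. Under complete spatial randomness, K(θ) ≈ 2π(1 − cos θ).

```python
import numpy as np

def spherical_ripley_k(points, thetas):
    """Naive Ripley's K on the unit sphere: points is (n, 3) of unit vectors;
    returns K(theta) for each angular distance threshold in thetas."""
    n = len(points)
    cosd = np.clip(points @ points.T, -1.0, 1.0)
    d = np.arccos(cosd)                  # pairwise great-circle (angular) distances
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    lam = n / (4 * np.pi)                # intensity on the unit sphere (area 4*pi)
    return np.array([np.sum(d <= t) / (lam * n) for t in thetas])

# Uniform points on the sphere via normalized Gaussian vectors
rng = np.random.default_rng(1)
pts = rng.normal(size=(400, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

thetas = np.array([0.5, 1.0, np.pi])
K = spherical_ripley_k(pts, thetas)
```

Comparing K computed from simulated scan points against the CSR curve is what reveals clustering or regularity in a sensor's sampling pattern.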

  19. Analysis of 3D Scan Measurement Distribution with Application to a Multi-Beam Lidar on a Rotating Platform

    PubMed Central

    Plaza-Leiva, Victoria; Serón, Javier

    2018-01-01

    Multi-beam lidar (MBL) rangefinders are becoming increasingly compact, light, and accessible 3D sensors, but they offer limited vertical resolution and field of view. The addition of a degree of freedom to build a rotating multi-beam lidar (RMBL) has the potential to become a common solution for affordable, rapid, full-3D, high-resolution scans. However, the overlapping of multiple beams caused by rotation yields scanning patterns that are more complex than in a rotating single-beam lidar (RSBL). In this paper, we propose a simulation-based methodology to analyze 3D scanning patterns, which is applied to investigate the scan measurement distribution produced by the RMBL configuration. With this purpose, novel contributions include: (i) the adaptation of a recent spherical reformulation of Ripley’s K function to assess 3D sensor data distribution on a hollow sphere simulation; (ii) a comparison, both qualitative and quantitative, between scan patterns produced by an ideal RMBL based on a Velodyne VLP-16 (Puck) and those of other 3D scan alternatives (i.e., rotating 2D lidar and MBL); and (iii) a new RMBL implementation consisting of a portable tilting platform for VLP-16 scanners, which is presented as a case study for measurement distribution analysis as well as for the discussion of actual scans from representative environments. Results indicate that despite the particular sampling patterns given by an RMBL, its homogeneity even improves on that of an equivalent RSBL. PMID:29385705

  20. SCEC-VDO: A New 3-Dimensional Visualization and Movie Making Software for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Sanskriti, F.; Yu, J.; Callaghan, S.; Maechling, P. J.; Jordan, T. H.

    2016-12-01

    Researchers and undergraduate interns at the Southern California Earthquake Center (SCEC) have created a new 3-dimensional (3D) visualization software tool called SCEC Virtual Display of Objects (SCEC-VDO). SCEC-VDO is written in Java and uses the Visualization Toolkit (VTK) backend to render 3D content. SCEC-VDO offers advantages over existing 3D visualization software for viewing georeferenced data beneath the Earth's surface. Many popular visualization packages, such as Google Earth, restrict the user to views of the Earth from above, obstructing views of geological features such as faults and earthquake hypocenters at depth. SCEC-VDO allows the user to view data both above and below the Earth's surface at any angle. It includes tools for viewing global earthquakes from the U.S. Geological Survey, faults from the SCEC Community Fault Model, and results from the latest SCEC models of earthquake hazards in California, including UCERF3 and RSQSim. Its object-oriented plugin architecture allows for the easy integration of new regional and global datasets, regardless of the science domain. SCEC-VDO also features rich animation capabilities, allowing users to build a timeline with keyframes of camera position and displayed data. The software is built with the concept of statefulness, allowing for reproducibility and collaboration using an XML file. A prior version of SCEC-VDO, which began development in 2005 under the SCEC Undergraduate Studies in Earthquake Information Technology internship, used the now unsupported Java3D library. Replacing Java3D with the widely supported and actively developed VTK libraries not only ensures that SCEC-VDO can continue to function for years to come, but allows for the export of 3D scenes to web viewers and popular software such as Paraview. SCEC-VDO runs on all recent 64-bit Windows, Mac OS X, and Linux systems with Java 8 or later.
More information, including downloads, tutorials, and example movies created fully within SCEC-VDO is available here: http://scecvdo.usc.edu

  1. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras were utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in a 360-degree viewing zone.
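The merging step above transforms each camera's point cloud into a common world frame before fusing them. The paper uses a special registration method not shown here; this is only a minimal sketch of the simpler case where each camera's pose (extrinsic) is already calibrated, with invented poses and a single stand-in point per cloud.

```python
import numpy as np

def to_world(points, extrinsic):
    """Apply a 4x4 camera-to-world transform to an (n, 3) point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ extrinsic.T)[:, :3]

# Two hypothetical depth cameras facing the center from opposite sides
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0, 0, 1]])

def pose(angle, radius):
    """Camera placed on a circle of the given radius, looking at the origin."""
    T = np.eye(4)
    T[:3, :3] = Rz(angle)
    T[:3, 3] = Rz(angle) @ [0, -radius, 0]
    return T

cloud_a = np.array([[0.0, 1.0, 0.0]])   # stand-in depth point (camera frame)
cloud_b = np.array([[0.0, 1.0, 0.0]])
merged = np.vstack([to_world(cloud_a, pose(0.0, 1.0)),
                    to_world(cloud_b, pose(np.pi, 1.0))])
```

Both cameras observe the same center point, so after transformation the two measurements coincide at the origin, which is exactly the consistency a registration step must achieve.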

  2. Near-isotropic 3D optical nanoscopy with photon-limited chromophores

    PubMed Central

    Tang, Jianyong; Akerboom, Jasper; Vaziri, Alipasha; Looger, Loren L.; Shank, Charles V.

    2010-01-01

    Imaging approaches based on single molecule localization break the diffraction barrier of conventional fluorescence microscopy, allowing for bioimaging with nanometer resolution. It remains a challenge, however, to precisely localize photon-limited single molecules in 3D. We have developed a new localization-based imaging technique achieving almost isotropic subdiffraction resolution in 3D. A tilted mirror is used to generate a side view in addition to the front view of activated single emitters, allowing their 3D localization to be precisely determined for superresolution imaging. Because both front and side views are in focus, this method is able to efficiently collect emitted photons. The technique is simple to implement on a commercial fluorescence microscope, and especially suitable for biological samples with photon-limited chromophores such as endogenously expressed photoactivatable fluorescent proteins. Moreover, this method is relatively resistant to optical aberration, as it requires only centroid determination for localization analysis. Here we demonstrate the application of this method to 3D imaging of bacterial protein distribution and neuron dendritic morphology with subdiffraction resolution. PMID:20472826

  3. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  4. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency, multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, compensation of the phase error caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the tradeoff between processing speed and high accuracy is discussed in detail. Fourth, generalized camera and system calibration are developed for phase-to-real-world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights, employing a nonlinear least-squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photograph to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full field of view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI is developed to control and synchronize the whole system.
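The phase-shifting step in the record above recovers a wrapped phase map from N equally shifted fringe frames via the standard N-step formula φ = arctan2(−Σ Iₙ sin δₙ, Σ Iₙ cos δₙ). A minimal numpy sketch with a synthetic 4-step example (parameters illustrative, not the paper's):

```python
import numpy as np

def wrapped_phase(frames):
    """Recover wrapped phase from N frames I_n = A + B*cos(phi + 2*pi*n/N)."""
    N = len(frames)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return np.arctan2(-num, den)         # standard N-step phase-shifting formula

# Synthetic test: a known phase ramp imaged with 4-step shifting
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.angle(np.exp(1j * x))      # wrapped ground-truth phase
frames = [128 + 100 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi = wrapped_phase(frames)
```

The recovered map is wrapped to (−π, π]; the spatial or temporal unwrapping discussed in the abstract is the subsequent step that removes the 2π jumps.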

  5. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

    We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is capable of locating the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of better than 1 mm at 1 meter. The data can be recorded as a set of two images and reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows the face to be viewed from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
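The sub-pixel spot localization and parallax-to-range conversion described above can be sketched minimally as an intensity-weighted centroid plus the triangulation relation z = f·B/d. The focal length, baseline, and disparity values below are invented for illustration, not taken from the paper.

```python
import numpy as np

def centroid(img):
    """Sub-pixel spot position as an intensity-weighted centroid."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m = img.sum()
    return img.ravel() @ x.ravel() / m, img.ravel() @ y.ravel() / m

def range_from_disparity(disp_px, focal_px, baseline_m):
    """Triangulated range z = f*B/d for a projector-camera (or stereo) pair."""
    return focal_px * baseline_m / disp_px

# Synthetic Gaussian spot centered at a non-integer position
yy, xx = np.mgrid[:21, :21]
spot = np.exp(-((xx - 10.3)**2 + (yy - 9.7)**2) / (2 * 1.5**2))
cx, cy = centroid(spot)

# Hypothetical numbers: 1000 px focal length, 5 cm baseline, 50 px disparity
z = range_from_disparity(disp_px=50.0, focal_px=1000.0, baseline_m=0.05)
```

The centroid recovers the fractional spot position to well under a tenth of a pixel on clean data, which is what makes millimeter-level ranging at a meter plausible.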

  6. Satisfactory rate of postprocessing visualization of standard fetal cardiac views from 4-dimensional cardiac volumes acquired during routine ultrasound practice by experienced sonographers in peripheral centers.

    PubMed

    Rizzo, Giuseppe; Capponi, Alessandra; Pietrolucci, Maria Elena; Capece, Giuseppe; Cimmino, Ernesto; Colosi, Enrico; Ferrentino, Salvatore; Sica, Carmine; Di Meglio, Aniello; Arduini, Domenico

    2011-01-01

    The aim of this study was to evaluate the feasibility of visualizing standard cardiac views from 4-dimensional (4D) cardiac volumes obtained at ultrasound facilities with no specific experience in fetal echocardiography. Five sonographers prospectively recorded 4D cardiac volumes starting from the 4-chamber view on 500 consecutive pregnancies at 19 to 24 weeks' gestation undergoing routine ultrasound examinations (100 pregnancies for each sonographer). Volumes were sent to the referral center, and 2 independent reviewers with experience in 4D fetal echocardiography assessed their quality in the display of the abdominal view, 4-chamber view, left and right ventricular outflow tracts, and 3-vessel and trachea view. Cardiac volumes were acquired in 474 of 500 pregnancies (94.8%). The 2 reviewers respectively acknowledged the presence of satisfactory images in 92.4% and 93.6% of abdominal views, 91.5% and 93.0% of 4-chamber views, in 85.0% and 86.2% of left ventricular outflow tracts, 83.9% and 84.5% of right ventricular outflow tracts, and 85.2% and 84.5% of 3-vessel and trachea views. The presence of a maternal body mass index of greater than 30 altered the probability of achieving satisfactory cardiac views, whereas previous maternal lower abdominal surgery did not affect the quality of reconstructed cardiac views. In conclusion, cardiac volumes acquired by 4D sonography in peripheral centers showed high enough quality to allow satisfactory diagnostic cardiac views.

  7. Electrostatic analyzer with a 3-D instantaneous field of view for fast measurements of plasma distribution functions in space

    NASA Astrophysics Data System (ADS)

    Morel, X.; Berthomier, M.; Berthelier, J.-J.

    2017-03-01

    We describe the concept and properties of a new electrostatic optic which aims to provide a 2π sr instantaneous field of view to characterize space plasmas. It consists of a set of concentric toroidal electrodes that form a number of independent energy-selective channels. Charged particles are deflected toward a common imaging planar detector. The full 3-D distribution function of charged particles is obtained through a single energy sweep. The angle and energy resolution of the optics depend on the number of toroidal electrodes, on their radii of curvature, on their spacing, and on the angular aperture of the channels. We present the performances, as derived from numerical simulations, of an initial implementation of this concept that would fit the need of many space plasma physics applications. The proposed instrument has 192 entrance windows corresponding to eight polar channels each with 24 azimuthal sectors. The initial version of this 3-D plasma analyzer may cover energies from a few eV up to 30 keV, typically with a channel-dependent energy resolution varying from 10% to 7%. The angular acceptance varies with the direction of the incident particle from 3° to 12°. With a total geometric factor of two sensor heads reaching 0.23 cm2 sr eV/eV, this "donut" shape analyzer has enough sensitivity to allow very fast measurements of plasma distribution functions in most terrestrial and planetary environments on three-axis stabilized as well as on spinning satellites.

  8. America National Parks Viewed in 3D by NASA MISR Anaglyph 2

    NASA Image and Video Library

    2016-08-25

    Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite is releasing four new anaglyphs that showcase 33 of our nation's national parks, monuments, historical sites and recreation areas in glorious 3D. Shown in the annotated image are Grand Teton National Park, John D. Rockefeller Memorial Parkway, Yellowstone National Park, and parts of Craters of the Moon National Monument. MISR views Earth with nine cameras pointed at different angles, giving it the unique capability to produce anaglyphs, stereoscopic images that allow the viewer to experience the landscape in three dimensions. The anaglyphs were made by combining data from MISR's vertical-viewing and 46-degree forward-pointing cameras. You will need red-blue glasses in order to experience the 3D effect; ensure you place the red lens over your left eye. The images have been rotated so that north is to the left in order to enable 3D viewing, because the Terra satellite flies from north to south. All of the images are 235 miles (378 kilometers) from west to east. These data were acquired June 25, 2016, Orbit 87876. http://photojournal.jpl.nasa.gov/catalog/PIA20890
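An anaglyph like the ones described above is built by putting one viewing angle into the red channel and the other into the green/blue channels. This is a generic red-cyan sketch with toy arrays, not MISR's actual processing chain (which also involves registration and rotation of the two camera views).

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Red channel from the left-eye view, green/blue from the right-eye view
    (matching the caption's 'red lens over your left eye' convention)."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Toy stand-ins for the two co-registered camera images
left = np.zeros((4, 6, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((4, 6, 3), dtype=np.uint8); right[..., 1:] = 100
ana = red_cyan_anaglyph(left, right)
```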

  9. Echocardiographic anatomy of the mitral valve: a critical appraisal of 2-dimensional imaging protocols with a 3-dimensional perspective.

    PubMed

    Mahmood, Feroze; Hess, Philip E; Matyal, Robina; Mackensen, G Burkhard; Wang, Angela; Qazi, Aisha; Panzica, Peter J; Lerner, Adam B; Maslow, Andrew

    2012-10-01

    To highlight the limitations of traditional 2-dimensional (2D) echocardiographic mitral valve (MV) examination methodologies, which do not account for patient-specific transesophageal echocardiographic (TEE) probe adjustments made during an actual clinical perioperative TEE examination. Institutional quality-improvement project. Tertiary care hospital. Attending anesthesiologists certified by the National Board of Echocardiography. Using the technique of multiplanar reformatting with 3-dimensional (3D) data, ambiguous 2D images of the MV were generated, which resembled standard midesophageal 2D views. Based on the 3D image, the MV scallops visualized in each 2D image were recognized exactly by the position of the scan plane. Twenty-three such 2D MV images were created in a presentation from the 3D datasets. Anesthesia staff members (n = 13) were invited to view the presentation based on the 2D images only and asked to identify the MV scallops. Their responses were scored as correct or incorrect based on the 3D image. The overall accuracy was 30.4% in identifying the MV scallops. The transcommissural view was identified correctly >90% of the time. The accuracy of the identification of A1, A3, P1, and P3 scallops was <50%. The accuracy of the identification of A2P2 scallops was ≥50%. In the absence of information on TEE probe adjustments performed to acquire a specific MV image, it is possible to misidentify the scallops. Copyright © 2012 Elsevier Inc. All rights reserved.
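The multiplanar reformatting used above to generate the 2D test images samples an arbitrary plane out of a 3D volume. A minimal sketch of that sampling (clinical MPR software adds interactive plane control and higher-order interpolation; the geometry here is a made-up example):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, u, v, shape):
    """Sample a plane from a 3D volume: start at origin and span a grid along
    unit vectors u and v (all in voxel units), with trilinear interpolation."""
    i = np.arange(shape[0])[:, None]
    j = np.arange(shape[1])[None, :]
    coords = (np.asarray(origin, float)[:, None, None]
              + np.asarray(u, float)[:, None, None] * i
              + np.asarray(v, float)[:, None, None] * j)
    return map_coordinates(volume, coords, order=1)

vol = np.zeros((32, 32, 32))
vol[16] = 1.0        # a bright plane at x = 16 as a stand-in for anatomy
sl = oblique_slice(vol, origin=(16, 0, 0), u=(0, 1, 0), v=(0, 0, 1), shape=(32, 32))
```

Tilting u and v yields the oblique cut planes whose ambiguity, absent probe-adjustment context, is exactly what the study exploits.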

  10. Using Heat Pulses for Quantifying 3d Seepage Velocity in Groundwater-Surface Water Interactions, Considering Source Size, Regime, and Dispersion

    NASA Astrophysics Data System (ADS)

    Zlotnik, V. A.; Tartakovsky, D. M.

    2017-12-01

    The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing and is directed at broadening their potential for studies of groundwater-surface water interactions, and the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and the analytical models of heat transport by Suzuki and Stallman are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically or by making additional simplifying assumptions about velocity orientation. Heat pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage velocity identification at an appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not yet been developed. We propose an approach that can substantially improve the capabilities of existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines 3D seepage velocity, and facilitates interpretation of the relations between heat transport parameters, fluid flow, and media properties. Results are obtained using the tensor properties of transport parameters, Green's functions, and rotational coordinate transformations using the Euler angles.
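The Green's-function building block behind such heat-pulse interpretation is the textbook instantaneous point-source solution for advection-dispersion, T ∝ t^(-3/2) exp(−(x−vt)²/4Dₓt − y²/4Dᵧt − z²/4D𝓏t): the thermal peak advects along the flow direction at the front velocity. This is only that standard solution with invented parameter values, not the paper's richer closed-form results (which also account for source size and regime).

```python
import numpy as np

def heat_pulse(x, y, z, t, v, Dx, Dy, Dz, strength=1.0):
    """Instantaneous point heat pulse advected at velocity v along x,
    with anisotropic thermal dispersion coefficients (Dx, Dy, Dz)."""
    norm = strength / (8 * (np.pi * t) ** 1.5 * np.sqrt(Dx * Dy * Dz))
    arg = ((x - v * t) ** 2 / (4 * Dx * t)
           + y ** 2 / (4 * Dy * t)
           + z ** 2 / (4 * Dz * t))
    return norm * np.exp(-arg)

# Evaluate along the flow axis one hour after the pulse (illustrative values)
x = np.linspace(-0.1, 0.5, 601)
T = heat_pulse(x, 0.0, 0.0, t=3600.0, v=2e-5, Dx=1e-7, Dy=5e-8, Dz=5e-8)
peak_x = x[np.argmax(T)]     # peak advects to x = v*t
```

Fitting peak arrival and spread at several sensor positions is what lets an inverse model recover both the velocity vector and the dispersion tensor.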

  11. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopic technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation, and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that, we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  12. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  13. Creating 3D visualizations of MRI data: A brief guide.

    PubMed

    Madan, Christopher R

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D 'glass brain' rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of a study's findings.
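A very rough stand-in for the "glass brain" idea is a maximum-intensity projection of suprathreshold voxels along each axis, which shows cortical and subcortical clusters in one see-through view. This numpy sketch uses a toy volume and is not the rendering pipeline the guide describes.

```python
import numpy as np

def glass_brain(volume, threshold):
    """Maximum-intensity projections of suprathreshold voxels along each axis,
    a simple stand-in for a see-through whole-volume rendering."""
    v = np.where(volume >= threshold, volume, 0.0)
    return v.max(axis=0), v.max(axis=1), v.max(axis=2)

vol = np.zeros((16, 16, 16))
vol[4:6, 7:9, 10:12] = 3.5          # one small "activation cluster" (toy data)
ax, co, sa = glass_brain(vol, threshold=2.0)
```

Because the projection keeps the brightest statistic along each ray, deep clusters remain visible rather than being hidden behind the cortical surface.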

  14. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of a study’s findings. PMID:26594340

  15. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle at 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  16. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  17. Bright field segmentation tomography (BFST) for use as surface identification in stereomicroscopy

    NASA Astrophysics Data System (ADS)

    Thiesse, Jacqueline R.; Namati, Eman; de Ryk, Jessica; Hoffman, Eric A.; McLennan, Geoffrey

    2004-07-01

    Stereomicroscopy is an important image acquisition method because it provides a 3D image of an object where other microscopic techniques can only provide a 2D image. One challenge in this type of imaging is determining the top surface of a sample whose surface and planar characteristics are otherwise indistinguishable. We have developed a system that creates oblique illumination so that, in conjunction with image processing, the top surface can be identified. The BFST consists of a Leica MZ12 stereomicroscope with a unique attached lighting source. The lighting source consists of eight light-emitting diodes (LEDs) separated by 45-degree angles. Each LED in this system illuminates with a 20-degree viewing angle once per cycle, casting a shadow over the rest of the sample. Subsequently, eight segmented images are taken per cycle. After the images are captured, they are stacked through image addition to recover the full field of view, and the surface is then easily identified. Image processing techniques, such as skeletonization, can be used for further enhancement and measurement. With the use of BFST, advances can be made in detecting surface features from metals to tissue samples, such as in the analytical assessment of pulmonary emphysema using the technique of mean linear intercept.
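    The stacking step lends itself to a short sketch. Below, a toy "sample" is split into eight angular sectors, each lit in exactly one frame, and summing the frames recovers the full field of view; the sector geometry is purely illustrative.

```python
import numpy as np

def stack_segments(images):
    """BFST-style stacking: add the eight directionally illuminated,
    partially shadowed frames to recover the full field of view."""
    stack = np.zeros_like(images[0], dtype=float)
    for im in images:
        stack += im
    return stack

# toy demonstration: partition a surface into 8 angular sectors, one lit per frame
surface = np.random.default_rng(0).random((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
angle = np.arctan2(yy - 31.5, xx - 31.5)               # direction of each pixel
sector = ((angle + np.pi) / (2 * np.pi) * 8).astype(int) % 8
frames = [np.where(sector == i, surface, 0.0) for i in range(8)]
full = stack_segments(frames)
```

    Because the eight lit sectors partition the image, simple addition reconstructs the surface exactly; with real shadowed frames the overlap is imperfect and the sum only approximates the fully lit view.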

  18. Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.

    PubMed

    Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng

    2016-09-27

    The increasing number of 3D objects in various applications has raised the demand for effective and efficient 3D object retrieval methods, which have attracted extensive research efforts in recent years. Existing works mainly focus on how to extract features and conduct object matching. As applications multiply, 3D objects increasingly come from different domains, and in such circumstances cross-domain object retrieval becomes more important. To address this issue, we propose a multi-view object retrieval method using multi-scale topic models. In our method, multiple views are first extracted from each object, and dense visual features are then extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationships among these features with respect to varying topic numbers in the topic model. In this way, each object can be represented by a set of bag-of-topics vectors. To compare objects, we first cluster the basic topics from the two datasets and generate a common topic dictionary for a new representation. The two objects can then be aligned in the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval results and comparisons with existing methods demonstrate the effectiveness of the proposed method.
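    The topic-alignment step can be sketched as: cluster the basic topics from both datasets into a common dictionary, then re-express each object's bag-of-topics on that dictionary. The tiny k-means and nearest-centre projection below are a simplified stand-in for the paper's method, not its actual algorithm.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Tiny k-means (deterministic first-k initialization) used to cluster
    basic topic vectors into a common topic dictionary."""
    points = np.asarray(points, dtype=float)
    centers = points[:k].copy()
    for _ in range(iters):
        d2 = ((points[:, None] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def common_representation(topic_weights, topic_vectors, centers):
    """Project an object's bag-of-topics onto the shared dictionary:
    each basic topic's weight accumulates on its nearest common topic."""
    d2 = ((np.asarray(topic_vectors, dtype=float)[:, None] - centers[None]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    hist = np.zeros(len(centers))
    for w, j in zip(topic_weights, labels):
        hist[j] += w
    return hist / (hist.sum() + 1e-12)

# toy example: four basic topics from two datasets, forming two clusters
topics = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = kmeans(topics, 2)
hist = common_representation(np.array([1.0, 1.0, 0.0, 0.0]), topics, centers)
```

    Once every object is a histogram over the same common topics, any standard histogram distance can compare objects across the two datasets.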

  19. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is its limited field of view, so a single acquisition is insufficient to cover the whole geometry of the heart. This study proposes a novel approach to fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to ensure that the heart remains in the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field-of-view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvements in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
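    The idea of weighting overlapping voxels by a per-pixel confidence can be illustrated with a much-simplified 2D stand-in: here local variance replaces the paper's wavelet-derived likelihood, and the two views are blended with normalized weights. This is a sketch of the general principle, not the published algorithm.

```python
import numpy as np

def local_energy(img, radius=1):
    """Local variance as a crude per-pixel confidence measure
    (a stand-in for a wavelet-derived likelihood)."""
    p = np.pad(np.asarray(img, dtype=float), radius, mode="edge")
    k = 2 * radius + 1
    win = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(k) for dx in range(k)])
    return win.var(axis=0)

def fuse_views(a, b, eps=1e-9):
    """Blend overlapping pixels, weighting each view by its local energy."""
    wa, wb = local_energy(a) + eps, local_energy(b) + eps
    return (wa * a + wb * b) / (wa + wb)

rng = np.random.default_rng(1)
a = rng.random((16, 16))
b = rng.random((16, 16))
fused = fuse_views(a, b)
```

    Because the output is a convex combination at every pixel, the fused value always lies between the two input values; the view with more local detail simply dominates.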

  20. Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.

    PubMed

    Okoshi, T; Oshima, K

    1976-04-01

    In ordinary holography that reconstructs a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone, wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. Optimum design and information reduction techniques are also discussed.

  1. Berries on the Ground 2 3-D

    NASA Image and Video Library

    2004-02-12

    This 3-D anaglyph, from NASA's Mars Exploration Rover Opportunity, shows a microscopic image of soil featuring round, blueberry-shaped rock formations on the crater floor at Meridiani Planum, Mars. 3D glasses are necessary to view this image.

  2. Automatic image database generation from CAD for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.

    1993-06-01

    The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Owing to the various advantages of using CAD, it is becoming increasingly practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for the automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file system (NFS), the images can be stored directly in a database located on a file server. This paper presents image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.

  3. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system in which a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation, and presented via a wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, owing to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion-free viewing of the region appropriate to the user's current head pose is presented, and consideration is given to providing the user with stereo viewing generated from depth-map information derived using stereo-from-motion algorithms.
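    The resampling step (head pose in, distortion-free view out) can be sketched for an equirectangular panorama: cast a pinhole ray per output pixel, rotate it by the sensed pitch and yaw, and look up the panorama at the resulting longitude/latitude. Nearest-neighbour sampling and the particular rotation order are simplifying assumptions, not the paper's model.

```python
import numpy as np

def view_from_panorama(pano, yaw, pitch, fov_deg=90.0, out_size=(128, 128)):
    """Resample an equirectangular panorama (H x W) into a pinhole view
    centred on the user's head orientation (yaw, pitch in radians)."""
    H, W = pano.shape[:2]
    h, w = out_size
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)      # focal length in pixels
    xs = np.arange(w) - (w - 1) / 2
    ys = np.arange(h) - (h - 1) / 2
    u, v = np.meshgrid(xs, ys)
    # unit rays in the camera frame (x right, y down, z forward)
    d = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    y, z = cp * y - sp * z, sp * y + cp * z            # pitch about the x-axis
    x, z = cy * x + sy * z, -sy * x + cy * z           # yaw about the y-axis
    lon = np.arctan2(x, z)                             # -pi .. pi
    lat = np.arcsin(np.clip(y, -1, 1))                 # -pi/2 .. pi/2
    px = ((lon + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
    py = ((lat + np.pi / 2) / np.pi * (H - 1)).astype(int)
    return pano[py, px]                                # nearest-neighbour lookup

# panorama whose pixel value encodes its column (i.e. its longitude)
pano = np.tile(np.arange(360), (180, 1))
view = view_from_panorama(pano, yaw=0.0, pitch=0.0)
view_yaw = view_from_panorama(pano, yaw=np.pi / 2, pitch=0.0)
```

    Looking straight ahead samples the panorama's central column; turning the head by 90° shifts the sampled region a quarter of the way around, which is exactly the behaviour a head-tracked immersive viewer needs per frame.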

  4. Event Display for the Visualization of CMS Events

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.

    2011-12-01

    During the last year the CMS experiment engaged in consolidating its existing event display programs. The core of the new system is based on the Fireworks event display program, which was by design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation, simple plugins are registered into the system to convert EDM collections into their visual representations, which are then managed by the application. Full event navigation and filtering as well as collection-level filtering are supported. The same data-extraction principle will also apply when Fireworks eventually operates as a service within the full software framework.

  5. Fast ion transport during applied 3D magnetic perturbations on DIII-D

    DOE PAGES

    Van Zeeland, Michael A.; Ferraro, Nathaniel M.; Grierson, Brian A.; ...

    2015-06-26

    In this paper, measurements show fast ion losses correlated with applied three-dimensional (3D) fields in a variety of plasmas ranging from L-mode to resonant magnetic perturbation (RMP) edge localized mode (ELM) suppressed H-mode discharges. In DIII-D L-mode discharges with a slowly rotating $n=2$ magnetic perturbation, scintillator detector loss signals synchronized with the applied fields are observed to decay within one poloidal transit time after beam turn-off, indicating they arise predominantly from prompt loss orbits. Full orbit following using M3D-C1 calculations of the perturbed fields and kinetic profiles reproduces many features of the measured losses and points to the importance of the applied 3D field phase, relative to the beam injection location, in determining the overall impact on prompt beam ion loss. Modeling of these results includes a self-consistent calculation of the 3D perturbed beam ion birth profiles and scrape-off-layer ionization, a factor found to be essential to reproducing the experimental measurements. Extension of the simulations to full slowing-down timescales, including fueling and the effects of drag and pitch angle scattering, shows that the applied $n=3$ RMPs in ELM suppressed H-mode plasmas can induce a significant loss of energetic particles from the core. With the applied $n=3$ fields, up to 8.4% of the injected beam power is predicted to be lost, compared to 2.7% with axisymmetric fields only. These fast ions, originating from minor radii $\rho > 0.7$, are predicted to be primarily passing particles lost to the divertor region, consistent with wide field-of-view infrared periscope measurements of wall heating in $n=3$ RMP ELM suppressed plasmas. Edge fast ion $D_\alpha$ (FIDA) measurements also confirm a large change in the edge fast ion profile due to the $n=3$ fields, where the effect was isolated by using short 50 ms RMP-off periods during which ELM suppression was maintained yet the fast ion profile was allowed to recover. Finally, the role of resonances between fast ion drift motion and the applied 3D fields, in the context of selectively targeting regions of fast ion phase space, is also discussed.

  6. Multiview three-dimensional display with continuous motion parallax through planar aligned OLED microdisplays.

    PubMed

    Teng, Dongdong; Xiong, Yi; Liu, Lilin; Wang, Biao

    2015-03-09

    Existing multiview three-dimensional (3D) display technologies suffer from a discontinuous motion parallax problem, owing to the limited number of stereo-images presented to the corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that achieves continuous motion parallax by using a group of planar aligned OLED microdisplays. By blocking partial light-rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image assembled from two spatially complementary segments of adjacent stereo-images is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, resulting in continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, retaining full display resolution while keeping a thin optical structure, which offers great potential for portable or mobile 3D display applications. Experimentally, a prototype display system is demonstrated with 9 OLED microdisplays.

  7. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.

    PubMed

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-03-09

    Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for the efficient development of these tools, and that techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  8. Dense 3D Face Alignment from 2D Video for Real-Time Use

    PubMed Central

    Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo

    2018-01-01

    To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533

  9. Three-dimensional display of cortical anatomy and vasculature: MR angiography versus multimodality integration

    NASA Astrophysics Data System (ADS)

    Henri, Christopher J.; Pike, Gordon; Collins, D. Louis; Peters, Terence M.

    1990-07-01

    We present two methods for acquiring and viewing integrated 3-D images of cerebral vasculature and cortical anatomy. The aim of each technique is to provide the neurosurgeon or radiologist with a 3-D image containing information that cannot ordinarily be obtained from a single imaging modality. The first approach employs recent developments in MR, which is now capable of imaging flowing blood as well as static tissue. Here, true 3-D data are acquired and displayed using volume- or surface-rendering techniques. The second approach is based on the integration of x-ray projection angiograms and tomographic image data, allowing a composite image of anatomy and vasculature to be viewed in 3-D. This is accomplished by superimposing an angiographic stereo-pair onto volume-rendered images of either CT or MR data created from matched viewing geometries. The two approaches are outlined and compared. Results are presented for each technique and potential clinical applications are discussed.

  10. Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope

    PubMed Central

    Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok

    2017-01-01

    Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243

  11. Lithospheric layering in the North American craton revealed by including Short Period Constraints in Full Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2017-12-01

    Recent receiver function studies of the North American craton suggest the presence of significant layering within the cratonic lithosphere, with significant lateral variations in the depth of the velocity discontinuities. These structural boundaries have recently been confirmed using a transdimensional Markov chain Monte Carlo (TMCMC) approach, inverting surface wave dispersion data and converted phases simultaneously (Calò et al., 2016; Roy and Romanowicz, 2017). The lateral resolution of upper mantle structure can be improved with a high density of broadband seismic stations, or with a sparse network using full waveform inversion based on numerical wavefield computation methods such as the Spectral Element Method (SEM). However, inverting for discontinuities with strong topography, such as mid-lithospheric discontinuities (MLDs) or the lithosphere-asthenosphere boundary (LAB), presents challenges in an inversion framework, both computationally, owing to the short periods required, and in terms of the stability of the inversion. To overcome these limitations, and to improve resolution of layering in the upper mantle, we are developing a methodology that combines full waveform inversion tomography with information provided by short-period seismic observables. We have extended the 30 1D radially anisotropic shear velocity profiles of Calò et al. (2016) to several other stations, for which we used a recent shear velocity model (Clouzet et al., 2017) as a constraint in the modeling. These 1D profiles, including both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth), are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built by 1) homogenization of the layered 1D models and 2) interpolation between the 1D smooth profiles and the model of Clouzet et al. (2017), resulting in a smooth 3D starting model. Waveforms used in the inversion are filtered at periods longer than 30 s.
We use the SEM code "RegSEM" for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. The resulting volumetric velocity perturbations around the homogenized starting model are then added to the discontinuous 3D starting model by dehomogenizing the model. We present here the first results of such an approach for refining structure in the North American continent.

  12. Surface-Plasmon Holography with White-Light Illumination

    NASA Astrophysics Data System (ADS)

    Ozaki, Miyu; Kato, Jun-ichi; Kawata, Satoshi

    2011-04-01

    The three-dimensional (3D) displays recently appearing in electronics shops imitate the illusion of depth by overlapping two parallax 2D images, through either polarized glasses that viewers are required to wear or lenticular lenses fixed directly on the display. Holography, on the other hand, provides real 3D imaging, although usually limited to monochrome. The so-called rainbow holograms, mounted, for example, on credit cards, are also produced from parallax images that change color with viewing angle. We report on a holographic technique based on surface plasmons that can reconstruct true 3D color images, where the colors are reconstructed by satisfying the resonance conditions of surface plasmon polaritons for the individual wavelengths. Such real 3D color images can be viewed from any angle, just like the original object.

  13. Hartley 2 in 3-D

    NASA Image and Video Library

    2010-11-18

    This 3-D image shows the region where NASA's Deep Impact mission sent a probe into the surface of comet Tempel 1 in 2005. The picture was taken six years after the Deep Impact collision. 3D glasses are necessary to view this image.

  14. Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, Kyeoreh; Park, Jongchan; Park, Yongkeun

    2017-01-01

    Holographic displays generate realistic 3D images that can be viewed without any visual aids. They operate by generating carefully tailored light fields that replicate how humans see an actual environment. However, the realization of high-performance dynamic 3D holographic displays has been hindered by the capabilities of present wavefront modulator technology. In particular, spatial light modulators have a small diffraction angle range and a limited pixel count, restricting the viewing angle and image size of a holographic 3D display. Here, we present an alternative method to generate dynamic 3D images by controlling volume speckle fields, significantly enhancing image definition. We use this approach to demonstrate a dynamic display of micrometre-sized optical foci in a volume of 8 mm × 8 mm × 20 mm.

  15. Simulator sickness analysis of 3D video viewing on passive 3D TV

    NASA Astrophysics Data System (ADS)

    Brunnström, K.; Wang, K.; Andrén, B.

    2013-03-01

    The MPEG 3DV project is working on the next-generation video encoding standard, and as part of this process a call for proposals of encoding algorithms was issued. To evaluate these algorithms, a large-scale subjective test was performed involving laboratories all over the world. For the participating laboratories it was optional to administer a slightly modified Simulator Sickness Questionnaire (SSQ) from Kennedy et al. (1993) before and after the test. Here we report the results from one laboratory (Acreo) located in Sweden. The videos were shown on a 46-inch film-pattern-retarder 3D TV, with viewers using polarized passive eyeglasses to view the stereoscopic 3D video content. There were 68 viewers participating in this investigation, ranging in age from 16 to 72, with one third female. The questionnaire was filled in before and after the test, with a viewing time ranging from 30 min to about one and a half hours, comparable to a feature-length movie. The SSQ consists of 16 different symptoms that have been identified as important indicators of simulator sickness. When analyzing the individual symptoms, it was found that fatigue, eye-strain, difficulty focusing and difficulty concentrating were significantly worse after the test than before. The SSQ was also analyzed according to the model suggested by Kennedy et al. (1993). All in all, this investigation shows a statistically significant increase in symptoms after viewing 3D video, especially those related to the visual or oculomotor system.

  16. The effects of absence of stereopsis on performance of a simulated surgical task in two-dimensional and three-dimensional viewing conditions

    PubMed Central

    Bloch, Edward; Uddin, Nabil; Gannon, Laura; Rantell, Khadija; Jain, Saurabh

    2015-01-01

    Background Stereopsis is believed to be advantageous for surgical tasks that require precise hand-eye coordination. We investigated the effects of short-term and long-term absence of stereopsis on motor task performance in three-dimensional (3D) and two-dimensional (2D) viewing conditions. Methods 30 participants with normal stereopsis and 15 participants with absent stereopsis performed a simulated surgical task both in free space under direct vision (3D) and via a monitor (2D), with both eyes open and one eye covered in each condition. Results The stereo-normal group scored higher, on average, than the stereo-absent group with both eyes open under direct vision (p<0.001). Both groups performed comparably in monocular and binocular monitor viewing conditions (p=0.579). Conclusions High-grade stereopsis confers an advantage when performing a fine motor task under direct vision. However, stereopsis does not appear advantageous to task performance under 2D viewing conditions, such as in video-assisted surgery. PMID:25185439

  17. 3. INTERIOR VIEW, SHOWING JET ENGINE TEST STAND. WrightPatterson ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. INTERIOR VIEW, SHOWING JET ENGINE TEST STAND. - Wright-Patterson Air Force Base, Area B, Building 71A, Propulsion Research Laboratory, Seventh Street between D & G Streets, Dayton, Montgomery County, OH

  18. Study of blur discrimination for 3D stereo viewing

    NASA Astrophysics Data System (ADS)

    Subedar, Mahesh; Karam, Lina J.

    2014-03-01

    Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, in which both eyes observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as the disparity value is varied. This indicates that binocular disparity does not affect blur discrimination thresholds, and that models developed for 2D blur discrimination can be extended to stereoscopic 3D. We also present a fit of the Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
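    A Weber-model fit of this kind can, in its simplest linear form, be written as threshold = a · (reference blur) + b, with a the Weber fraction and b the baseline threshold. The least-squares sketch below uses synthetic data, not the paper's measurements, and the linear form is an assumption about the model variant used.

```python
import numpy as np

def fit_weber(ref_blur, thresholds):
    """Least-squares fit of a linear Weber law: threshold = a * ref + b,
    where a is the Weber fraction and b the baseline threshold."""
    A = np.vstack([ref_blur, np.ones_like(ref_blur)]).T
    (a, b), *_ = np.linalg.lstsq(A, thresholds, rcond=None)
    return a, b

# synthetic thresholds generated from a known Weber fraction and baseline
ref = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # reference blur levels (arcmin)
thr = 0.2 * ref + 0.3                          # noiseless synthetic thresholds
a, b = fit_weber(ref, thr)
```

    With real subjective data the fit would of course carry residual noise; comparing the fitted a and b across disparity conditions is one way to test the paper's finding that disparity leaves the thresholds unchanged.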

  19. Why laparoscopists may opt for three-dimensional view: a summary of the full HTA report on 3D versus 2D laparoscopy by S.I.C.E. (Società Italiana di Chirurgia Endoscopica e Nuove Tecnologie).

    PubMed

    Vettoretto, Nereo; Foglia, Emanuela; Ferrario, Lucrezia; Arezzo, Alberto; Cirocchi, Roberto; Cocorullo, Gianfranco; Currò, Giuseppe; Marchi, Domenico; Portale, Giuseppe; Gerardi, Chiara; Nocco, Umberto; Tringali, Michele; Anania, Gabriele; Piccoli, Micaela; Silecchia, Gianfranco; Morino, Mario; Valeri, Andrea; Lettieri, Emanuele

    2018-06-01

    Three-dimensional view in laparoscopic general, gynaecologic and urologic surgery is an efficient, safe and sustainable innovation. The present paper is an extract from a full health technology assessment (HTA) report on three-dimensional vision technology compared with standard two-dimensional laparoscopic systems. An HTA approach was implemented in order to investigate all the economic, social, ethical and organisational implications related to the adoption of the innovative three-dimensional view. With the support of a multi-disciplinary team, composed of eight experts working in Italian hospitals and universities, qualitative and quantitative data were collected by means of literature evidence, a validated questionnaire and self-reported interviews, applying a final MCDA quantitative approach and considering the dimensions resulting from the EUnetHTA Core Model. From a systematic search of the literature, we retrieved the following studies: 9 on general surgery and 35 on gynaecology and urology, both in the clinical setting. For the simulated setting, we included 8 studies regarding pitfalls and drawbacks, 44 on teaching, 12 on surgeons' confidence and comfort, and 34 on surgeons' performance. Three-dimensional laparoscopy was shown to have advantages for both patients and surgeons, and is confirmed to be a safe, efficacious and sustainable vision technology. The objective of the present paper, under the patronage of the Italian Society of Endoscopic Surgery, was achieved: a scientific report based on an HTA approach has been produced that may be placed in the hands of surgeons and used to support the decision-making processes of health providers.

  20. 3. PERSPECTIVE VIEW OF HOUSE FROM SOUTHEAST, PRIOR TO THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. PERSPECTIVE VIEW OF HOUSE FROM SOUTHEAST, PRIOR TO THE ALTERATIONS OF 1908, SHOWING ADDITION OF FULL LATTICE WORK SCREENING FOUNDATIONS - Ralph M. Munroe House, 3485 Main Highway, Coconut Grove, Miami, Miami-Dade County, FL

  1. Automatic depth grading tool to successfully adapt stereoscopic 3D content to digital cinema and home viewing environments

    NASA Astrophysics Data System (ADS)

    Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry

    2013-03-01

    To ensure an immersive yet comfortable experience, significant work is required during post-production to adapt stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize the depth across shots. On the other hand, to prevent edge violation, specific re-convergence is required and, depending on the viewing conditions, floating windows need to be positioned. In order to simplify this time-consuming work, we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is automatically evaluated. This point of interest is used for depth matching, i.e. to position the objects of interest of consecutive shots in the same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge violation, hyper-convergence and hyper-divergence. Floating windows are also positioned automatically. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.
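    The horizontal image translation (HIT) step described above can be sketched in a few lines: laterally shifting one view of a stereo pair adds a constant to every disparity, moving the whole scene forward or backward relative to the screen plane. A minimal NumPy illustration (a generic sketch, not the authors' tool):

```python
import numpy as np

def hit_reconverge(right_view, shift_px):
    """Apply horizontal image translation to one view of a stereo pair.
    Shifting the right view by +shift_px columns adds shift_px to every
    disparity, repositioning the scene relative to the screen plane.
    Vacated columns are blacked out, which is one reason floating
    windows are needed near the frame edges."""
    shifted = np.zeros_like(right_view)
    if shift_px >= 0:
        shifted[:, shift_px:] = right_view[:, :right_view.shape[1] - shift_px]
    else:
        shifted[:, :shift_px] = right_view[:, -shift_px:]
    return shifted

# Toy 4x6 'image' with a feature at column 2
img = np.zeros((4, 6))
img[:, 2] = 1.0
out = hit_reconverge(img, 2)  # feature moves to column 4
```

In the paper's tool, the per-shot shift would come from the disparity map and the detected stereo point of interest rather than being chosen by hand.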

  2. Automated 3D architecture reconstruction from photogrammetric structure-and-motion: A case study of the One Pillar Pagoda, Hanoi, Vietnam

    NASA Astrophysics Data System (ADS)

    To, T.; Nguyen, D.; Tran, G.

    2015-04-01

    The heritage system of Vietnam has declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning and reasonable investment. In the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used. With the potential for high resolution, low cost, a large field of view, ease of use, rapidity and completeness, the derivation of 3D metric information from Structure-and-Motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with the VisualSFM, CMPMVS (Multi-View Reconstruction) and SURE (Photogrammetric Surface Reconstruction from Imagery) software packages. The final result is a scaled 3D model of the One Pillar Pagoda, displayed in different views in MeshLab.

  3. Distributed rendering for multiview parallax displays

    NASA Astrophysics Data System (ADS)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
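    The starting point for rendering "various perspective views of the scene" is one laterally offset camera per view. A minimal sketch (plain NumPy, independent of the Chromium pipeline used in the paper) of evenly spaced camera positions across the eye box; in practice each offset camera would also use an off-axis (sheared) frustum aimed at the shared screen plane:

```python
import numpy as np

def view_offsets(n_views, eye_box_width):
    """Lateral camera offsets (e.g. in mm) for an n_views-view parallax
    display: cameras evenly spaced across the eye box, centred on zero."""
    return np.linspace(-eye_box_width / 2.0, eye_box_width / 2.0, n_views)

offsets = view_offsets(9, 400.0)  # hypothetical 9 views over a 400 mm eye box
```

Each offset then becomes one rendering task to distribute across the PC cluster, whether the views are multiplexed optically (lenticular) or in software (parallax barrier).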

  4. Treatment envelope evaluation in transcranial magnetic resonance-guided focused ultrasound utilizing 3D MR thermometry

    PubMed Central

    2014-01-01

    Background Current clinical targets for transcranial magnetic resonance-guided focused ultrasound (tcMRgFUS) are all located close to the geometric center of the skull convexity, which minimizes challenges related to focusing the ultrasound through the skull bone. Non-central targets will have to be reached to treat a wider variety of neurological disorders and solid tumors. Treatment envelope studies utilizing two-dimensional (2D) magnetic resonance (MR) thermometry have previously been performed to determine the regions in which therapeutic levels of FUS can currently be delivered. Since 2D MR thermometry was used, very limited information about unintended heating in near-field tissue/bone interfaces could be deduced. Methods In this paper, we present a proof-of-concept treatment envelope study with three-dimensional (3D) MR thermometry monitoring of FUS heatings performed in a phantom and a lamb model. While the moderate-sized transducer used was not designed for transcranial geometries, the 3D temperature maps enable monitoring of the entire sonication field of view, including both the focal spot and near-field tissue/bone interfaces, for full characterization of all heating that may occur. 3D MR thermometry is achieved by a combination of k-space subsampling and a previously described temporally constrained reconstruction method. Results We present two different types of treatment envelopes. The first is based only on the focal spot heating—the type that can be derived from 2D MR thermometry. The second type is based on the relative near-field heating and is calculated as the ratio between the focal spot heating and the near-field heating. This utilizes the full 3D MR thermometry data achieved in this study. Conclusions It is shown that 3D MR thermometry can be used to improve the safety assessment in treatment envelope evaluations. 
Using a non-optimal transducer, it is shown that some regions where therapeutic levels of FUS can be delivered, as suggested by the first type of envelope, are not necessarily safely treated due to the amount of unintended near-field heating occurring. The results presented in this study highlight the need for 3D MR thermometry in tcMRgFUS. PMID:25343028
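    MR thermometry of this kind is conventionally based on the proton-resonance-frequency (PRF) shift, where the temperature change is proportional to the phase difference between dynamic images. A hedged sketch of that standard conversion (the constants and parameter values below are illustrative assumptions; the paper's subsampled 3D reconstruction is far more involved):

```python
import math

def prf_delta_t(dphi_rad, b0_tesla=3.0, te_s=0.01,
                alpha_per_degc=-0.01e-6, gamma_hz_per_t=42.576e6):
    """Standard PRF-shift model: dT = dphi / (2*pi*gamma*alpha*B0*TE).
    alpha is the PRF thermal coefficient (about -0.01 ppm/degC); the
    field strength and echo time here are placeholder values."""
    return dphi_rad / (2 * math.pi * gamma_hz_per_t * alpha_per_degc
                       * b0_tesla * te_s)
```

Because alpha is negative, heating produces a negative phase change, which the division maps back to a positive temperature rise.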

  5. 77 FR 7526 - Interpretation of Protection System Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ... reh'g & compliance, 117 FERC ] 61,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C... opportunity to view and/or print the contents of this document via the Internet through FERC's Home Page... available on eLibrary in PDF and Microsoft Word format for viewing, printing, and/or downloading. To access...

  6. 76 FR 58101 - Electric Reliability Organization Interpretation of Transmission Operations Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-20

    ... on reh'g & compliance, 117 FERC ] 61,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D... persons an opportunity to view and/or print the contents of this document via the Internet through FERC's... document is available on eLibrary in PDF and Microsoft Word format for viewing, printing, and/or...

  7. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets in the battle against infrared-guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding-pixel determination between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on flare trajectory estimation performance by simulation. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image-plane coordinates of the flare in both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we use two sources of error: one models the uncertainty in the determination of the camera view vectors, i.e. the orientations of the cameras are measured with noise; the second models the imperfections of corresponding-pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated by triangulation using the corresponding pixel indices, the view vectors and the FOV of the cameras. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation-error performance is found for the given aircraft and flare trajectories.
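    The triangulation step can be illustrated with the standard linear (DLT) two-view method: each pixel observation contributes two rows to a homogeneous system whose least-squares null vector is the 3D point. The projection matrices below are generic stand-ins, not the simulated sensors from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two pixel observations.
    P1, P2 are 3x4 camera projection matrices; x1, x2 are (u, v) pixels."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector = last right-singular vector
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenize

# Two toy cameras: identity pose, and a 1-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With the paper's noise sources added to the view vectors and pixel correspondences, the residual of this system is what degrades as the camera baseline geometry worsens.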

  8. Magellan 3D perspective of Venus surface in western Eistla Regio

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Magellan synthetic aperture radar data was used to create this three-dimensional (3D) perspective view of Venus' western Eistla Regio. This viewpoint is located 1,310 kilometers (812 miles) southwest of Gula Mons at an elevation of 0.78 kilometers (0.48 miles). The view is to the northeast with Gula Mons appearing on the horizon. Gula Mons, a 3 kilometer (1.86 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude. The impact crater Cunitz, named for the astronomer and mathematician Maria Cunitz, is visible in the center of the image. The crater is 48.5 kilometers (30 miles) in diameter and is 215 kilometers (133 miles) from the viewer's position. Magellan synthetic aperture radar data is combined with radar altimetry to develop a 3D map of the surface. Rays cast in a computer intersect the surface to create a 3D view. Simulated color and a digital elevation map developed by the United States (U.S.) Geological Survey are used to enhance

  9. America's National Parks 3d (3)

    Atmospheric Science Data Center

    2016-12-30

    article title:  America's National Parks Viewed in 3D by NASA's MISR (Anaglyph 3)   ... Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle ...

  10. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  11. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
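    The model-fitting idea can be sketched with synthetic data: PCA compresses the pre-beam 4D-MRI deformation vector fields (DVFs) into a few components, and a least-squares fit of the component weights to the motion observed during treatment yields a full 3D DVF. All arrays below are synthetic placeholders, and for brevity the fit uses the full vector, whereas the paper fits only the in-slice motion seen on the fast 2D cine images:

```python
import numpy as np

# 10 respiratory phases x flattened DVF (synthetic stand-in for 4D-MRI data)
rng = np.random.default_rng(0)
dvfs = rng.normal(size=(10, 300))
mean = dvfs.mean(axis=0)

# PCA via SVD of the mean-centred DVFs; keep 2 principal components
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
components = Vt[:2]

# 'Observed' motion generated from known weights, then recovered by
# least squares -- the per-timepoint fitting step of the framework
true_w = np.array([1.5, -0.7])
observed = mean + true_w @ components
w, *_ = np.linalg.lstsq(components.T, observed - mean, rcond=None)
dvf_3d = mean + w @ components   # reconstructed full 3D DVF
```

The temporal resolution then comes from how fast the 2D images (and hence the weight fits) can be acquired, here 476 ms per DVF.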

  12. The « 3-D donut » electrostatic analyzer for millisecond timescale electron measurements in the solar wind

    NASA Astrophysics Data System (ADS)

    Berthomier, M.; Techer, J. D.

    2017-12-01

    Understanding electron acceleration mechanisms in planetary magnetospheres or energy dissipation at electron scale in the solar wind requires fast measurement of electron distribution functions on a millisecond time scale. Yet since the beginning of the space age, the instantaneous field of view of plasma spectrometers has been limited to a few degrees around their viewing plane. In Earth's magnetosphere, the NASA MMS spacecraft use 8 state-of-the-art sensor heads to reach a time resolution of 30 milliseconds. This costly strategy in terms of mass and power consumption can hardly be extended to the next generation of constellation missions that would use a large number of small satellites. In the solar wind, using the same sensor heads, the ESA THOR mission is expected to reach the 5 ms timescale in the thermal energy range, up to 100 eV. We present the « 3-D donut » electrostatic analyzer concept, which can change the game for future space missions because of its instantaneous hemispheric field of view. A set of 2 sensors is sufficient to cover all directions over a wide range of energy, e.g. up to 1-2 keV in the solar wind, which covers both thermal and supra-thermal particles. In addition, its high sensitivity compared to state-of-the-art instruments opens the possibility of millisecond time scale measurements in space plasmas. With CNES support, we developed a high-fidelity prototype (a quarter of the full « 3-D donut » analyzer) that includes all electronic sub-systems. The prototype weighs less than a kilogram. The key building block of the instrument is an imaging detector that uses EASIC, a low-power front-end electronics that will fly on the ESA Solar Orbiter and NASA Parker Solar Probe missions.

  13. A microscale three-dimensional urban energy balance model for studying surface temperatures

    NASA Astrophysics Data System (ADS)

    Krayenhoff, E. Scott; Voogt, James A.

    2007-06-01

    A microscale three-dimensional (3-D) urban energy balance model, Temperatures of Urban Facets in 3-D (TUF-3D), is developed to predict urban surface temperatures for a variety of surface geometries and properties, weather conditions, and solar angles. The surface is composed of plane-parallel facets: roofs, walls, and streets, which are further sub-divided into identical square patches, resulting in a 3-D raster-type model geometry. The model code is structured into radiation, conduction and convection sub-models. The radiation sub-model uses the radiosity approach and accounts for multiple reflections and shading of direct solar radiation. Conduction is solved by finite differencing of the heat conduction equation, and convection is modelled by empirically relating patch heat transfer coefficients to the momentum forcing and the building morphology. The radiation and conduction sub-models are tested individually against measurements, and the complete model is tested against full-scale urban surface temperature and energy balance observations. Modelled surface temperatures perform well at both the facet-average and the sub-facet scales given the precision of the observations and the uncertainties in the model inputs. The model has several potential applications, such as the calculation of radiative loads, and the investigation of effective thermal anisotropy (when combined with a sensor-view model).
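    The conduction sub-model's approach, explicit finite differencing of the heat conduction equation, can be sketched in one dimension, normal to a facet surface. Material values and boundary conditions below are placeholders, not TUF-3D's:

```python
import numpy as np

def conduction_step(T, dt, dx, kappa):
    """One explicit finite-difference step of dT/dt = kappa * d2T/dx2,
    with both boundary nodes held at fixed temperatures (Dirichlet).
    Stable when kappa * dt / dx**2 <= 0.5."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + kappa * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

T = np.zeros(11)
T[0] = 300.0                 # heated surface node (illustrative, in K)
for _ in range(2000):        # 200 s of simulated time
    T = conduction_step(T, dt=0.1, dx=0.01, kappa=2e-5)
```

In the full model this substrate-normal solve is coupled, at each patch, to the radiation and convection sub-models through the surface energy balance.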

  14. 3. SOUTH SIDE OF BUILDING 724. VIEW TO NORTH. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. SOUTH SIDE OF BUILDING 724. VIEW TO NORTH. - Rocky Mountain Arsenal, Pesticide Incinerator-Precipitator, 260 feet South of December Seventh Avenue; 1840 feet East of D Street, Commerce City, Adams County, CO

  15. 3. BUILDING 321. VIEW TO SOUTHEAST. Rocky Mountain Arsenal, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. BUILDING 321. VIEW TO SOUTHEAST. - Rocky Mountain Arsenal, Boiler Plant-Central Gas Heat Plant, 1022 feet South of December Seventh Avenue; 525 feet West of D Street, Commerce City, Adams County, CO

  16. 3. INTERIOR OF BUILDING 313, SHOWING LABORATORY. VIEW TO SOUTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. INTERIOR OF BUILDING 313, SHOWING LABORATORY. VIEW TO SOUTHEAST. - Rocky Mountain Arsenal, Laboratory Building, 510 feet South of December Seventh Avenue; 175 feet East of D Street, Commerce City, Adams County, CO

  17. 3. FIRST-FLOOR LABORATORY. VIEW TO SOUTHWEST. Rocky Mountain Arsenal, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. FIRST-FLOOR LABORATORY. VIEW TO SOUTHWEST. - Rocky Mountain Arsenal, Administration-Laboratory- Change House-Bomb Rail, 420 feet South of December Seventh Avenue; 530 feet West of D Street, Commerce City, Adams County, CO

  18. 3. GROUND VIEW OF EXTERIOR STAIRWAY ENTRANCE FACING SOUTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. GROUND VIEW OF EXTERIOR STAIRWAY ENTRANCE FACING SOUTHEAST. - U.S. Naval Base, Pearl Harbor, Signal Tower, Corner of Seventh Street & Avenue D east of Drydock No. 1, Pearl City, Honolulu County, HI

  19. Reduced-thickness backlighter for autostereoscopic display and display using the backlighter

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Jesse B (Inventor); Gruhlke, Russell W (Inventor)

    1999-01-01

    A reduced-thickness backlighter for an autostereoscopic display is disclosed having a lightguide and at least one light source parallel to an edge of the lightguide so as to be substantially coplanar with the lightguide. The lightguide is provided with a first surface which has a plurality of reflective linear regions, such as elongated grooves or glossy lines, parallel to the illuminated edge of the lightguide. Preferably the lightguide further has a second surface which has a plurality of lenticular lenses for reimaging the reflected light from the linear regions into a series of thin vertical lines outside the guide. Because of the reduced thickness of the backlighter system, autostereoscopic viewing is enabled in applications requiring thin backlighter systems. In addition to taking up less space, the reduced-thickness backlighter uses fewer lamps and less power. For accommodating 2-D applications, a 2-D diffuser plate or a 2-D lightguide parallel to the 3-D backlighter is disclosed for switching back and forth between 3-D viewing and 2-D viewing.

  20. Evolution of stereoscopic imaging in surgery and recent advances

    PubMed Central

    Schwab, Katie; Smith, Ralph; Brown, Vanessa; Whyte, Martin; Jourdan, Iain

    2017-01-01

    In the late 1980s the first laparoscopic cholecystectomies were performed, prompting a sudden rise in technological innovation as the benefits and feasibility of minimal access surgery became recognised. Monocular laparoscopes provided only two-dimensional (2D) viewing with reduced depth perception and contributed to an extended learning curve. Attention turned to producing a usable three-dimensional (3D) endoscopic view for surgeons, utilising different technologies for image capture and image projection. These evolving visual systems have been assessed in various research environments with conflicting outcomes on success and usability, and no overall consensus on their benefit. This review article aims to explain the different types of technologies, summarise the published literature evaluating 3D vs 2D laparoscopy, explain the conflicting outcomes, and discuss the current consensus view. PMID:28874957

  1. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
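    The anaglyph construction itself is tiny: take the red channel from the left view and the green and blue channels from the right view, so coloured glasses route one image to each eye. A generic NumPy sketch of that channel combination (AViz itself is C++; the arrays here are placeholders):

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph from two RGB views (values in [0, 1]):
    red channel from the left image, green and blue from the right."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Toy 2x2 'views' -- in practice these are renders from two slightly
# displaced camera positions of the same atomistic scene
left = np.zeros((2, 2, 3))
left[..., 0] = 0.8
right = np.zeros((2, 2, 3))
right[..., 1] = 0.5
img = anaglyph(left, right)
```

The slight horizontal displacement between the two rendered views is what carries the depth; the channel merge only multiplexes them onto one screen.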

  2. Development of a volumetric projection technique for the digital evaluation of field of view.

    PubMed

    Marshall, Russell; Summerskill, Stephen; Cook, Sharon

    2013-01-01

    Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
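    The 2D regulatory construction the paper contrasts with its volumetric approach amounts to projecting sight rays from the driver's eye point through window (or mirror) boundary points onto the ground plane. A minimal sketch with hypothetical eye and window coordinates (not the paper's vehicle data):

```python
import numpy as np

def ground_projection(eye, corners):
    """Intersect rays from the eye through each boundary point with the
    ground plane z = 0, giving the 2D projected field-of-view boundary.
    Points are (x, y, z) in metres; all values here are illustrative."""
    out = []
    for c in corners:
        d = c - eye
        s = -eye[2] / d[2]        # ray parameter where z reaches 0
        out.append(eye + s * d)
    return np.array(out)

eye = np.array([0.0, 0.0, 1.2])                         # driver eye point
corners = np.array([[0.5, -0.4, 1.0], [0.5, 0.4, 1.0]]) # window edge points
ground = ground_projection(eye, corners)
```

The volumetric tool generalises this by keeping the full 3D bundle of rays (the vision volume) rather than only its footprint on the ground.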

  3. Patient-specific quality assurance for the delivery of (60)Co intensity modulated radiation therapy subject to a 0.35-T lateral magnetic field.

    PubMed

    Li, H Harold; Rodriguez, Vivian L; Green, Olga L; Hu, Yanle; Kashani, Rojano; Wooten, H Omar; Yang, Deshan; Mutic, Sasa

    2015-01-01

    This work describes a patient-specific dosimetry quality assurance (QA) program for intensity modulated radiation therapy (IMRT) using ViewRay, the first commercial magnetic resonance imaging-guided RT device. The program consisted of: (1) a 1-dimensional multipoint ionization chamber measurement using a customized 15-cm(3) cube-shaped phantom; (2) 2-dimensional (2D) radiographic film measurement using a 30- × 30- × 20-cm(3) phantom with multiple inserted ionization chambers; (3) quasi-3D diode array (ArcCHECK) measurement with a centrally inserted ionization chamber; (4) 2D fluence verification using machine delivery log files; and (5) 3D Monte Carlo (MC) dose reconstruction with machine delivery files and phantom CT. Ionization chamber measurements agreed well with treatment planning system (TPS)-computed doses in all phantom geometries where the mean ± SD difference was 0.0% ± 1.3% (n=102; range, -3.0%-2.9%). Film measurements also showed excellent agreement with the TPS-computed 2D dose distributions where the mean passing rate using 3% relative/3 mm gamma criteria was 94.6% ± 3.4% (n=30; range, 87.4%-100%). For ArcCHECK measurements, the mean ± SD passing rate using 3% relative/3 mm gamma criteria was 98.9% ± 1.1% (n=34; range, 95.8%-100%). 2D fluence maps with a resolution of 1 × 1 mm(2) showed 100% passing rates for all plan deliveries (n=34). The MC reconstructed doses to the phantom agreed well with planned 3D doses where the mean passing rate using 3% absolute/3 mm gamma criteria was 99.0% ± 1.0% (n=18; range, 97.0%-100%), demonstrating the feasibility of evaluating the QA results in the patient geometry. We developed a dosimetry program for ViewRay's patient-specific IMRT QA. The methodology will be useful for other ViewRay users. The QA results presented here can assist the RT community to establish appropriate tolerance and action limits for ViewRay's IMRT QA. Copyright © 2015 Elsevier Inc. All rights reserved.
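    The 3%/3 mm gamma criterion behind these passing rates can be sketched for a 1-D profile: for each measured point, gamma is the minimum combined dose-difference/distance metric over all reference points, and a point passes when gamma <= 1. This is a textbook global-gamma sketch, not ViewRay's QA software:

```python
import numpy as np

def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma index, dose tolerance relative to the reference maximum
    (3%) and distance-to-agreement in the units of x (3 mm)."""
    dmax = ref.max()
    g = np.empty_like(meas)
    for i, (xi, mi) in enumerate(zip(x, meas)):
        dd = (mi - ref) / (dose_tol * dmax)   # normalized dose differences
        dx = (xi - x) / dist_tol              # normalized distances
        g[i] = np.sqrt(dd**2 + dx**2).min()
    return g

x = np.arange(0.0, 10.0, 1.0)            # positions in mm
ref = np.exp(-0.5 * ((x - 5) / 2)**2)    # reference dose profile
meas = ref * 1.02                        # measured: 2% high everywhere
gamma = gamma_1d(ref, meas, x)
pass_rate = np.mean(gamma <= 1.0)
```

A uniform 2% dose error stays inside the 3% tolerance, so every point passes; the paper's "3% absolute" variant normalizes to the local planned dose instead of the maximum.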

  4. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detection and rejection usually degrades the system performance. Instead, our system searches for the heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-process to estimate the locations of candidate targets is illustrated to speed-up the searching process over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.
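    Casting a candidate 3-D cube into each camera view is a plain pinhole projection, x ∝ K[R|t]X. A generic sketch with made-up intrinsics (not the paper's calibrated surveillance cameras):

```python
import numpy as np

def project_points(K, R, t, pts3d):
    """Project Nx3 world points into a pinhole camera with intrinsics K,
    rotation R and translation t, returning Nx2 pixel coordinates."""
    Xc = (R @ pts3d.T).T + t          # world -> camera coordinates
    uvw = (K @ Xc.T).T                # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Hypothetical intrinsics: 800 px focal length, 640x480 principal point
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Corners of a 0.2 m sliding cube, 2 m in front of the camera
cube = np.array([[x, y, z] for x in (0.0, 0.2)
                 for y in (0.0, 0.2) for z in (2.0, 2.2)])
uv = project_points(K, np.eye(3), np.zeros(3), cube)
```

The face detector is then evaluated on the image region bounded by the projected corners in each view, which is what lets the search run in 3-D space rather than per image.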

  5. Limited angle C-arm tomosynthesis reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices through it from a series of angular projection views. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. Four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. With the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact and portable and can avoid moving patients, it has been investigated for clinical applications ranging from tumor surgery to interventional radiology, making its evaluation important.
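    Of the four algorithms, point-by-point back projection (BP) is the simplest to sketch: each parallel-beam projection is smeared back across the image along its acquisition angle and the results are averaged. A minimal nearest-bin sketch over a limited ±20° arc (illustrative geometry, not the authors' C-arm model):

```python
import numpy as np

def back_project(sinogram, angles, n):
    """Unfiltered back projection of parallel-beam projections onto an
    n x n image; nearest-detector-bin lookup, no interpolation."""
    recon = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    for proj, theta in zip(sinogram, angles):
        # detector coordinate of every pixel for this view angle
        s = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        idx = np.clip(np.round(s).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon / len(angles)

n = 21
angles = np.deg2rad(np.linspace(-20, 20, 25))  # limited 40-degree arc
sino = np.zeros((len(angles), n))
sino[:, 10] = 1.0                              # point object at the centre
img = back_project(sino, angles, n)
```

The characteristic limited-angle artifact is visible even here: the point is recovered sharply across the scanned directions but smeared along the unscanned ones, which is what FBP, SART and MLEM each mitigate differently.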

  6. 3D isotropic T2-weighted fast spin echo (VISTA) versus 2D T2-weighted fast spin echo in evaluation of the calcaneofibular ligament in the oblique coronal plane.

    PubMed

    Park, H J; Lee, S Y; Choi, Y J; Hong, H P; Park, S J; Park, J H; Kim, E

    2017-02-01

    To investigate whether the image quality of three-dimensional (3D) volume isotropic fast spin echo acquisition (VISTA) magnetic resonance imaging (MRI) of the calcaneofibular ligament (CFL) view is comparable to that of 2D fast spin echo T2-weighted images (2D T2 FSE) for the evaluation of the CFL, and whether 3D VISTA can replace 2D T2 FSE for the evaluation of CFL injuries. This retrospective study included 76 patients who underwent ankle MRI with CFL views of both 2D T2 FSE MRI and 3D VISTA. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of both techniques were measured. The anatomical identification score and diagnostic performances were evaluated by two readers independently. The diagnostic performances of 3D VISTA and 2D T2 FSE were analysed by sensitivity, specificity, and accuracy for diagnosing CFL injury with reference standards of surgically or clinically confirmed diagnoses. Surgical correlation was performed in 29% of the patients, and clinical examination was used in those who did not have surgery (71%). The SNRs and CNRs of 3D VISTA were significantly higher than those of 2D T2 FSE. The anatomical identification scores on 3D VISTA were inferior to those on 2D T2 FSE, and the differences were statistically significant (p<0.05). There were no significant differences in diagnostic performance between the two sequences when diagnoses were classified as normal or abnormal. Although the image quality of 3D VISTA MRI of the CFL view is not equal to that of 2D T2 FSE for the anatomical evaluation of CFL, 3D VISTA has a diagnostic performance comparable to that of 2D T2 FSE for the diagnosis of CFL injuries. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
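    The SNR and CNR comparisons reduce to simple ROI statistics. A hedged sketch with made-up pixel values, using one common ROI-based definition (papers differ in the exact estimator, e.g. in the noise-SD correction factor):

```python
import numpy as np

def snr(roi, noise):
    """Signal-to-noise ratio: mean ROI signal over background-noise SD."""
    return roi.mean() / noise.std()

def cnr(roi_a, roi_b, noise):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(roi_a.mean() - roi_b.mean()) / noise.std()

# Hypothetical pixel samples (arbitrary units), not study data
ligament = np.array([50.0, 52.0, 48.0, 50.0])
fluid = np.array([200.0, 202.0, 198.0, 200.0])
background = np.array([1.0, -1.0, 2.0, -2.0])
```

On this definition, the study's finding that 3D VISTA had higher SNR and CNR means larger values of both ratios for the same ROI placements.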

  7. VR versus LF: towards the limitation-free 3D

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Kara, Peter A.

    2017-06-01

The evolution of 3D technologies shows a cyclical learning curve with a series of hypes and dead ends, with mistakes and consequences. 3D images contain significantly more information than the corresponding 2D ones, so 3D display systems must be built on more pixels or higher-speed components. For true 3D, this factor is on the order of 100x, which is a real technological challenge. If it is not met, the capabilities of 3D systems are compromised: headgear is needed, viewers must be positioned or tracked, devices become single-user, and parallax or other depth cues are missing. The temptation is always there: why provide all the information, rather than only what a person absorbs at that moment (subjective versus objective visualization)? Virtual Reality (VR) glasses have been around for more than two decades. With the latest technical improvements, VR became the next hype. 3D immersion was added as a new phenomenon; however, VR remains an isolated experience and still requires headgear and a controlled environment. Augmented Reality (AR) is different in this sense. Will the VR/AR hype with headgear be a dead end? While VR headsets may sell better than smart glasses or 3D TV glasses, using the technology may require a set of behavioral changes that the majority of people do not want to make. Displays and technologies that restrict viewers or cause any discomfort will not be accepted in the long term. The next wave of 3D is forecast for 2018-2020, answering the need for an unaided, limitation-free 3D experience. Light Field (LF) systems represent the next generation in 3D. The HoloVizio system, with a capacity on the order of 100x, offers a natural, restriction-free 3D experience over a full field of view, enabling collaborative use by an unlimited number of viewers, even in a wider, immersive space. 
As a scalable technology, the display range goes from monitor-style units, through automotive 3D HUDs, screen-less solutions, up to cinema systems, and Holografika is working on interactive large-scale immersive systems and glasses-free 3D LED walls.

  8. Depth-tunable three-dimensional display with interactive light field control

    NASA Astrophysics Data System (ADS)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the resulting accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that smooth motion parallax is guaranteed. Experimental results show that the system is convenient and effective for adjusting the 3D scene performance of the 3D display.
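
    The abstract does not detail its least-squares extraction of depth, but for a parallel camera array a scene point moves linearly across views with a slope proportional to its disparity (inverse depth), so the slope can be recovered by a least-squares line fit to the feature track. A sketch under that assumption, with synthetic data:

```python
import numpy as np

def fit_disparity(view_indices, positions):
    """Least-squares fit of the slope (disparity per view) of a feature track
    across a parallel camera array; the slope is inversely related to depth."""
    A = np.vstack([view_indices, np.ones_like(view_indices)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, positions, rcond=None)
    return slope, intercept

# Synthetic track: disparity 2.5 px per view plus small measurement noise
idx = np.arange(8, dtype=float)
pos = 100.0 + 2.5 * idx + np.random.default_rng(0).normal(0.0, 0.1, 8)
slope, _ = fit_disparity(idx, pos)
```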

  9. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  10. Stereo Pair, Salt Lake City, Utah

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The 2002 Winter Olympics are hosted by Salt Lake City at several venues within the city, in nearby cities, and within the adjacent Wasatch Mountains. This image pair provides a stereoscopic map view of north central Utah that includes all of these Olympic sites. In the south, next to Utah Lake, Provo hosts the ice hockey competition. In the north, northeast of the Great Salt Lake, Ogden hosts curling and the nearby Snowbasin ski area hosts the downhill events. In between, southeast of the Great Salt Lake, Salt Lake City hosts the Olympic Village and the various skating events. Further east, across the Wasatch Mountains, the Park City ski resort hosts the bobsled, ski jumping, and snowboarding events. The Winter Olympics are always hosted in mountainous terrain. This view shows the dramatic landscape that makes the Salt Lake City region a world-class center for winter sports.

This stereoscopic image was generated by draping a Landsat satellite image over a Shuttle Radar Topography Mission digital elevation model. Two differing perspectives were then calculated, one for each eye. They can be seen in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the image pair and viewing them with a stereoscope. When stereoscopically merged, the result is a vertically exaggerated view of Earth's surface in its full three dimensions.

    Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS).

Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

Size: 222 x 93.8 kilometers (138 x 58.2 miles) Location: 40.0 to 42.0 deg. North lat., 111.25 to 112.25 deg. West lon. (exactly) Orientation: North at top Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet) Date Acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)

  11. Aberration improvement of the floating 3D display system based on Tessar array and directional diffuser screen

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Zhang, Wanlu; Yan, Binbin; Yu, Chongxiu

    2018-06-01

A floating 3D display system based on a Tessar array and a directional diffuser screen is proposed. The directional diffuser screen smoothes the gaps of the lens array and makes the 3D image's brightness continuous. The optical structure and aberration characteristics of the floating three-dimensional (3D) display system are analyzed. Simulation and experiment show that 3D image quality deteriorates as the image plane moves farther away and as the viewing angle increases. To suppress the aberrations, the Tessar array is proposed according to the aberration characteristics of the floating 3D display system. A 3840 × 2160 liquid crystal display (LCD) panel with a size of 23.6 inches, a directional diffuser screen and a Tessar array are used to display the final 3D images. The aberrations are reduced and the definition is improved compared with a display using a single-lens array. A display depth of more than 20 cm and a viewing angle of more than 45° can be achieved.

  12. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    NASA Astrophysics Data System (ADS)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' different runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  13. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all, 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun by the two STEREO spacecraft, an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  14. America National Parks Viewed in 3D by NASA MISR Anaglyph 4

    NASA Image and Video Library

    2016-08-25

Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite is releasing four new anaglyphs that showcase 33 of our nation's national parks, monuments, historical sites and recreation areas in glorious 3D. Shown in the annotated image are Sequoia National Park, Kings Canyon National Park, Manzanar National Historic Site, Devils Postpile National Monument, Yosemite National Park, and parts of Death Valley National Park. MISR views Earth with nine cameras pointed at different angles, giving it the unique capability to produce anaglyphs, stereoscopic images that allow the viewer to experience the landscape in three dimensions. The anaglyphs were made by combining data from MISR's vertical-viewing and 46-degree forward-pointing cameras. You will need red-blue glasses in order to experience the 3D effect; ensure you place the red lens over your left eye. The images have been rotated so that north is to the left in order to enable 3D viewing because the Terra satellite flies from north to south. All of the images are 235 miles (378 kilometers) from west to east. These data were acquired July 7, 2016, Orbit 88051. http://photojournal.jpl.nasa.gov/catalog/PIA20892
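
    The standard red-cyan composition behind an anaglyph takes the red channel from one view and the green/blue channels from the other, which the colored glasses then separate per eye. A minimal sketch of that composition (not MISR's actual processing pipeline):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left-eye view,
    green and blue channels from the right-eye view."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]       # red  <- left view
    out[..., 1:] = right_rgb[..., 1:]    # green, blue <- right view
    return out

# Tiny synthetic views to demonstrate the channel routing
left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 1] = 150
ana = make_anaglyph(left, right)
```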

  15. An in vitro comparison of subjective image quality of panoramic views acquired via 2D or 3D imaging.

    PubMed

    Pittayapat, P; Galiti, D; Huang, Y; Dreesen, K; Schreurs, M; Souza, P Couto; Rubira-Bullen, I R F; Westphalen, F H; Pauwels, R; Kalema, G; Willems, G; Jacobs, R

    2013-01-01

    The objective of this study is to compare subjective image quality and diagnostic validity of cone-beam CT (CBCT) panoramic reformatting with digital panoramic radiographs. Four dry human skulls and two formalin-fixed human heads were scanned using nine different CBCTs, one multi-slice CT (MSCT) and one standard digital panoramic device. Panoramic views were generated from CBCTs in four slice thicknesses. Seven observers scored image quality and visibility of 14 anatomical structures. Four observers repeated the observation after 4 weeks. Digital panoramic radiographs showed significantly better visualization of anatomical structures except for the condyle. Statistical analysis of image quality showed that the 3D imaging modalities (CBCTs and MSCT) were 7.3 times more likely to receive poor scores than the 2D modality. Yet, image quality from NewTom VGi® and 3D Accuitomo 170® was almost equivalent to that of digital panoramic radiographs with respective odds ratio estimates of 1.2 and 1.6 at 95% Wald confidence limits. A substantial overall agreement amongst observers was found. Intra-observer agreement was moderate to substantial. While 2D-panoramic images are significantly better for subjective diagnosis, 2/3 of the 3D-reformatted panoramic images are moderate or good for diagnostic purposes. Panoramic reformattings from particular CBCTs are comparable to digital panoramic images concerning the overall image quality and visualization of anatomical structures. This clinically implies that a 3D-derived panoramic view can be generated for diagnosis with a recommended 20-mm slice thickness, if CBCT data is a priori available for other purposes.

  16. Finding lesion correspondences in different views of automated 3D breast ultrasound

    NASA Astrophysics Data System (ADS)

    Tan, Tao; Platel, Bram; Hicks, Michael; Mann, Ritse M.; Karssemeijer, Nico

    2013-02-01

Screening with automated 3D breast ultrasound (ABUS) is gaining popularity. However, the acquisition of multiple views required to cover an entire breast makes radiologic reading time-consuming. Linking lesions across views can facilitate the reading process. In this paper, we propose a method to automatically predict the position of a lesion in the target ABUS views, given the location of the lesion in a source ABUS view. We combine features describing the lesion location with respect to the nipple, the transducer and the chest wall with features describing lesion properties such as intensity, spiculation, blobness, contrast and lesion likelihood. By using a grid search strategy, the location of the lesion was predicted in the target view. Our method achieved an error of 15.64 ± 16.13 mm. The error is small enough to help locate the lesion with minor additional interaction.
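
    A grid-search strategy of the kind mentioned can be pictured as evaluating a similarity score at every candidate position in the target view and keeping the best one. A minimal sketch with a hypothetical scoring function (the paper's actual features and scoring model are not reproduced here):

```python
import numpy as np

def predict_location(score_fn, x_range, y_range, z_range, step=5.0):
    """Exhaustive grid search: evaluate a score at every candidate position
    (in mm) and return the best-scoring one."""
    best, best_score = None, -np.inf
    for x in np.arange(*x_range, step):
        for y in np.arange(*y_range, step):
            for z in np.arange(*z_range, step):
                s = score_fn((x, y, z))
                if s > best_score:
                    best, best_score = (x, y, z), s
    return best

# Hypothetical score that peaks at (30, 45, 20) mm
target = np.array([30.0, 45.0, 20.0])
score = lambda p: -np.linalg.norm(np.asarray(p) - target)
loc = predict_location(score, (0, 60), (0, 60), (0, 40))
```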

17. A quantum chemical calculation of the potential energy surface in the formation of HOSO2 from OH + SO2

    NASA Astrophysics Data System (ADS)

    Sitha, Sanyasi; Jewell, Linda L.; Piketh, Stuart J.; Fourie, Gerhard

    2011-01-01

The formation of HOSO2 from OH and SO2 has been thoroughly investigated using several different methods (MP2=Full, MP2=FC, B3LYP, HF and composite G* methods) and basis sets (6-31G(d,p), 6-31++G(d,p), 6-31++G(2d,2p), 6-31++G(2df,2p) and aug-cc-pVnZ). We have found two different possible transition state structures, one of which is a true transition state since it has a higher energy than the reactants and products (MP2=Full, MP2=FC and HF), while the other is not a true transition state since it has an energy which lies between that of the reactants and products (B3LYP and B3LYP-based methods). The transition state structure (from MP2) has a twist angle of the OH fragment relative to the SO bond of the SO2 fragment of -50.0°, whereas this angle is 26.7° in the product molecule. Examination of the displacement vectors confirms that this is a true transition state structure. The MP2=Full method with a larger basis set (MP2=Full/6-31++G(2df,2p)) predicts the enthalpy of reaction to be -112.8 kJ mol⁻¹, which is close to the experimental value of -113.3 ± 6 kJ mol⁻¹, and predicts a rather high barrier of 20.0 kJ mol⁻¹. When the TS structure obtained by the MP2 method is used as the input for calculating the energetics using the QCISD/6-31++G(2df,2p) method, a barrier of 4.1 kJ mol⁻¹ is obtained (ZPE corrected). The rate constant calculated from this barrier is 1.3 × 10⁻¹³ cm³ molecule⁻¹ s⁻¹. We conclude that while the MP2 methods correctly predict the TS from a structural point of view, higher-level energy corrections are needed for estimation of the exact barrier height.
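
    To see how sensitively a computed rate depends on the barrier height, the Boltzmann factors of the two barriers can be compared. This is a back-of-the-envelope sketch only; the paper's rate constant comes from a proper kinetics treatment (partition functions, tunneling, etc.), not from this factor alone:

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
T = 298.15     # temperature, K

def boltzmann_factor(barrier_kj_per_mol):
    """exp(-Ea/RT): the exponential penalty a barrier imposes on a rate."""
    return math.exp(-barrier_kj_per_mol * 1000.0 / (R * T))

f_low = boltzmann_factor(4.1)     # QCISD-corrected barrier
f_high = boltzmann_factor(20.0)   # MP2 barrier
ratio = f_low / f_high            # relative rate enhancement of the lower barrier
```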

  18. 3. SOUTH FLAME DEFLECTOR FROM THE REINFORCED CONCRETE ROOF, VIEW ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. SOUTH FLAME DEFLECTOR FROM THE REINFORCED CONCRETE ROOF, VIEW TOWARDS EAST. - Glenn L. Martin Company, Titan Missile Test Facilities, Captive Test Stand D-2, Waterton Canyon Road & Colorado Highway 121, Lakewood, Jefferson County, CO

  19. 3. BUILDING 741/742. VIEW TO WEST. Rocky Mountain Arsenal, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. BUILDING 741/742. VIEW TO WEST. - Rocky Mountain Arsenal, Refrigeration Napalm & Incendiary Bomb Warehouse-Bomb Filling, 825 feet South of December Seventh Avenue; 2425 feet East of D Street, Commerce City, Adams County, CO

  20. The impact of stereo 3D sports TV broadcasts on user's depth perception and spatial presence experience

    NASA Astrophysics Data System (ADS)

    Weigelt, K.; Wiemeyer, J.

    2014-03-01

    This work examines the impact of content and presentation parameters in 2D versus 3D on depth perception and spatial presence, and provides guidelines for stereoscopic content development for 3D sports TV broadcasts and cognate subjects. Under consideration of depth perception and spatial presence experience, a preliminary study with 8 participants (sports: soccer and boxing) and a main study with 31 participants (sports: soccer and BMX-Miniramp) were performed. The dimension (2D vs. 3D) and camera position (near vs. far) were manipulated for soccer and boxing. In addition for soccer, the field of view (small vs. large) was examined. Moreover, the direction of motion (horizontal vs. depth) was considered for BMX-Miniramp. Subjective assessments, behavioural tests and qualitative interviews were implemented. The results confirm a strong effect of 3D on both depth perception and spatial presence experience as well as selective influences of camera distance and field of view. The results can improve understanding of the perception and experience of 3D TV as a medium. Finally, recommendations are derived on how to use various 3D sports ideally as content for TV broadcasts.

  1. A generalized measurement equation and van Cittert-Zernike theorem for wide-field radio astronomical interferometry

    NASA Astrophysics Data System (ADS)

    Carozzi, T. D.; Woan, G.

    2009-05-01

We derive a generalized van Cittert-Zernike (vC-Z) theorem for radio astronomy that is valid for partially polarized sources over an arbitrarily wide field of view (FoV). The classical vC-Z theorem is the theoretical foundation of radio astronomical interferometry, and its application is the basis of interferometric imaging. Existing generalized vC-Z theorems in radio astronomy assume, however, either paraxiality (narrow FoV) or scalar (unpolarized) sources. Our theorem uses neither of these assumptions, which are seldom fulfilled in practice in radio astronomy, and treats the full electromagnetic field. To handle wide, partially polarized fields, we extend the two-dimensional (2D) electric field (Jones vector) formalism of the standard `Measurement Equation' (ME) of radio astronomical interferometry to the full three-dimensional (3D) formalism developed in optical coherence theory. The resulting vC-Z theorem enables full-sky imaging in a single telescope pointing, and imaging based not only on standard dual-polarized interferometers (that measure 2D electric fields) but also electric tripoles and electromagnetic vector-sensor interferometers. We show that the standard 2D ME is easily obtained from our formalism in the case of dual-polarized antenna element interferometers. We also exploit an extended 2D ME to determine that dual-polarized interferometers can have polarimetric aberrations at the edges of a wide FoV. Our vC-Z theorem is particularly relevant to proposed, and recently developed, wide FoV interferometers such as Low Frequency Array (LOFAR) and Square Kilometer Array (SKA), for which direction-dependent effects will be important.
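
    For reference, the classical (paraxial, scalar) van Cittert-Zernike theorem that the paper generalizes states that the measured visibility is the Fourier transform of the sky brightness distribution, with $V$ the visibility on a baseline $(u,v)$ measured in wavelengths and $I(l,m)$ the brightness in direction cosines $(l,m)$:

```latex
V(u,v) \;=\; \iint I(l,m)\, e^{-2\pi i\,(ul + vm)}\, \mathrm{d}l\,\mathrm{d}m
```

    The generalized theorem of the paper replaces this scalar relation with a full 3D electromagnetic (coherence-matrix) formulation valid over a wide field of view.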

  2. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas and so on. However, most technologies provide the 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset to the center of the common focus plane in both the vertical and the horizontal direction. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. We can use virtual cameras to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane parameter setting is the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
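
    An offset (off-axis) perspective projection of the kind these virtual cameras need corresponds to an asymmetric viewing frustum. A minimal sketch of the standard OpenGL-style off-axis projection matrix (the glFrustum convention; this is an illustration, not the authors' actual parameterization):

```python
import numpy as np

def offset_perspective(l, r, b, t, n, f):
    """OpenGL-style off-axis perspective projection (glFrustum convention).
    Unequal |l| and |r| (or |b| and |t|) shift the projection center,
    which is exactly what an offset virtual camera requires."""
    return np.array([
        [2*n/(r-l), 0,          (r+l)/(r-l),  0],
        [0,         2*n/(t-b),  (t+b)/(t-b),  0],
        [0,         0,         -(f+n)/(f-n), -2*f*n/(f-n)],
        [0,         0,         -1,            0],
    ])

# A camera offset to the right: the frustum extends further on the left
M = offset_perspective(l=-1.5, r=0.5, b=-1.0, t=1.0, n=1.0, f=100.0)
```

A symmetric frustum (l = -r, b = -t) makes the third-column skew terms vanish, recovering the usual centered perspective projection.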

  3. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed commercial software for the intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets move along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method, pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool. 
Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts that also enables direct exploration and analysis of large 3D flow fields, the software needs to be extended to intuitive as well as interactive visualization of entire 3D flow domains. The current research on this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
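
    Pathlets advected with the flow, as described above, are obtained by integrating the velocity field along pathlines. A minimal sketch using fourth-order Runge-Kutta for a single pathlet in a steady 2D rotational field (the intelligent seeding and removal logic of the actual software is omitted):

```python
import numpy as np

def rk4_step(velocity, p, t, dt):
    """One classical RK4 step of dp/dt = velocity(p, t) for a pathlet position p."""
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5*dt*k1, t + 0.5*dt)
    k3 = velocity(p + 0.5*dt*k2, t + 0.5*dt)
    k4 = velocity(p + dt*k3, t + dt)
    return p + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

# Rigid rotation about the origin: pathlines are circles of constant radius
vel = lambda p, t: np.array([-p[1], p[0]])
p = np.array([1.0, 0.0])
for i in range(1000):                  # integrate to t = 10
    p = rk4_step(vel, p, i*0.01, 0.01)
```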

  4. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    Vehicle-mounted panoramic system is important safety assistant equipment for driving. However, traditional systems only render fixed top-down perspective view of limited view field, which may have potential safety hazard. In this paper, a texture mapping algorithm for 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing OpenGL ES library based on Android smart platform is presented. Initial experiment results show that the proposed algorithm can render a good 3D panorama, and has the ability to change view point freely.

  5. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.
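
    Disparity estimation of the kind such codecs rely on is commonly done by block matching. A minimal sum-of-absolute-differences (SAD) sketch on a synthetic stereo pair (an illustration of block matching in general, not the VDME algorithm itself):

```python
import numpy as np

def sad_disparity(left, right, row, col, block=3, max_disp=8):
    """Find the horizontal shift minimizing SAD between a block in the left
    image and candidate blocks in the right image."""
    h = block // 2
    ref = left[row-h:row+h+1, col-h:col+h+1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - h < 0:                  # candidate block out of bounds
            break
        cand = right[row-h:row+h+1, c-h:c+h+1].astype(np.int32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic pair: the right image is the left shifted 4 px to the left
rng = np.random.default_rng(1)
left = rng.integers(0, 255, (20, 40), dtype=np.uint8)
right = np.roll(left, -4, axis=1)
d = sad_disparity(left, right, row=10, col=20)
```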

  6. Simple measurement of lenticular lens quality for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Gray, Stuart; Boudreau, Robert A.

    2013-03-01

    Lenticular lens based autostereoscopic 3D displays are finding many applications in digital signage and consumer electronics devices. A high quality 3D viewing experience requires the lenticular lens be properly aligned with the pixels on the display device so that each eye views the correct image. This work presents a simple and novel method for rapidly assessing the quality of a lenticular lens to be used in autostereoscopic displays. Errors in lenticular alignment across the entire display are easily observed with a simple test pattern where adjacent views are programmed to display different colors.
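
    The color test pattern described can be generated by assigning each view index a distinct color and interleaving the view columns beneath each lenticule; any lens misalignment then shows up as color banding. A minimal sketch with an idealized pixel-to-view mapping (lens slant and pitch error are ignored):

```python
import numpy as np

def view_test_pattern(width, height, n_views, colors):
    """Each pixel column belongs to view (column mod n_views) and is painted
    with that view's color, so each eye should see a single uniform color."""
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for col in range(width):
        img[:, col] = colors[col % n_views]
    return img

colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
pattern = view_test_pattern(width=16, height=4, n_views=4, colors=colors)
```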

  7. Wide-angle vision for road views

    NASA Astrophysics Data System (ADS)

    Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.

    2013-03-01

    The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.

  8. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
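
    The discriminability index d' used as the dependent variable is the difference of the z-transformed hit and false-alarm rates. A minimal sketch with the Python standard library (the rates here are illustrative, not the experiment's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: d' = z(hit) - z(false alarm)."""
    z = NormalDist().inv_cdf    # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Symmetric illustrative case: 84% hits, 16% false alarms -> d' near 2
d = d_prime(hit_rate=0.84, fa_rate=0.16)
```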

  9. All-optical endoscopic probe for high resolution 3D photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Zhang, E.; Desjardins, A. E.; Beard, P. C.

    2017-03-01

    A novel all-optical forward-viewing photoacoustic probe using a flexible coherent fibre-optic bundle and a Fabry-Perot (FP) ultrasound sensor has been developed. The fibre bundle, along with the FP sensor at its distal end, synthesizes a high-density 2D array of wideband ultrasound detectors. Photoacoustic waves arriving at the sensor are spatially mapped by optically scanning the proximal end face of the bundle in 2D with a CW wavelength-tunable interrogation laser. 3D images are formed from the detected signals using a time-reversal image reconstruction algorithm. The system has been characterized in terms of its PSF, noise-equivalent pressure, and field of view. Finally, the high-resolution 3D imaging capability has been demonstrated using arbitrarily shaped phantoms and a duck embryo.

  10. Perceived Advantages of 3D Lessons in Constructive Learning for South African Student Teachers Encountering Learning Barriers

    ERIC Educational Resources Information Center

    de Jager, Thelma

    2017-01-01

    Research shows that three-dimensional (3D)-animated lessons can contribute to student teachers' effective learning and comprehension, regardless of the learning barriers they experience. Student teachers majoring in the subject Life Sciences in General Subject Didactics viewed 3D images of the heart during lectures. The 3D images employed in the…

  11. A high resolution and high speed 3D imaging system and its application on ATR

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Chao, Tien-Hsin

    2006-04-01

    The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. The stereo vision is achieved by employing a prism-and-mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed, and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides additional features such as the surface profile and range information of the target. It is capable of removing false shadows cast by camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to capture large objects and to perform area 3D modeling onboard a UAV.

  12. Viewing CAD Drawings on the Internet

    ERIC Educational Resources Information Center

    Schwendau, Mark

    2004-01-01

    Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…

  13. Noninvasive computerized scanning method for the correlation between the facial soft and hard tissues for an integrated three-dimensional anthropometry and cephalometry.

    PubMed

    Galantucci, Luigi Maria; Percoco, Gianluca; Lavecchia, Fulvio; Di Gioia, Eliana

    2013-05-01

    The article describes a new methodology to scan facial soft tissue surfaces and integrate them with dental hard tissue models in a three-dimensional (3D) virtual environment, for a novel diagnostic approach. The facial and dental scans can be acquired using any optical scanning system: the models are then aligned and integrated to obtain a fully navigable virtual representation of the patient's head. In this article, we describe in detail and further implement a method for integrating 3D digital cast models into a 3D facial image, to visualize the anatomic position of the dentition. This system uses several 3D technologies to scan and digitize, integrating them with traditional dentistry records. The acquisitions were mainly performed using photogrammetric scanners, suitable for clinics or hospitals, able to obtain high mesh resolution and optimal surface texture for photorealistic rendering of the face. To increase the quality and resolution of the photogrammetric scanning of the dental elements, the authors propose a new technique to enhance the texture of the dental surface. Three examples of the application of the proposed procedure are reported in this article, using first laser scanning and photogrammetry and then photogrammetry alone. Using cheek retractors, it is possible to directly scan a large number of dental elements. The final results are good navigable 3D models that integrate facial soft tissue and dental hard tissues. The method is characterized by the complete absence of ionizing radiation, portability and simplicity, fast acquisition, easy alignment of the 3D models, and a wide angle of view of the scanner. This method is completely noninvasive and can be repeated any time the physician needs new clinical records.
The 3D virtual model is a precise representation both of the soft and the hard tissue scanned, and it is possible to make any dimensional measure directly in the virtual space, for a full integrated 3D anthropometry and cephalometry. Moreover, the authors propose a method completely based on close-range photogrammetric scanning, able to detect facial and dental surfaces, and reducing the time, the complexity, and the cost of the scanning operations and the numerical elaboration.

  14. Digital hologram transformations for RGB color holographic display with independent image magnification and translation in 3D.

    PubMed

    Makowski, Piotr L; Zaperty, Weronika; Kozacki, Tomasz

    2018-01-01

    A new framework for in-plane transformations of digital holograms (DHs) is proposed, which provides improved control over basic geometrical features of holographic images reconstructed optically in full color. The method is based on a Fourier hologram equivalent of the adaptive affine transformation technique [Opt. Express 18, 8806 (2010), doi:10.1364/OE.18.008806]. The solution includes four elementary geometrical transformations that can be performed independently on a full-color 3D image reconstructed from an RGB hologram: (i) transverse magnification; (ii) axial translation with minimized distortion; (iii) transverse translation; and (iv) viewing angle rotation. The independent character of transformations (i) and (ii) constitutes the main result of the work and plays a double role: (1) it simplifies synchronization of the color components of the RGB image in the presence of mismatch between capture and display parameters; (2) it provides improved control over the position and size of the projected image, particularly the axial position, which opens new possibilities for efficient animation of holographic content. The approximate character of operations (i) and (ii) is examined both analytically and experimentally using an RGB circular holographic display system. Additionally, a complex animation built from a single wide-aperture RGB Fourier hologram is presented to demonstrate the full capabilities of the developed toolset.

  15. High resolution, wide field of view, real time 340GHz 3D imaging radar for security screening

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Hunter, Robert I.; Cassidy, Scott L.; Llombart, Nuria; Gandini, Erio; Bryllert, Tomas; Ferndahl, Mattias; Lindström, Hannu; Tenhunen, Jussi; Vasama, Hannu; Huopana, Jouni; Selkälä, Timo; Vuotikka, Antti-Jussi

    2017-05-01

    The EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) is developing a demonstrator system for next-generation airport security screening which will combine passive and active submillimeter-wave imaging sensors. We report on the development of the 340 GHz 3D imaging radar, which achieves high volumetric resolution over a wide field of view with high dynamic range and a high frame rate. A sparse array of 16 radar transceivers is coupled with high-speed mechanical beam scanning to achieve a field of view of 1 × 1 × 1 m³ and a 10 Hz frame rate.

  16. The impact of social media promotion with infographics and podcasts on research dissemination and readership.

    PubMed

    Thoma, Brent; Murray, Heather; Huang, Simon York Ming; Milne, William Ken; Martin, Lynsey J; Bond, Christopher M; Mohindra, Rohit; Chin, Alvin; Yeh, Calvin H; Sanderson, William B; Chan, Teresa M

    2018-03-01

    In 2015 and 2016, the Canadian Journal of Emergency Medicine (CJEM) Social Media (SoMe) Team collaborated with established medical websites to promote CJEM articles using podcasts and infographics while tracking dissemination and readership. CJEM publications in the "Original Research" and "State of the Art" sections were selected by the SoMe Team for podcast and infographic promotion based on their perceived interest to emergency physicians. A control group was composed retrospectively of articles from the 2015 and 2016 issues with the highest Altmetric scores that received standard Facebook and Twitter promotion. Studies on SoMe topics were excluded. Dissemination was quantified by January 1, 2017 Altmetric scores. Readership was measured by abstract and full-text views over a 3-month period. The number needed to view (NNV) was calculated by dividing abstract views by full-text views. Twenty-nine of 88 articles that met the inclusion criteria were assigned to the podcast (6), infographic (11), and control (12) groups. Descriptive statistics (mean, 95% confidence interval) were calculated for podcast (Altmetric: 61, 42-80; Abstract: 1795, 1135-2455; Full-text: 431, 0-1031), infographic (Altmetric: 31.5, 19-43; Abstract: 590, 361-819; Full-text: 65, 33-98), and control (Altmetric: 12, 8-15; Abstract: 257, 159-354; Full-text: 73, 38-109) articles. The NNV was 4.2 for podcast, 9.0 for infographic, and 3.5 for control articles. Limitations included selection bias, the influence of SoMe promotion on the Altmetric scores, and a lack of generalizability to other journals. Collaboration with established SoMe websites using podcasts and infographics was associated with increased Altmetric scores and abstract views but not full-text article views.
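    The NNV arithmetic above can be reproduced directly from the reported mean view counts; note the infographic group works out to 9.1 rather than the published 9.0, presumably because the published figure was computed from unrounded means. A quick sketch:

```python
# Mean abstract and full-text views reported for each promotion group.
views = {
    "podcast":     {"abstract": 1795, "full_text": 431},
    "infographic": {"abstract": 590,  "full_text": 65},
    "control":     {"abstract": 257,  "full_text": 73},
}

for group, v in views.items():
    nnv = v["abstract"] / v["full_text"]  # abstracts viewed per full-text view
    print(f"{group}: NNV = {nnv:.1f}")
```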

  17. A dual-view digital tomosynthesis imaging technique for improved chest imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng

    Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlapping of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization (MLEM) iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study.
The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy and quantitatively compared with root-mean-square-deviation (RMSD) values computed using the digital chest phantom or the CBCT images as the reference in the simulation and experimental study, respectively. High-contrast wires with vertical, oblique, and horizontal orientations in a PA view plane were also imaged to investigate the spatial resolutions and how the wire signals spread in the PA view and lateral view slice images. Results: Both the digital phantom images (simulated) and the anthropomorphic phantom images (experimentally generated) demonstrated that the dual-view DTS technique resulted in improved spatial resolution in the depth (PA) direction, more accurate representation of the anatomy, and significantly reduced artifacts. The RMSD values corroborate well with visual observations with substantially lower RMSD values measured for the dual-view DTS images as compared to those measured for the single-view DTS images. The imaging experiment with the high-contrast wires shows that while the vertical and oblique wires could be resolved in the lateral view in both single- and dual-view DTS images, the horizontal wire could only be resolved in the dual-view DTS images. This indicates that with single-view DTS, the wire signals spread liberally to off-fulcrum planes and generated wire shadow there. Conclusions: The authors have demonstrated both visually and quantitatively that the dual-view DTS technique can be used to achieve more accurate rendition of the anatomy and to obtain slice images with improved resolution and reduced artifacts as compared to the single-view DTS technique, thus allowing the 3D image data to be viewed in views other than the PA view. 
These advantages could make the dual-view DTS technique useful in situations where better separation of the objects of interest from off-fulcrum structures or a more accurate 3D rendition of the anatomy is required while a regular CT examination is undesirable due to radiation dose considerations.
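    The maximum likelihood expectation maximization algorithm used above for reconstruction has a compact multiplicative update. A toy pure-Python sketch on a hypothetical two-ray, two-cell system (an illustration of the general method, not the authors' implementation):

```python
def mlem(A, y, n_iter=500):
    """Maximum-likelihood expectation-maximization for y ≈ A x with x >= 0.

    A is a list of rows (projection matrix), y the measured projections.
    Each update: x_j <- x_j / s_j * sum_i A[i][j] * y[i] / (A x)_i,
    where s_j = sum_i A[i][j] is the sensitivity of cell j.
    """
    m, n = len(A), len(A[0])
    s = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [1.0] * n  # strictly positive starting image
    for _ in range(n_iter):
        fp = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # forward project
        r = [y[i] / fp[i] for i in range(m)]                            # measured / estimated
        x = [x[j] * sum(A[i][j] * r[i] for i in range(m)) / s[j] for j in range(n)]
    return x

# Hypothetical two-ray, two-cell system; the true image is [1.0, 2.0].
A = [[1.0, 0.0], [1.0, 1.0]]
y = [1.0, 3.0]      # projections of the true image
print(mlem(A, y))   # converges toward [1.0, 2.0]
```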

  18. GENERAL VIEW OF VEHICLE ACCESS PLATFORM D-NORTH, HB-3, FACING NORTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    GENERAL VIEW OF VEHICLE ACCESS PLATFORM D-NORTH, HB-3, FACING NORTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  19. GENERAL VIEW OF VEHICLE ACCESS PLATFORM D-NORTH, HB-3, FACING NORTH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    GENERAL VIEW OF VEHICLE ACCESS PLATFORM D-NORTH, HB-3, FACING NORTH - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  20. 3. SOUTH AND EAST SIDES OF BUILDING 328. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. SOUTH AND EAST SIDES OF BUILDING 328. VIEW TO NORTHWEST. - Rocky Mountain Arsenal, Goop Mixing & Filling Building, 1480 feet South of December Seventh Avenue; 900 feet West of D Street, Commerce City, Adams County, CO

  1. 3. WEST AND NORTH SIDES OF BUILDING 731/732. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. WEST AND NORTH SIDES OF BUILDING 731/732. VIEW TO SOUTHEAST. - Rocky Mountain Arsenal, Army Reserve Center, 510 feet South of December Seventh Avenue; 2400 feet East of D Street, Commerce City, Adams County, CO

  2. 3. NORTH AND EAST SIDES OF BUILDING 515. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. NORTH AND EAST SIDES OF BUILDING 515. VIEW TO SOUTHWEST. - Rocky Mountain Arsenal, Crude Mustard Distillation Building, 550 feet South of December Seventh Avenue; 400 feet East of D Street, Commerce City, Adams County, CO

  3. 3. WEST AND SOUTH SIDES OF BUILDING 251. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. WEST AND SOUTH SIDES OF BUILDING 251. VIEW TO NORTHEAST. - Rocky Mountain Arsenal, Chlorine Evaporator & Storage Building, 800 feet South of December Seventh Avenue; 600 feet West of D Street, Commerce City, Adams County, CO

  4. Table screen 360-degree holographic display using circular viewing-zone scanning.

    PubMed

    Inoue, Tatsuaki; Takaki, Yasuhiro

    2015-03-09

    A table screen 360-degree holographic display is proposed, with an enlarged screen and a viewing zone extended over all horizontal directions around the table. It consists of a microelectromechanical systems spatial light modulator (MEMS SLM), a magnifying imaging system, and a rotating screen. The MEMS SLM generates hologram patterns at a high frame rate, the magnifying imaging system enlarges the image of the MEMS SLM screen, and the reduced viewing zones are scanned circularly by the rotating screen. The viewing zones are localized to make wavefront reconstruction practical. An experimental system has been constructed. The generation of 360-degree three-dimensional (3D) images was achieved by circularly scanning 800 reduced and localized viewing zones. The table screen had a diameter of 100 mm, and the frame rate of 3D image generation was 28.4 Hz.

  5. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

    Depth-based 3D formats are currently being developed as extensions to both the AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction (VSP) techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is concluded that block-based VSP for multiview video signals provides attractive coding gains with complexity comparable to that of traditional motion/disparity compensation.

  6. Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.

    PubMed

    Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C

    2004-11-01

    Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationships among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composite images are used in the object-rotation movie. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6 and is obtainable from our laboratory on request.

  7. Challenges of Replacing NAD 83, NAVD 88, and IGLD 85: Exploiting the Characteristics of 3-D Digital Spatial Data

    NASA Astrophysics Data System (ADS)

    Burkholder, E. F.

    2016-12-01

    One way to address the challenges of replacing NAD 83, NAVD 88, and IGLD 85 is to exploit the characteristics of 3-D digital spatial data. This presentation describes the 3-D global spatial data model (GSDM), which accommodates rigorous scientific endeavors while simultaneously supporting a local flat-earth view of the world. The GSDM is based upon the assumption of a single origin for 3-D spatial data and uses the rules of solid geometry for manipulating spatial data components. This approach exploits the characteristics of 3-D digital spatial data and preserves the quality of geodetic measurements while giving spatial data users the option of working with rectangular flat-earth components and computational procedures for local applications. This flexibility is provided by a bidirectional rotation matrix that allows any 3-D vector to be used in a geodetic reference frame for high-end applications and/or in the local frame for flat-earth users. The GSDM is viewed as compatible with the datum products being developed by NGS and provides for unambiguous exchange of 3-D spatial data between disciplines and users worldwide. Three geometrical models are summarized: geodetic, map projection, and 3-D. Geodetic computations are performed on an ellipsoid and are without equal in providing rigorous coordinate values for latitude, longitude, and ellipsoid height. Members of the user community have, for generations, sought ways to "flatten the world" to accommodate a flat-earth view and to avoid the complexity of working on an ellipsoid. Map projections have been defined for a wide variety of applications and remain very useful for visualizing spatial data. But the GSDM supports computations based on 3-D components that have not been distorted in a 2-D map projection. The GSDM does not invalidate either geodetic or cartographic computational processes but provides a geometrically correct view of any point cloud from any point selected by the user.
As a bonus, the GSDM also defines spatial data accuracy and includes procedures for establishing, tracking, and using spatial data accuracy - increasingly important in many applications, and especially relevant given the development of procedures for tracking drones (primarily absolute accuracy) and intelligent vehicles (primarily relative accuracy).
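    The bidirectional rotation matrix described above is, in standard geodetic practice, the rotation taking an Earth-centered-Earth-fixed (ECEF) delta vector into the local east-north-up (ENU) frame; its transpose performs the reverse transformation. A minimal sketch assuming that standard convention (not code from the presentation):

```python
import math

def ecef_to_enu_matrix(lat_deg, lon_deg):
    """Rotation matrix taking an ECEF delta vector to local east-north-up (ENU)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    return [
        [-so,      co,      0.0],  # east
        [-sl * co, -sl * so, cl],  # north
        [ cl * co,  cl * so, sl],  # up
    ]

def apply(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

# At lat = 0, lon = 0 the ECEF x-axis points straight up in the local frame.
R = ecef_to_enu_matrix(0.0, 0.0)
print(apply(R, [1.0, 0.0, 0.0]))             # -> [0, 0, 1] (up)
# The transpose rotates the local vector back to ECEF ("bidirectional").
print(apply(transpose(R), [0.0, 0.0, 1.0]))  # -> [1, 0, 0]
```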

  8. The PRo3D View Planner - interactive simulation of Mars rover camera views to optimise capturing parameters

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Ortner, Thomas; Hesina, Gerd; Barnes, Robert; Gupta, Sanjeev; Paar, Gerhard

    2017-04-01

    High resolution Digital Terrain Models (DTMs) and Digital Outcrop Models (DOMs) are highly useful for geological analysis and mission planning in planetary rover missions. PRo3D, developed as part of the EU-FP7 PRoViDE project, is a 3D viewer in which orbital DTMs and DOMs derived from rover stereo imagery can be rendered in a virtual environment for exploration and analysis. It allows fluent navigation over planetary surface models and provides a variety of measurement and annotation tools to support an extensive geological interpretation. A key aspect of image collection during planetary rover missions is determining optimal viewing positions for rover instruments ('wide baseline stereo'). For the collection of high-quality panoramas and stereo imagery, the visibility of regions of interest from those positions, and the amount of common features shared by each stereo pair or image bundle, is crucial. The creation of a highly accurate and reliable 3D surface of the planetary terrain, in the form of an Ordered Point Cloud (OPC), with a low rate of error and a minimum of artefacts, is greatly enhanced by using images that share many features and overlap sufficiently for wide baseline stereo or target selection. To support users in the selection of adequate viewpoints, an interactive View Planner was integrated into PRo3D. Users choose from a set of different rovers and their respective instruments; PRo3D supports, for instance, the PanCam instrument of ESA's ExoMars 2020 rover mission and the Mastcam-Z camera of NASA's Mars 2020 mission. The View Planner uses a DTM obtained from orbiter imagery, which can be complemented with rover-derived DOMs as the mission progresses. The selected rover is placed at a position on the terrain, either interactively or using the current rover pose known from the mission.
The rover's base polygon and its local coordinate axes, and the chosen instrument's up and forward vectors, are visualised. The parameters of the instrument's pan-tilt unit (PTU) can be altered via the user interface, or alternatively calculated by selecting a target point on the visualised DTM. In the 3D view, the visible region of the planetary surface resulting from these settings and the camera field-of-view is shown as a highlighted region with a red border, representing the instrument's footprint. The camera view is simulated and rendered in a separate window, and PTU parameters can be interactively adjusted, allowing viewpoints, directions, and the expected image to be visualised in real time so that users can fine-tune these settings. In this way, ideal viewpoints and PTU settings for various rover models and instruments can be defined efficiently, resulting in optimal imagery of the regions of interest.
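    Calculating PTU parameters from a selected target point reduces to simple geometry. A hypothetical sketch (the function name and axis convention are assumptions for illustration, not PRo3D's actual interface):

```python
import math

def ptu_angles(camera, target):
    """Pan/tilt (degrees) pointing a camera at a target, in a local frame with
    x forward, y left, z up (a hypothetical convention, not PRo3D's)."""
    dx, dy, dz = (t - c for t, c in zip(target, camera))
    pan = math.degrees(math.atan2(dy, dx))                    # rotation about the up axis
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))   # elevation above horizontal
    return pan, tilt

# A target one unit forward, one unit left, and sqrt(2) above the camera
# sits at 45 degrees pan and 45 degrees tilt.
print(ptu_angles((0, 0, 0), (1, 1, math.sqrt(2))))
```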

  9. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or weakly textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via the fusion strategy. We evaluate effectiveness and robustness on face images captured by a light field camera at different poses.

  10. Documentation for the “XT3D” option in the Node Property Flow (NPF) Package of MODFLOW 6

    USGS Publications Warehouse

    Provost, Alden M.; Langevin, Christian D.; Hughes, Joseph D.

    2017-08-10

    This report describes the “XT3D” option in the Node Property Flow (NPF) Package of MODFLOW 6. The XT3D option extends the capabilities of MODFLOW by enabling simulation of fully three-dimensional anisotropy on regular or irregular grids in a way that properly takes into account the full, three-dimensional conductivity tensor. It can also improve the accuracy of groundwater-flow simulations in cases in which the model grid violates certain geometric requirements. Three example problems demonstrate the use of the XT3D option to simulate groundwater flow on irregular grids and through three-dimensional porous media with anisotropic hydraulic conductivity. Conceptually, the XT3D method of estimating flow between two MODFLOW 6 model cells can be viewed in terms of three main mathematical steps: construction of head-gradient estimates by interpolation; construction of fluid-flux estimates by application of the full, three-dimensional form of Darcy’s Law, in which the conductivity tensor can be heterogeneous and anisotropic; and construction of the flow expression by enforcement of continuity of flow across the cell interface. The resulting XT3D flow expression, which relates the flow across the cell interface to the values of heads computed at neighboring nodes, is the sum of terms in which conductance-like coefficients multiply head differences, as in the conductance-based flow expression the NPF Package uses by default. However, the XT3D flow expression contains terms that involve “neighbors of neighbors” of the two cells for which the flow is being calculated. These additional terms have no analog in the conductance-based formulation. When assembled into matrix form, the XT3D formulation results in a larger stencil than the conductance-based formulation; that is, each row of the coefficient matrix generally contains more nonzero elements.
The “RHS” suboption can be used to avoid expanding the stencil by placing the additional terms on the right-hand side of the matrix equation and evaluating them at the previous iteration or time step. The XT3D option can be an alternative to the Ghost-Node Correction (GNC) Package. However, the XT3D formulation is typically more computationally intensive than the conductance-based formulation the NPF Package uses by default, either with or without ghost nodes. Before deciding whether to use the GNC Package or XT3D option for production runs, the user should consider whether the conductance-based formulation alone can provide acceptable accuracy for the particular problem being solved.
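    The default conductance-based flow expression described above sums terms of the form C(h_m - h_n) over a cell's connected neighbors. A minimal sketch of that default budget (an illustration of the formulation, not MODFLOW 6 code; XT3D's neighbor-of-neighbor terms are omitted):

```python
def cell_budget(h_n, neighbors):
    """Net inflow to cell n under the default conductance-based formulation:
    Q_n = sum over connected cells m of C_nm * (h_m - h_n).
    (XT3D adds further terms involving neighbors of neighbors, omitted here.)"""
    return sum(c * (h_m - h_n) for c, h_m in neighbors)

# Hypothetical cell with head 10.0 and two neighbors given as (conductance, head):
# inflow 2*(11-10) from the first neighbor balances outflow 1*(8-10) to the second.
q = cell_budget(10.0, [(2.0, 11.0), (1.0, 8.0)])
print(q)  # 0.0, a balanced (steady-state) cell
```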

  11. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system, which provides information-rich, very high-resolution, color brightfield and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse in which enhanced green fluorescent protein was expressed under the gamma-actin promoter in smooth muscle cells gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm over very large regions of mouse brain. The software is fully automated, with programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole-animal in vivo imaging and histology. PMID:19248166

  12. 3. General view of Fort Hill Farm, view looking west ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view of Fort Hill Farm, view looking west from (B) two-story hall-and-parlor house. Buildings visible, from left to right, are (B) parlor house porch; (E) one-room cabin; (D) center chimney four-room cabin; (J) hay barn; (I) log tobacco barn; (A) mansion, obscured by trees; (M) stable; (K) small barn. - Fort Hill Farm, West of Staunton (Roanoke) River between Turkey & Caesar's Runs, Clover, Halifax County, VA

  13. Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application

    NASA Astrophysics Data System (ADS)

    Pala, S.; Stevens, R.; Surman, P.

    2007-02-01

    Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed whilst viewing the 3D image and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload as well as traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and increased workload as crosstalk was increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
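    The crosstalk manipulation described above, electronically mixing the two video signals, is commonly modelled as a linear leak between the stereo channels. A minimal sketch (the function name and sample values are ours, not from the paper):

    ```python
    def mix_crosstalk(left, right, c):
        """Leak a fraction c of each eye's signal into the other channel."""
        mixed_left = [(1 - c) * l + c * r for l, r in zip(left, right)]
        mixed_right = [(1 - c) * r + c * l for l, r in zip(left, right)]
        return mixed_left, mixed_right

    # c = 0 reproduces the crosstalk-free stereo pair
    l, r = mix_crosstalk([1.0, 0.0], [0.0, 1.0], 0.0)
    print(l, r)  # [1.0, 0.0] [0.0, 1.0]
    ```

    Setting c = 1 swaps the channels entirely; intermediate values give the controlled crosstalk levels used in the trials.
    
    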

  14. Holographic movies

    NASA Astrophysics Data System (ADS)

    Palais, Joseph C.; Miller, Mark E.

    1996-09-01

    A unique method for the construction and display of a 3D holographic movie is developed. An animated film is produced by rotating a 3D object in steps between successive holographic exposures. Strip holograms were made on 70-mm AGFA 8E75 Holotest roll film. Each hologram was about 11-mm high and 55-mm wide. The object was rotated 2 deg between successive exposures. A complete cycle of the object motion was recorded on 180 holograms using the lensless Fourier transform construction. The ends of the developed film were spliced together to produce a continuous loop. Although the film moves continuously on playback and there is no shutter, there is no flicker or image displacement because of the Fourier transform hologram construction, as predicted by the theoretical analysis. The movie can be viewed for an unlimited time because the object motion is cyclical and the film is continuous. The film is wide enough that comfortable viewing with both eyes is possible, enhancing the 3D effect. Viewers can stand comfortably away from the film since no viewing slit or aperture is necessary. Several people can view the movie simultaneously.

  15. Stereoscopic depth increases intersubject correlations of brain networks.

    PubMed

    Gaebler, Michael; Biessmann, Felix; Lamke, Jan-Peter; Müller, Klaus-Robert; Walter, Henrik; Hetzer, Stefan

    2014-10-15

    Three-dimensional movies presented via stereoscopic displays have become more popular in recent years, aiming at a more engaging viewing experience. However, the neurocognitive processes associated with the perception of stereoscopic depth in complex and dynamic visual stimuli remain understudied. Here, we investigate the influence of stereoscopic depth on both neurophysiology and subjective experience. Using multivariate statistical learning methods, we compare the brain activity of subjects freely watching the same movies in 2D and in 3D. Subjective reports indicate that 3D movies are experienced more strongly than 2D movies. On the neural level, we observe significantly higher intersubject correlations of cortical networks when subjects are watching 3D movies relative to the same movies in 2D. We demonstrate that increases in intersubject correlations of brain networks can serve as a neurophysiological marker for stereoscopic depth and for the strength of the viewing experience. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
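    The intersubject correlation (ISC) measure used above is typically computed leave-one-out: each subject's time course is correlated with the average of all remaining subjects. A minimal sketch (function name and toy data are ours):

    ```python
    import numpy as np

    def leave_one_out_isc(data):
        """Leave-one-out intersubject correlation.
        data: array of shape (n_subjects, n_timepoints)."""
        data = np.asarray(data, float)
        iscs = []
        for s in range(data.shape[0]):
            # Correlate subject s with the mean time course of everyone else
            others = np.delete(data, s, axis=0).mean(axis=0)
            iscs.append(np.corrcoef(data[s], others)[0, 1])
        return np.array(iscs)

    # Toy check: a shared stimulus-driven signal plus subject-specific noise
    rng = np.random.default_rng(0)
    shared = np.sin(np.linspace(0, 8 * np.pi, 200))
    subjects = shared + 0.1 * rng.standard_normal((5, 200))
    print(leave_one_out_isc(subjects).mean() > 0.9)  # True
    ```

    A stronger shared response (as reported for 3D viewing) raises these correlations; uncorrelated noise lowers them.
    
    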

  16. 3. SOUTH AND WEST SIDES OF BUILDING 525. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. SOUTH AND WEST SIDES OF BUILDING 525. VIEW TO NORTHEAST. - Rocky Mountain Arsenal, Acetylene Scrubbing Building-Product Development Laboratory, 700 feet South of December Seventh Avenue; 1030 feet East of D Street, Commerce City, Adams County, CO

  17. GENERAL VIEW OF THE MAIN FLOOR LEVEL, PLATFORM D-SOUTH, HB-3, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    GENERAL VIEW OF THE MAIN FLOOR LEVEL, PLATFORM D-SOUTH, HB-3, FACING NORTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  18. 3. SOUTH AND WEST SIDES OF BUILDING 514. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. SOUTH AND WEST SIDES OF BUILDING 514. VIEW TO NORTHEAST. - Rocky Mountain Arsenal, Lewisite Reactor & Distilled Mustard Distillation Building, 420 feet South of December Seventh Avenue; 1070 feet East of D Street, Commerce City, Adams County, CO

  19. Closer View of the Equatorial Region of the Sun, March 24, 2007 Anaglyph

    NASA Image and Video Library

    2007-04-27

    NASA Solar TErrestrial RElations Observatory satellites have provided the first 3-dimensional images of the Sun. This view will aid scientists' ability to understand solar physics and thereby improve space weather forecasting. 3D glasses are necessary.

  20. Extra dimensions: 3d and time in pdf documentation

    NASA Astrophysics Data System (ADS)

    Graf, N. A.

    2008-07-01

    High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.
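    As an illustrative sketch only (assuming the LaTeX media9 package, which wraps PDF RichMedia annotations for U3D/PRC content; the file name detector.u3d is a placeholder), embedding interactive 3D content of the kind described can look like:

    ```latex
    \documentclass{article}
    \usepackage{media9}  % RichMedia annotations; supports U3D/PRC 3D assets
    \begin{document}
    % Activate the 3D view when the page opens; show the viewer toolbar/menu
    \includemedia[
      width=0.8\linewidth, height=0.6\linewidth,
      activate=pageopen,
      3Dtoolbar, 3Dmenu
    ]{}{detector.u3d}
    \end{document}
    ```

    The resulting PDF lets the reader rotate and zoom the embedded model in Adobe Reader, as the abstract describes.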

  1. Definition of a safe zone for antegrade lag screw fixation of fracture of posterior column of the acetabulum by 3D technology.

    PubMed

    Feng, Xiaoreng; Zhang, Sheng; Luo, Qiang; Fang, Jintao; Lin, Chaowen; Leung, Frankie; Chen, Bin

    2016-03-01

    The objective of this study was to define a safe zone for antegrade lag screw fixation of fracture of the posterior column of the acetabulum using a novel 3D technology. Pelvic CT data of 59 human subjects were obtained to reconstruct three-dimensional (3D) models. The transparency of the 3D models was then downgraded along the axial perspective (the view perpendicular to the cross section of the posterior column axis) to find the largest translucent area. The outline of the largest translucent area was drawn on the iliac fossa. The line segments OA, AB, OC, CD and the angles OAB and OCD that delineate the safe zone (ABDC) were precisely measured. The resultant line segments OA, AB, OC, CD and angles OAB and OCD were 28.46 mm (13.15-44.97 mm), 45.89 mm (34.21-62.85 mm), 36.34 mm (18.68-55.56 mm), 53.08 mm (38.72-75.79 mm), 37.44° (24.32-54.96°) and 55.78° (43.97-79.35°), respectively. This study demonstrates that computer-assisted 3D modelling techniques can aid in the precise definition of the safe zone for antegrade insertion of posterior column lag screws. A full-length lag screw can be inserted into the zone (ABDC), permitting a larger operational error. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Innovative technologies to understand hydrogeomorphic impacts of climate change scenarios on gully development in drylands: case study from Ethiopia

    NASA Astrophysics Data System (ADS)

    Frankl, Amaury; Stal, Cornelis; Abraha, Amanuel; De Wulf, Alain; Poesen, Jean

    2014-05-01

    Taking climate change scenarios into account, rainfall patterns are likely to change over the coming decades in eastern Africa. In brief, large parts of eastern Africa are expected to experience wetting, including seasonality changes. Gullies are threshold phenomena that accomplish most of their geomorphic change during short periods of strong rainfall. Understanding the links between geomorphic change and rainfall characteristics in detail is thus crucial to ensure the sustainability of future land management. In this study, we present image-based 3D modelling as a low-cost, flexible and rapid method to quantify gully morphology from terrestrial photographs. The methodology was tested on two gully heads in Northern Ethiopia. Ground photographs (n = 88-235) were taken during days with cloud cover. The photographs were processed in PhotoScan software using a semi-automated Structure from Motion-Multi View Stereo (SfM-MVS) workflow. As a result, full 3D models were created, accurate at the cm level. These models make it possible to quantify gully morphology in detail, including information on undercut walls and soil pipe inlets. Such information is crucial for understanding the hydrogeomorphic processes involved. Producing accurate 3D models after each rainfall event allows the interrelations between rainfall, land management, runoff and erosion to be modelled. Expected outcomes are the production of detailed vulnerability maps that allow soil and water conservation measures to be designed in a cost-effective way. Keywords: 3D model, Ethiopia, Image-based 3D modelling, Gully, PhotoScan, Rainfall.
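    At its core, the SfM-MVS workflow recovers camera poses from overlapping photographs and then triangulates matched pixels into 3D points. The triangulation step can be sketched with a linear (DLT) solver; the camera matrices and point below are hypothetical, not from the study:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point from two views.
        P1, P2: 3x4 camera matrices; x1, x2: matched 2D image points."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The homogeneous solution is the right singular vector of A
        # with the smallest singular value
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    def project(P, X):
        """Project a 3D point with a 3x4 camera matrix."""
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Hypothetical calibrated cameras: identity pose and a 1 m baseline
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, -0.1, 5.0])
    X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
    print(np.allclose(X_hat, X_true))  # True
    ```

    Pipelines like PhotoScan repeat this over thousands of matched features, then densify the point cloud in the MVS stage.
    
    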

  3. Extra Dimensions: 3D and Time in PDF Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Norman A.; /SLAC

    2011-11-10

    High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.

  4. Interactive Computer-Enhanced Remote Viewing System (ICERVS): Final report, November 1994--September 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-05-01

    The Interactive Computer-Enhanced Remote Viewing System (ICERVS) is a software tool for complex three-dimensional (3-D) visualization and modeling. Its primary purpose is to facilitate the use of robotic and telerobotic systems in remote and/or hazardous environments, where spatial information is provided by 3-D mapping sensors. ICERVS provides a robust, interactive system for viewing sensor data in 3-D and combines this with interactive geometric modeling capabilities that allow an operator to construct CAD models to match the remote environment. Part I of this report traces the development of ICERVS through three evolutionary phases: (1) development of first-generation software to render orthogonal view displays and wireframe models; (2) expansion of this software to include interactive viewpoint control, surface-shaded graphics, material (scalar and nonscalar) property data, cut/slice planes, color and visibility mapping, and generalized object models; (3) demonstration of ICERVS as a tool for the remediation of underground storage tanks (USTs) and the dismantlement of contaminated processing facilities. Part II of this report details the software design of ICERVS, with particular emphasis on its object-oriented architecture and user interface.

  5. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40° × 20° field-of-view. The whole system is very rugged and compact, well suited to a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each.
    We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in a real traffic scenario. The achieved long range (up to 45 m), high dynamic range (118 dB), high speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
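    Indirect TOF recovers distance from the phase shift of the modulated illumination rather than from a direct pulse timing. A minimal four-bucket sketch (function name, sample values, and the 2.5 MHz modulation are ours; the paper quotes modulation up to 25 MHz):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def itof_distance(a0, a90, a180, a270, f_mod):
        """Distance from four phase-stepped intensity samples (4-bucket iTOF).
        Unambiguous range is C / (2 * f_mod)."""
        phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
        return C * phase / (4 * math.pi * f_mod)

    # Synthetic samples for a target at 30 m with 2.5 MHz modulation
    # (unambiguous range ~ 60 m at this frequency)
    f, d_true = 2.5e6, 30.0
    phi = 4 * math.pi * f * d_true / C
    samples = [1 + math.cos(phi - s)
               for s in (0, math.pi / 2, math.pi, 3 * math.pi / 2)]
    print(round(itof_distance(*samples, f), 3))  # 30.0
    ```

    Lower modulation frequencies extend the unambiguous range at the cost of depth resolution, which is why range and precision trade off in such systems.
    
    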

  6. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and its challenges in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING moving pathlets provide an intuition of velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow an advanced method for intelligent, time-dependent seeding is used building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D provides many new challenges. With the implementation of a seeding strategy for 3D one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the rendering through raytracing of the volume and regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. 
    For this, the silhouette is extracted based on the angle between neighboring faces. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point sprite-based approach has many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater Flow. In Proceedings of IAMG 2015 Freiberg, pp. 813-822.
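    The face-angle silhouette criterion mentioned above can be sketched on a triangle mesh: flag an edge when its two faces meet at more than a threshold angle, and keep open boundary edges. The threshold and names are illustrative, not STRING's actual implementation:

    ```python
    import numpy as np
    from collections import defaultdict

    def feature_edges(vertices, faces, angle_deg=60.0):
        """Edges whose two adjacent faces meet at more than angle_deg,
        plus boundary edges. faces: list of vertex-index triangles."""
        v = np.asarray(vertices, float)
        normals = []
        edge_faces = defaultdict(list)
        for fi, (a, b, c) in enumerate(faces):
            n = np.cross(v[b] - v[a], v[c] - v[a])
            normals.append(n / np.linalg.norm(n))
            for e in ((a, b), (b, c), (c, a)):
                edge_faces[tuple(sorted(e))].append(fi)
        cos_limit = np.cos(np.radians(angle_deg))
        sharp = []
        for edge, fs in edge_faces.items():
            if len(fs) == 1:                 # open boundary: always keep
                sharp.append(edge)
            elif np.dot(normals[fs[0]], normals[fs[1]]) < cos_limit:
                sharp.append(edge)           # sharp fold: keep
        return sharp

    # Two triangles folded 90 degrees share the sharp edge (0, 1)
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    print((0, 1) in feature_edges(verts, [(0, 1, 2), (0, 1, 3)]))  # True
    ```
    
    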

  7. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
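    Overviews of long, irregularly sampled series like those Falcon handles are typically built by aggregating samples into fixed-width time bins. A minimal binning sketch (names and data are ours, not Falcon's API):

    ```python
    from collections import defaultdict

    def bin_series(times, values, bin_width):
        """Aggregate an irregularly sampled series into fixed-width bins,
        returning {bin_start_time: mean_value}."""
        sums = defaultdict(lambda: [0.0, 0])
        for t, v in zip(times, values):
            b = int(t // bin_width)
            sums[b][0] += v
            sums[b][1] += 1
        return {b * bin_width: s / n for b, (s, n) in sorted(sums.items())}

    print(bin_series([0.1, 0.4, 1.2, 3.9], [2, 4, 6, 8], 1.0))
    # {0.0: 3.0, 1.0: 6.0, 3.0: 8.0}
    ```

    Note that empty bins simply do not appear, which preserves the gaps an irregular sensor stream actually contains.
    
    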

  8. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.; Halsey, William; Dehoff, Ryan

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  9. Electrical characterization of FBK small-pitch 3D sensors after γ-ray, neutron and proton irradiations

    NASA Astrophysics Data System (ADS)

    Dalla Betta, G.-F.; Boscardin, M.; Hoeferkamp, M.; Mendicino, R.; Seidel, S.; Sultan, D. M. S.

    2017-11-01

    In view of applications in the tracking detectors at the High Luminosity LHC (HL-LHC), we have developed a new generation of 3D pixel sensors featuring small pitch (50 × 50 or 25 × 100 μm²) and a thin active layer (~100 μm). Owing to the very short inter-electrode distance (~30 μm), charge trapping effects can be strongly mitigated, making these sensors extremely radiation hard. However, the downscaled sensor structure also lends itself to high electric fields as the bias voltage is increased, motivating investigation of the leakage current increase in order to prevent premature electrical breakdown due to impact ionization. In order to assess the characteristics of heavily irradiated samples, using 3D diodes as test devices, we have carried out a dedicated campaign that included several irradiations (γ-rays, neutrons, and protons) at different facilities. In this paper, we report on the electrical characterization of a subset of the irradiated samples, also in comparison to their pre-irradiation properties. Results demonstrate that hadron-irradiated devices can be safely operated at a voltage high enough to allow for full depletion (hence high efficiency) even at the maximum fluence foreseen at the HL-LHC.

  10. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle used to carry the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination change, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
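    The abstract does not spell out the separated Hough transform; as a generic illustration of the standard Hough line detection it builds on, points vote in a (θ, ρ) accumulator and the strongest cell gives the line x·cos θ + y·sin θ = ρ (resolution choices below are ours):

    ```python
    import numpy as np

    def hough_peak(points, n_theta=180, rho_res=1.0):
        """Vote in a (theta, rho) accumulator; return the strongest line."""
        pts = np.asarray(points, float)
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rho_max = float(np.hypot(pts[:, 0], pts[:, 1]).max()) + 1.0
        n_rho = int(2 * rho_max / rho_res) + 1
        acc = np.zeros((n_theta, n_rho), dtype=int)
        for x, y in pts:
            # Each point votes for every line passing through it
            rhos = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.round((rhos + rho_max) / rho_res).astype(int)
            acc[np.arange(n_theta), idx] += 1
        t, r = np.unravel_index(np.argmax(acc), acc.shape)
        return thetas[t], r * rho_res - rho_max

    # Points on the vertical line x = 5 -> theta ~ 0, rho ~ 5
    theta, rho = hough_peak([(5, y) for y in range(10)])
    print(abs(theta) < 1e-9, abs(rho - 5) <= 0.5)  # True True
    ```

    Voting makes the detection robust to the illumination-driven dropouts the paper mentions, since missing pixels only lower the peak rather than break the fit.
    
    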

  11. Almost Like Being at Bonneville

    NASA Image and Video Library

    2004-03-17

    NASA Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called Bonneville. The rover solar panels can be seen in the foreground. 3D glasses are necessary to view this image.

  12. 3. Historic American Buildings Survey Frederick D. Nichols, Photographer December ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Historic American Buildings Survey Frederick D. Nichols, Photographer December 1937 VIEW OF PRESENT RANCH HOUSE LOOKING WEST - Pete Kitchen Ranch House, Portrero Creek Vicinity, Nogales, Santa Cruz County, AZ

  13. 3. Historic American Buildings Survey Frederick D. Nichols, Photographer September ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Historic American Buildings Survey Frederick D. Nichols, Photographer September 1937 VIEW OF CHURCH - LOOKING NORTHWEST - San Cayetano de Calabasas (Mission, Ruins), Santa Cruz River Vicinity, Nogales, Santa Cruz County, AZ

  14. Construction of a three-dimensional interactive model of the skull base and cranial nerves.

    PubMed

    Kakizawa, Yukinari; Hongo, Kazuhiro; Rhoton, Albert L

    2007-05-01

    The goal was to develop an interactive three-dimensional (3-D) computerized anatomic model of the skull base for teaching microneurosurgical anatomy and for operative planning. The 3-D model was constructed using commercially available software (Maya 6.0 Unlimited; Alias Systems Corp., Delaware, MD), a personal computer, four cranial specimens, and six dry bones. Photographs from at least two angles of the superior and lateral views were imported to the 3-D software. Many photographs were needed to produce the model in anatomically complex areas. Careful dissection was needed to expose important structures in the two views. Landmarks, including foramen, bone, and dura mater, were used as reference points. The 3-D model of the skull base and related structures was constructed using more than 300,000 remodeled polygons. The model can be viewed from any angle. It can be rotated 360 degrees in any plane using any structure as the focal point of rotation. The model can be reduced or enlarged using the zoom function. Variable transparencies could be assigned to any structures so that the structures at any level can be seen. Anatomic labels can be attached to the structures in the 3-D model for educational purposes. This computer-generated 3-D model can be observed and studied repeatedly without the time limitations and stresses imposed by surgery. This model may offer the potential to create interactive surgical exercises useful in evaluating multiple surgical routes to specific target areas in the skull base.

  15. Evaluation of stereoscopic display with visual function and interview

    NASA Astrophysics Data System (ADS)

    Okuyama, Fumio

    1999-05-01

    The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function testing and interviews. A 40-inch double lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured in the optometric tests were visual acuity, refraction, phoria, near vision point, accommodation, etc. The interview consisted of 17 questions. Testing procedures were performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function are characterized as prolongation of the near vision point, decrease of accommodation and increase in phoria. The 3D viewing interview results show much more visual fatigue in comparison with the 2D results. The conclusions are: 1) change in visual function is larger and visual fatigue is more intense when viewing 3D images. 2) The evaluation method combining visual function testing and interviews proved very satisfactory for analyzing the influence of a stereoscopic display on the human eye.

  16. Three-dimensional digital breast histopathology imaging

    NASA Astrophysics Data System (ADS)

    Clarke, G. M.; Peressotti, C.; Mawdsley, G. E.; Eidt, S.; Ge, M.; Morgan, T.; Zubovits, J. T.; Yaffe, M. J.

    2005-04-01

    We have developed a digital histology imaging system that has the potential to improve the accuracy of surgical margin assessment in the treatment of breast cancer by providing finer sampling and 3D visualization. The system is capable of producing a 3D representation of histopathology from an entire lumpectomy specimen. We acquire digital photomicrographs of a stack of large (120 x 170 mm) histology slides cut serially through the entire specimen. The images are then registered and displayed in 2D and 3D. This approach dramatically improves sampling and can improve visualization of tissue structures compared to current, small-format histology. The system consists of a brightfield microscope, adapted with a freeze-frame digital video camera and a large, motorized translation stage. The image of each slide is acquired as a mosaic of adjacent tiles, each tile representing one field-of-view of the microscope, and the mosaic is assembled into a seamless composite image. The assembly is done by a program developed to build image sets at six different levels within a multiresolution pyramid. A database-linked viewing program has been created to efficiently register and display the animated stack of images, which occupies about 80 GB of disk space per lumpectomy at full resolution, on a high-resolution (3840 x 2400 pixels) colour monitor. The scanning or tiling approach to digitization is inherently susceptible to two artefacts which disrupt the composite image, and which impose more stringent requirements on system performance. Although non-uniform illumination across any one isolated tile may not be discernible, the eye readily detects this non-uniformity when the entire assembly of tiles is viewed. The pattern is caused by deficiencies in optical alignment, spectrum of the light source, or camera corrections. The imaging task requires that features as small as 3.2 μm in extent be seamlessly preserved.
However, inadequate accuracy in positioning of the translation stage produces visible discontinuities between adjacent features. Both of these effects can distract the viewer from the perception of diagnostically important features. Here we describe the system design and discuss methods for the correction of these artefacts. In addition, we outline our approach to rendering the processing and display of these large images computationally feasible.
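The illumination artefact described above is commonly suppressed with a flat-field (bright-field/dark-field) correction applied to each tile before the mosaic is assembled. The sketch below illustrates that standard correction on synthetic data; the gain model, tile size, and reference frames are illustrative assumptions, not details of this particular system.

```python
# Sketch: flat-field correction of one microscope tile using bright- and
# dark-field reference frames, so adjacent tiles butt together seamlessly.
import numpy as np

def flat_field_correct(tile, bright, dark):
    """Correct a tile: (tile - dark) divided by the normalized gain field."""
    gain = bright.astype(float) - dark
    gain /= gain.mean()          # normalize so mean intensity is preserved
    return (tile.astype(float) - dark) / np.maximum(gain, 1e-6)

# Simulate a uniform specimen seen through a vignetting optic.
h, w = 8, 8
yy, xx = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.3 * ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * h)
dark = np.full((h, w), 2.0)      # camera offset
truth = np.full((h, w), 100.0)   # true specimen brightness
tile = truth * vignette + dark
bright = 200.0 * vignette + dark # blank-slide reference exposure

corrected = flat_field_correct(tile, bright, dark)
print(round(corrected.std(), 6))  # prints 0.0: the tile is uniform again
```

After correction the non-uniform pattern cancels, which is why the eye no longer picks it out when the full assembly of tiles is viewed.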

  17. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.

  18. View-Based Models of 3D Object Recognition and Class-Specific Invariance

    DTIC Science & Technology

    1994-04-01

    underlie recognition of geon-like components (see Edelman, 1991 and Biederman, 1987). ||x - t_a||_W^2 = (x - t_a)^T W^T W (x - t_a) (3) View-invariant features... Institute of Technology, 1993. ... neocortex. Biological Cybernetics, 1992. [14] I. Biederman. Recognition by components: a theory of human image understanding. Psychol. Review, 94:115-147, 1987. [20] B. Olshausen, C. Anderson, and D. Van Essen. A neural model of visual attention and invariant pattern...

  19. Repercussion of geometric and dynamic constraints on the 3D rendering quality in structurally adaptive multi-view shooting systems

    NASA Astrophysics Data System (ADS)

    Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine

    2011-12-01

    In this paper a simulator of a multi-view shooting system with parallel optical axes and structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation and capture parameters are then defined. An appropriate perspective projection model is then derived to build a simulator. The simulator is first used to validate the global geometrical process in the case of a static configuration. Next, it is used to show the limitations of a static configuration of this type of shooting system for dynamic scenes, and a dynamic scheme is devised to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied. Conclusions and perspectives end the paper.

  20. 3. NORTH AND EAST SIDES OF BUILDING 1601/1606/1607. VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. NORTH AND EAST SIDES OF BUILDING 1601/1606/1607. VIEW TO SOUTHWEST. - Rocky Mountain Arsenal, Cluster Bomb Assembly-Filling-Storage Building, 3500 feet South of Ninth Avenue; 2870 feet East of D Street, Commerce City, Adams County, CO

  1. 3. Credit BG. Interior view looks northeast (46°) at fire ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Credit BG. Interior view looks northeast (46°) at fire pumps, valves, and emergency generator (powered by an internal combustion engine). - Edwards Air Force Base, North Base, Deluge Water Pumping Station, Near Second & D Streets, Boron, Kern County, CA

  2. Close-up View of an Active Region of the Sun, March 23, 2007 Anaglyph

    NASA Image and Video Library

    2007-04-27

    NASA Solar TErrestrial RElations Observatory satellites have provided the first 3-dimensional images of the Sun. This view will aid scientists' ability to understand solar physics and improve space weather forecasting. 3D glasses are necessary.

  3. Immersive Earth Science: Data Visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

    Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission outcomes through VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  4. Optimization of spine surgery planning with 3D image templating tools

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Huddleston, Paul M.; Holmes, David R., III; Shridharani, Shyam M.; Robb, Richard A.

    2008-03-01

    The current standard of care for patients with spinal disorders involves a thorough clinical history, physical exam, and imaging studies. Simple radiographs provide a valuable assessment but prove inadequate for surgery planning because of the complex 3-dimensional anatomy of the spinal column and the close proximity of the neural elements, large blood vessels, and viscera. Currently, clinicians still use primitive techniques such as paper cutouts, pencils, and markers in an attempt to analyze and plan surgical procedures. 3D imaging studies are routinely ordered prior to spine surgeries but are currently limited to generating simple, linear and angular measurements from 2D views orthogonal to the central axis of the patient. Complex spinal corrections require more accurate and precise calculation of 3D parameters such as oblique lengths, angles, levers, and pivot points within individual vertebrae. We have developed a clinician-friendly spine surgery planning tool which incorporates rapid oblique reformatting of each individual vertebra, followed by interactive templating for 3D placement of implants. The template placement is guided by the simultaneous representation of multiple 2D section views from reformatted orthogonal views and a 3D rendering of individual or multiple vertebrae enabling superimposition of virtual implants. These tools run efficiently on desktop PCs typically found in clinician offices or workrooms. A preliminary study conducted with Mayo Clinic spine surgeons using several actual cases suggests significantly improved accuracy of pre-operative measurements and implant localization, which is expected to increase spinal procedure efficiency and safety, and reduce time and cost of the operation.

  5. Scalable 3D image conversion and ergonomic evaluation

    NASA Astrophysics Data System (ADS)

    Kishi, Shinsuke; Kim, Sang Hyun; Shibata, Takashi; Kawai, Takashi; Häkkinen, Jukka; Takatalo, Jari; Nyman, Göte

    2008-02-01

    Digital 3D cinema has recently become popular and a number of high-quality 3D films have been produced. However, in contrast with advances in 3D display technology, it has been pointed out that there is a lack of suitable 3D content and content creators. Since 3D display methods and viewing environments vary widely, there is expectation that high-quality content will be multi-purposed. On the other hand, there is increasing interest in the bio-medical effects of image content of various types and there are moves toward international standardization, so 3D content production needs to take into consideration safety and conformity with international guidelines. The aim of the authors' research is to contribute to the production and application of 3D content that is safe and comfortable to watch by developing a scalable 3D conversion technology. In this paper, the authors focus on the process of changing the screen size, examining a conversion algorithm and its effectiveness. The authors evaluated the visual load imposed during the viewing of various 3D content converted by the prototype algorithm as compared with ideal conditions and with content expanded without conversion. Scheffé's paired comparison method was used for evaluation. To examine the effects of screen size reduction on viewers, changes in user impression and experience were elucidated using the IBQ methodology. The results of the evaluation are presented along with a discussion of the effectiveness and potential of the developed scalable 3D conversion algorithm and future research tasks.

  6. Remote Sensing of Clouds for Solar Forecasting Applications

    NASA Astrophysics Data System (ADS)

    Mejia, Felipe

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (SZA), τc, solar pixel angle/scattering angle (SPA), and pixel zenith angle/view angle (PZA). The effects of these parameters are described and the functions for radiance, I_λ(τc, SZA, SPA, PZA), and the red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured I_λ^meas(SPA, PZA), in addition to RBR^meas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min [MH96a] method for overcast skies. τc values ranged from 0-80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms. Using the RRBR τc estimate as an input, we then explore the potential of tomographic techniques for 3-D cloud reconstruction. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients.
Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients, and liquid water path, as a function of the number of available sky imagers (SIs) and setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the extinction coefficient (k) error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing distance up to 3 km (the domain length) improved the 3-D reconstruction of surface irradiance, while k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world cases in the University of California San Diego (UCSD) solar testbed. Five UCSD sky imagers (USIs) were installed across the testbed based on the best-performing distances in simulations. Topographic obstruction is explored as a source of error by analyzing the increased error with obstruction in the field of view of the horizon: as more of the horizon is obstructed, the error increases, but if at least a 70° field of view is available to the camera, the accuracy is within 2% of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from images and comparing the cloud reconstruction to that from a full image. When less than 30% of the circumsolar region was removed, image and GHI errors were within 0.2% of the full image, while errors in k increased 1%.
Removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3D cloud was reconstructed and compared to the fifth camera. The image of the fifth camera (excluded from the reconstruction) was then simulated and found to have a 22.9% error compared to the ground truth.
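The Algebraic Reconstruction Technique named above is, at its core, the Kaczmarz row-action solver for the linear ray-integral system A x = b, where each row of A is one camera ray and x holds voxel extinction values. The toy sketch below shows the update rule on a 3-voxel system; the geometry is invented and far simpler than the multi-imager setup in the abstract.

```python
# Sketch: ART (cyclic Kaczmarz) for a consistent linear system A x = b.
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Project x onto the hyperplane a . x = b[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Three "rays", each summing the extinction of two of three voxels.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([0.5, 1.0, 0.2])   # true voxel extinction coefficients
b = A @ x_true                        # simulated path-integral measurements
x = art(A, b)
print(np.round(x, 3))                 # ≈ [0.5, 1.0, 0.2]
```

The iterative refinement mentioned in the abstract plays the same role at a larger scale: repeatedly reducing the mismatch between the input images and images simulated from the current reconstruction.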

  7. Photogrammetry Toolbox Reference Manual

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Burner, Alpheus W.

    2014-01-01

    Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
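A representative multi-view photogrammetric solution of the kind such a toolbox provides is linear (DLT-style) triangulation: each view contributes two linear equations in the unknown 3D point, solved by least squares via the SVD. The camera matrices below are invented for illustration and are not taken from the toolbox itself.

```python
# Sketch: triangulating one 3D point from two or more calibrated views.
import numpy as np

def triangulate(Ps, uvs):
    """Ps: list of 3x4 projection matrices; uvs: matching (u, v) points."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])   # each view gives two linear equations
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    # Homogeneous least-squares solution: right singular vector of least
    # singular value, then dehomogenize.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras looking down +z, offset along x (unit stereo baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

uvs = [project(P1, X_true), project(P2, X_true)]
print(np.round(triangulate([P1, P2], uvs), 6))  # recovers [0.3, -0.2, 4.0]
```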

  8. Patient-specific quality assurance for the delivery of 60Co intensity modulated radiation therapy subject to a 0.35 T lateral magnetic field

    PubMed Central

    Li, H. Harold; Rodriguez, Vivian L.; Green, Olga L.; Hu, Yanle; Kashani, Rojano; Wooten, H. Omar; Yang, Deshan; Mutic, Sasa

    2014-01-01

    Purpose This work describes a patient-specific dosimetry quality assurance (QA) program for intensity modulated radiation therapy (IMRT) using ViewRay, the first commercial magnetic resonance imaging guided radiation therapy device. Methods and materials The program consisted of the following components: 1) one-dimensional multipoint ionization chamber measurement using a customized 15 cm3 cubic phantom, 2) two-dimensional (2D) radiographic film measurement using a 30×30×20 cm3 phantom with multiple inserted ionization chambers, 3) quasi-three-dimensional (3D) diode array (ArcCHECK) measurement with a centrally inserted ionization chamber, 4) 2D fluence verification using machine delivery log files, and 5) 3D Monte-Carlo (MC) dose reconstruction with machine delivery files and phantom CT. Results The ionization chamber measurements agreed well with treatment planning system (TPS) computed doses in all phantom geometries where the mean difference (mean ± SD) was 0.0% ± 1.3% (n=102, range, −3.0% to 2.9%). The film measurements also showed excellent agreement with the TPS computed 2D dose distributions where the mean passing rate using 3% relative/3 mm gamma criteria was 94.6% ± 3.4% (n=30, range, 87.4% to 100%). For ArcCHECK measurements, the mean passing rate using 3% relative/3 mm gamma criteria was 98.9% ± 1.1% (n=34, range, 95.8% to 100%). 2D fluence maps with a resolution of 1×1 mm2 showed 100% passing rates for all plan deliveries (n=34). The MC reconstructed doses to the phantom agreed well with planned 3D doses where the mean passing rate using 3% absolute/3 mm gamma criteria was 99.0% ± 1.0% (n=18, range, 97.0% to 100%), demonstrating the feasibility of evaluating the QA results in the patient geometry. Conclusions We have developed a dosimetry program for ViewRay’s patient-specific IMRT QA. The methodology will be useful for other ViewRay users.
The QA results presented here can assist the RT community to establish appropriate tolerance and action limits for ViewRay’s IMRT QA. PMID:25442343
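The "3%/3 mm gamma criteria" quoted throughout this abstract refer to the standard gamma-index comparison between measured and planned dose. A minimal 1D sketch of a global gamma passing-rate calculation is shown below on synthetic dose profiles; real QA software evaluates full 2D/3D dose grids.

```python
# Sketch: 1D global gamma index with 3% dose / 3 mm distance-to-agreement.
import numpy as np

def gamma_pass_rate(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Fraction of reference points with gamma <= 1 (global normalization)."""
    d_norm = dose_tol * ref.max()   # dose tolerance relative to max dose
    passed = []
    for xi, di in zip(x, ref):
        # gamma = min over measured points of the combined dose/distance metric
        g = np.sqrt(((meas - di) / d_norm) ** 2 +
                    ((x - xi) / dist_tol) ** 2).min()
        passed.append(g <= 1.0)
    return float(np.mean(passed))

x = np.linspace(0, 100, 201)                      # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)           # synthetic planned profile
meas = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)   # 1% scaled, 0.5 mm shifted
print(gamma_pass_rate(ref, meas, x))  # prints 1.0: all points pass 3%/3 mm
```

A small dose scaling plus a sub-millimetre shift stays comfortably inside both tolerances, which is why well-tuned deliveries report passing rates near 100%.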

  9. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  10. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  11. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  12. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    PubMed

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

    The volumetric shape of a human embryo and its development are hard to comprehend when viewed as 2D schematics in a textbook or as microscopic sectional images. In this paper, a CAI and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data are acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an organ. Moreover, the development process of embryos can be animated using a morphing technique applied to specimens at several stages. The system is intended to be used interactively, like a virtual reality system; hence, it is called Virtual Embryology.

  13. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  14. America National Parks Viewed in 3D by NASA MISR Anaglyph 3

    NASA Image and Video Library

    2016-08-25

    Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite is releasing four new anaglyphs that showcase 33 of our nation's national parks, monuments, historical sites and recreation areas in glorious 3D. Shown in the annotated image are Lewis and Clark National Historic Park, Mt. Rainier National Park, Olympic National Park, Ebey's Landing National Historical Reserve, San Juan Island National Historic Park, North Cascades National Park, Lake Chelan National Recreation Area, and Ross Lake National Recreation Area (also Mt. St. Helens National Volcanic Monument, administered by the U.S. Forest Service). MISR views Earth with nine cameras pointed at different angles, giving it the unique capability to produce anaglyphs, stereoscopic images that allow the viewer to experience the landscape in three dimensions. The anaglyphs were made by combining data from MISR's vertical-viewing and 46-degree forward-pointing camera. You will need red-blue glasses in order to experience the 3D effect; ensure you place the red lens over your left eye. The images have been rotated so that north is to the left in order to enable 3D viewing because the Terra satellite flies from north to south. All of the images are 235 miles (378 kilometers) from west to east. These data were acquired May 12, 2012, Orbit 65960. http://photojournal.jpl.nasa.gov/catalog/PIA20891

  15. Characterization and optimization of 3D-LCD module design

    NASA Astrophysics Data System (ADS)

    van Berkel, Cees; Clarke, John A.

    1997-05-01

    Autostereoscopic displays with flat panel liquid crystal display and lenticular sheets are receiving much attention. Multiview 3D-LCD is truly autostereoscopic because no head tracking is necessary and the technology is well poised to become a mass market consumer 3D display medium as the price of liquid crystal displays continues to drop. Making the viewing experience as natural as possible is of prime importance. The main challenges are to reduce the picket fence effect of the black mask and to try to get away with as few perspective views as possible. Our solution is to 'blur' the boundaries between the views. This hides the black mask image by spreading it out and softens the transition between one view and the next, encouraging the user to perceive 'solid objects' instead of a succession of flipping views. One way to achieve this is by introducing a new pixel design in which the pixels are slanted with respect to the column direction. Another way is to place the lenticular at a small (9.46 degree) angle with respect to the LCD columns. The effect of either method is that, as the observer moves sideways in front of the display, he always 'sees' a constant amount of black mask. This renders the black mask, in effect, invisible and eliminates the picket fence effect.
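The slanted-lenticular idea can be made concrete with a subpixel-to-view assignment in the spirit of van Berkel's interleaving: the view shown by each RGB subpixel follows from its horizontal position under the slanted lens, so successive rows sample slightly shifted views and the black mask is spread out. Only the 9.46° slant comes from the abstract; the lens pitch and view count below are illustrative assumptions.

```python
# Sketch: which view each subpixel displays under a slanted lenticular.
import math

def view_index(col, row, n_views=7, lens_px=4.5, slant_deg=9.46):
    """Return the view (0..n_views-1) shown by subpixel (col, row)."""
    # Horizontal position under the lens, shifted by the slant per row
    # (factor 3 converts the pixel-level slant to RGB-subpixel units).
    x = col + row * math.tan(math.radians(slant_deg)) * 3
    return int((x % lens_px) / lens_px * n_views)

# Moving one row down shifts which views sit under the lens, so no single
# vertical column of black mask is ever magnified into a "picket fence".
top = [view_index(c, 0) for c in range(9)]
down = [view_index(c, 1) for c in range(9)]
print(top)
print(down)
```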

  16. Evaluation of Multiclass Model Observers in PET LROC Studies

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
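An NPW observer with explicit localization, as used in this study, scores every candidate location by cross-correlating the image with the expected signal template and reports the maximizing location. The sketch below illustrates this on a synthetic Gaussian lesion in white noise; the signal shape, noise level, and image size are illustrative, not the study's phantom.

```python
# Sketch: nonprewhitening matched-filter observer with explicit localization.
import numpy as np

rng = np.random.default_rng(0)

def npw_localize(img, signal):
    """Return the best template-match score and its top-left location."""
    sh, sw = signal.shape
    best, loc = -np.inf, None
    for i in range(img.shape[0] - sh + 1):
        for j in range(img.shape[1] - sw + 1):
            # NPW test statistic: plain cross-correlation, no noise whitening
            score = np.sum(img[i:i + sh, j:j + sw] * signal)
            if score > best:
                best, loc = score, (i, j)
    return best, loc

# Gaussian blob signal embedded in white noise.
y, x = np.mgrid[-3:4, -3:4]
signal = np.exp(-(x ** 2 + y ** 2) / 4.0)
img = 0.2 * rng.standard_normal((32, 32))
img[10:17, 14:21] += signal          # true top-left location: (10, 14)

score, loc = npw_localize(img, signal)
print(loc)
```

In an LROC analysis, a trial counts as a true positive only if this reported location falls within a tolerance of the actual lesion site, which is why localization must be explicit in the model.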

  17. The impact of acquisition angle differences on three-dimensional quantitative coronary angiography.

    PubMed

    Tu, Shengxian; Holm, Niels R; Koning, Gerhard; Maeng, Michael; Reiber, Johan H C

    2011-08-01

    Three-dimensional (3D) quantitative coronary angiography (QCA) requires two angiographic views to restore vessel dimensions. This study investigated the impact of acquisition angle differences (AADs) of the two angiographic views on the assessed dimensions by 3D QCA. X-ray angiograms of an assembled brass phantom with different types of straight lesions were recorded at multiple angiographic projections. The projections were randomly matched as pairs and 3D QCA was performed in those pairs with AAD larger than 25°. The lesion length and diameter stenosis in three different lesions, a circular concentric severe lesion (A), a circular concentric moderate lesion (B), and a circular eccentric moderate lesion (C), were measured by 3D QCA. The acquisition protocol was repeated for a silicone bifurcation phantom, and the bifurcation angles and bifurcation core volume were measured by 3D QCA. The measurements were compared with the true dimensions if applicable and their correlation with AAD was studied. 50 matched pairs of angiographic views were analyzed for the brass phantom. The average value of AAD was 48.0 ± 14.1°. The percent diameter stenosis was slightly overestimated by 3D QCA for all lesions: A (error 1.2 ± 0.9%, P < 0.001); B (error 0.6 ± 0.5%, P < 0.001); C (error 1.1 ± 0.6%, P < 0.001). The correlation of the measurements with AAD was only significant for lesion A (R² = 0.151, P = 0.005). The lesion length was slightly overestimated by 3D QCA for lesion A (error 0.06 ± 0.18 mm, P = 0.026), but well assessed for lesion B (error -0.00 ± 0.16 mm, P = 0.950) and lesion C (error -0.01 ± 0.18 mm, P = 0.585). The correlation of the measurements with AAD was not significant for any lesion. Forty matched pairs of angiographic views were analyzed for the bifurcation phantom. The average value of AAD was 49.1 ± 15.4°. 3D QCA slightly overestimated the proximal angle (error 0.4 ± 1.1°, P = 0.046) and the distal angle (error 1.5 ± 1.3°, P < 0.001).
The correlation with AAD was only significant for the distal angle (R² = 0.256, P = 0.001). The correlation of bifurcation core volume measurements with AAD was not significant (P = 0.750). Of the two aforementioned measurements with significant correlation with AAD, the errors tended to increase as AAD became larger. 3D QCA can be used to reliably assess vessel dimensions and bifurcation angles. Increasing the AAD of the two angiographic views does not increase accuracy and precision of 3D QCA for circular lesions or bifurcation dimensions. Copyright © 2011 Wiley-Liss, Inc.
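For reference, the percent diameter stenosis figures reported by QCA follow the conventional definition from the minimal lumen diameter (MLD) and the reference vessel diameter; the example diameters below are invented.

```python
# Sketch: the standard percent-diameter-stenosis formula used in QCA.
def percent_diameter_stenosis(mld_mm, ref_mm):
    """%DS = (1 - MLD / reference diameter) x 100."""
    return (1.0 - mld_mm / ref_mm) * 100.0

# A 1.2 mm minimal lumen in a 3.0 mm reference vessel:
print(round(percent_diameter_stenosis(1.2, 3.0), 6))  # prints 60.0
```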

  18. Full three-dimensional isotropic carpet cloak designed by quasi-conformal transformation optics.

    PubMed

    Silva, Daniely G; Teixeira, Poliane A; Gabrielli, Lucas H; Junqueira, Mateus A F C; Spadoti, Danilo H

    2017-09-18

    A fully three-dimensional carpet cloak presenting invisibility at all viewing angles is theoretically demonstrated. The design is developed using transformation optics and three-dimensional quasi-conformal mapping. A parametrization strategy and numerical optimization of the coordinate transformation using a quasi-Newton method are applied. The minimum achievable anisotropy in 3D transformation optics is discussed. The method reduces the anisotropy of the cloak to the point where an isotropic medium can be assumed. Numerical simulations confirm that the strategy enables the design of an isotropic, reflectionless, broadband carpet cloak independent of the direction and polarization of the incident light.

  19. Spatial and symbolic queries for 3D image data

    NASA Astrophysics Data System (ADS)

    Benson, Daniel C.; Zick, Gregory L.

    1992-04-01

    We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.
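    The core of a combined spatial/symbolic query can be sketched as a filter that requires both a geometric test (here, axis-aligned bounding-box intersection with a query volume) and a symbolic predicate over object attributes. The schema and object names below are hypothetical, not the paper's data model:

    ```python
    def boxes_intersect(a, b):
        """Axis-aligned 3D boxes given as ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
        (amin, amax), (bmin, bmax) = a, b
        return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

    def query(objects, volume, predicate):
        """Combined query: spatial intersection AND symbolic predicate."""
        return [o for o in objects
                if boxes_intersect(o["bbox"], volume) and predicate(o)]

    # Hypothetical anatomical objects with bounding boxes and symbolic attributes.
    objects = [
        {"name": "ventricle", "tissue": "CSF",  "bbox": ((2, 2, 2), (4, 4, 4))},
        {"name": "thalamus",  "tissue": "gray", "bbox": ((3, 3, 3), (5, 5, 5))},
        {"name": "skull",     "tissue": "bone", "bbox": ((9, 9, 9), (10, 10, 10))},
    ]
    hits = query(objects, ((0, 0, 0), (4, 4, 4)), lambda o: o["tissue"] != "bone")
    print([o["name"] for o in hits])  # -> ['ventricle', 'thalamus']
    ```

    Results of such a query could then serve as query objects for the next round, mirroring the incremental querying the interface supports.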

  20. Stressed-out Enceladus 3-D

    NASA Image and Video Library

    2005-03-24

    This high-resolution stereo anaglyph of Saturn's moon Enceladus, captured by NASA's Cassini spacecraft, shows a region of craters softened by time and torn apart by tectonic stresses. 3D glasses are necessary to view this image.
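    Red-cyan anaglyphs like this one encode a stereo pair in complementary color channels: the left-eye image in red, the right-eye image in green and blue. A minimal sketch with a toy stereo pair (the real pipeline also aligns and color-balances the two frames):

    ```python
    import numpy as np

    def make_anaglyph(left_gray, right_gray):
        """Red-cyan anaglyph: left eye in the red channel, right eye in green/blue.

        left_gray, right_gray: 2D uint8 arrays of equal shape.
        """
        h, w = left_gray.shape
        rgb = np.zeros((h, w, 3), dtype=np.uint8)
        rgb[..., 0] = left_gray    # red   <- left view
        rgb[..., 1] = right_gray   # green <- right view
        rgb[..., 2] = right_gray   # blue  <- right view
        return rgb

    # Toy stereo pair: the right view is the left view shifted by 2 px (disparity).
    left = np.tile(np.arange(16, dtype=np.uint8) * 16, (8, 1))
    right = np.roll(left, 2, axis=1)
    ana = make_anaglyph(left, right)
    print(ana.shape)  # -> (8, 16, 3)
    ```

    Viewed through red-cyan glasses, each eye sees only its own channel, and the horizontal disparity is perceived as depth.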

  1. America's National Parks 3d (1)

    Atmospheric Science Data Center

    2016-12-30

    article title:  America's National Parks Viewed in 3D by NASA's MISR (Anaglyph 1)   ...         Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle ...

  2. America's National Parks 3d (4)

    Atmospheric Science Data Center

    2017-04-11

    article title:  America's National Parks Viewed in 3D by NASA's MISR (Anaglyph 4)   ...         Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle ...

  3. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This view-graph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  4. Africa in SRTM 3-D, Anaglyph of Shaded Relief

    NASA Image and Video Library

    2004-06-17

    This stereoscopic shaded relief image from NASA's Shuttle Radar Topography Mission shows the topography of Africa. Also shown are Madagascar, the Arabian Peninsula, and other adjacent regions. 3D glasses are necessary to view this image.

  5. The bias of a 2D view: Comparing 2D and 3D mesophyll surface area estimates using non-invasive imaging

    USDA-ARS?s Scientific Manuscript database

    The surface area of the leaf mesophyll exposed to intercellular airspace per leaf area (Sm) is closely associated with CO2 diffusion and photosynthetic rates. Sm is typically estimated from two-dimensional (2D) leaf sections and corrected for the three-dimensional (3D) geometry of mesophyll cells, l...

  6. Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.

    PubMed

    Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong

    2006-04-01

    This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis is used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring study involving both real and rendered bronchoscope images was conducted.
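    The key exploitable constraint is that the bronchoscope's light source is co-located with the camera. A hedged sketch of the general idea, using a simple Lambertian shading model I = albedo · cos(θ) / r² rather than the full BRDF fit of the paper (geometry below is made up): once shading is predicted from the registered 3-D data, a lighting-independent albedo can be factored out of each frame.

    ```python
    import numpy as np

    def shading(normals, points):
        """Lambertian shading for a point light at the origin (the camera)."""
        r = np.linalg.norm(points, axis=-1)
        l = -points / r[..., None]               # unit vector toward the light
        cos_t = np.clip((normals * l).sum(-1), 0.0, None)
        return cos_t / r**2

    # Synthetic surface samples: random normals, points 1-2 units from the camera.
    rng = np.random.default_rng(0)
    n = rng.normal(size=(100, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
    p = rng.uniform(1.0, 2.0, size=(100, 3))
    albedo_true = 0.7
    image = albedo_true * shading(n, p)          # simulated video-frame intensities

    # Factor the predicted shading out of the frame to recover the albedo (texture).
    s = shading(n, p)
    mask = s > 1e-6                              # ignore surface facing away
    albedo_est = np.median(image[mask] / s[mask])
    print(round(albedo_est, 3))  # -> 0.7
    ```

    With the albedo (texture map) in hand, novel views are rendered by re-evaluating the shading model under new viewing and lighting parameters.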

  7. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus can provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D, high-resolution, high-speed imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld, forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing on a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and gum was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.
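    The complex-conjugate removal via lateral phase modulation can be sketched in a few lines. This is the generic idea, with made-up parameters, not the authors' exact pipeline: a linear phase ramp across the B-scan shifts the true image away from its mirror term in the lateral-frequency domain, where the conjugate can be filtered out Hilbert-style.

    ```python
    import numpy as np

    nx, nz = 256, 128
    x = np.arange(nx)
    z0 = 30                    # true reflector depth (in FFT bins)
    f_mod = 0.25               # lateral phase modulation (cycles per A-scan)

    # Real-valued spectral interferogram: the cosine contains both +z0 and -z0
    # (the mirror image), modulated laterally by the reference-mirror phase ramp.
    k = np.arange(nz)[None, :]
    fringes = np.cos(2 * np.pi * (z0 * k / nz + f_mod * x[:, None]))

    # Hilbert-style filtering along x: keep only positive lateral frequencies,
    # which carry the true image; the conjugate sits at negative frequencies.
    F = np.fft.fft(fringes, axis=0)
    F[nx // 2:] = 0
    analytic = np.fft.ifft(F, axis=0)

    # Depth profile of the complex (full-range) signal: one peak at +z0 only.
    profile = np.abs(np.fft.fft(analytic, axis=1)).mean(axis=0)
    print(int(profile.argmax()))  # -> 30
    ```

    Without the filtering step, the depth FFT of the real fringes would show a second, mirror peak at bin nz − z0, halving the usable imaging range.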

  8. Pathways for Learning from 3D Technology

    PubMed Central

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331
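    The path-analytic approach described above can be illustrated with a toy mediation model: regress a mediator (e.g., immersion) on the predictor (2D vs. 3D presentation), then the outcome (memory) on both, and read off direct and indirect effects. The data and effect sizes below are synthetic and do not reproduce the study's estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    is_3d = rng.integers(0, 2, n).astype(float)          # 0 = 2D, 1 = 3D
    immersion = 2.0 - 0.5 * is_3d + rng.normal(0, 1, n)  # 3D lowers immersion here
    memory = 1.0 + 0.8 * immersion + 0.0 * is_3d + rng.normal(0, 1, n)

    def ols(y, *cols):
        """Ordinary least squares with an intercept; returns coefficients."""
        X = np.column_stack([np.ones(n), *cols])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    a = ols(immersion, is_3d)[1]                 # path: 3D -> immersion
    b, c = ols(memory, immersion, is_3d)[1:]     # paths: immersion -> memory,
                                                 # and direct 3D -> memory
    indirect = a * b                             # mediated effect of 3D on memory
    print(round(a, 2), round(b, 2))
    ```

    In this synthetic setup the direct path c is near zero while the indirect path a·b is negative, the same qualitative pattern the study reports (3D affecting immersion and emotion, but not memory).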

  9. Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment

    PubMed Central

    Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.

    2015-01-01

    Abstract Background/Aims: The perception of compelling depth is often reported in individuals where no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (e.g., 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, where the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth, where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made. PMID:26669421

  10. Stereo Pair with Landsat Overlay, Mount Meru, Tanzania

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Mount Meru is an active volcano located just 70 kilometers (44 miles) west of Mount Kilimanjaro. It reaches 4,566 meters (14,978 feet) in height but has lost much of its bulk due to an eastward volcanic blast sometime in its distant past, perhaps similar to the eruption of Mount Saint Helens in Washington State in 1980. Mount Meru most recently had a minor eruption about a century ago. The several small cones and craters seen in the vicinity probably reflect numerous episodes of volcanic activity. Mount Meru is the topographic centerpiece of Arusha National Park, but Ngurdoto Crater to the east (image top) is also prominent. The fertile slopes of both volcanoes rise above the surrounding savanna and support a forest that hosts diverse wildlife, including nearly 400 species of birds, and also monkeys and leopards, while the floor of Ngurdoto Crater hosts herds of elephants and buffaloes.

    This stereoscopic image was generated by draping a Landsat satellite image over a Shuttle Radar Topography Mission digital elevation model. Two differing perspectives were then calculated, one for each eye. They can be seen in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the image pair and viewing them with a stereoscope. When stereoscopically merged, the result is a vertically exaggerated view of Earth's surface in its full three dimensions.

    Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS).

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 37.1 kilometers (23.0 miles) by 20.3 kilometers (12.6 miles) Location: 3.2 degrees South latitude, 36.7 degrees East longitude Orientation: East at top Image Data: Landsat Bands 3, 2+4, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arc-second (30 meters or 98 feet) Date Acquired: February 2000 (SRTM), February 21, 2000 (Landsat 7)

  11. Recent Developments in the VISRAD 3-D Target Design and Radiation Simulation Code

    NASA Astrophysics Data System (ADS)

    Macfarlane, Joseph; Golovkin, Igor; Sebald, James

    2017-10-01

    The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, Z, and LMJ. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling for laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. VISRAD includes a variety of user-friendly graphics for setting up targets and displaying results, can readily display views from any point in space, and can be used to generate image sequences for animations. We will discuss recent improvements to conveniently assess beam capture on target and beam clearance of diagnostic components, as well as plans for future developments.
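    The element-to-element view factor at the heart of such codes follows the standard differential form dF₁₂ = cos θ₁ cos θ₂ / (π r²) · dA₂. A minimal sketch for two small patches (the geometry and discretization here are invented for illustration, not VISRAD's):

    ```python
    import numpy as np

    def patch_view_factor(c1, n1, c2, n2, a2):
        """Differential view factor from a small patch 1 to a small patch 2.

        c1, c2: patch centers; n1, n2: unit normals; a2: area of patch 2.
        """
        v = c2 - c1
        r2 = v @ v
        r = np.sqrt(r2)
        cos1 = max(0.0, (n1 @ v) / r)    # patch 1 must face patch 2
        cos2 = max(0.0, (-n2 @ v) / r)   # and patch 2 must face patch 1
        return cos1 * cos2 / (np.pi * r2) * a2

    # Two coaxial, parallel, facing patches a distance 2 apart: the formula
    # reduces to A2 / (pi * d**2) in this configuration.
    c1 = np.array([0.0, 0.0, 0.0]); n1 = np.array([0.0, 0.0, 1.0])
    c2 = np.array([0.0, 0.0, 2.0]); n2 = np.array([0.0, 0.0, -1.0])
    f12 = patch_view_factor(c1, n1, c2, n2, a2=0.01)
    print(np.isclose(f12, 0.01 / (np.pi * 4.0)))  # -> True
    ```

    A full view factor code sums such terms over all element pairs (with occlusion testing) and then solves the resulting power balance system for the radiation flux on each element.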

  12. Full exploration of the Diels-Alder cycloaddition on metallofullerenes M3N@C80 (M = Sc, Lu, Gd): the D5h versus Ih isomer and the influence of the metal cluster.

    PubMed

    Osuna, Sílvia; Valencia, Ramón; Rodríguez-Fortea, Antonio; Swart, Marcel; Solà, Miquel; Poblet, Josep M

    2012-07-16

    In this work a detailed investigation of the exohedral reactivity of the most important and abundant endohedral metallofullerene (EMF) is provided, that is, Sc3N@Ih-C80 and its D5h counterpart Sc3N@D5h-C80, and the (bio)chemically relevant lutetium- and gadolinium-based M3N@Ih/D5h-C80 EMFs (M = Sc, Lu, Gd). In particular, we analyze the thermodynamics and kinetics of the Diels-Alder cycloaddition of s-cis-1,3-butadiene on all the different bonds of the Ih-C80 and D5h-C80 cages and their endohedral derivatives. First, we discuss the thermodynamic and kinetic aspects of the cycloaddition reaction on the hollow fullerenes and the two isomers of Sc3N@C80. Afterwards, the effect of the nature of the metal nitride is analyzed in detail. In general, our BP86/TZP//BP86/DZP calculations indicate that [5,6] bonds are more reactive than [6,6] bonds for the two isomers. The [5,6] bond D5h-b, which is the most similar to the unique [5,6] bond type in the icosahedral cage, Ih-a, is the most reactive bond in M3N@D5h-C80 regardless of M. Sc3N@C80 and Lu3N@C80 give similar results; the regioselectivity is, however, significantly reduced for the larger and more electropositive M = Gd, as previously found in similar metallofullerenes. Calculations also show that the D5h isomer is kinetically more reactive than the Ih one in all cases, which is in good agreement with experiments. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Application of full-scale three-dimensional models in patients with rheumatoid cervical spine.

    PubMed

    Mizutani, Jun; Matsubara, Takeshi; Fukuoka, Muneyoshi; Tanaka, Nobuhiko; Iguchi, Hirotaka; Furuya, Aiharu; Okamoto, Hideki; Wada, Ikuo; Otsuka, Takanobu

    2008-05-01

    Full-scale three-dimensional (3D) models offer a useful tool in preoperative planning, allowing full-scale stereoscopic recognition from any direction and distance with tactile feedback. Although skills and implants have progressed with various innovations, rheumatoid cervical spine surgery remains challenging. No previous studies have documented the usefulness of full-scale 3D models in this complicated situation. The present study assessed the utility of full-scale 3D models in rheumatoid cervical spine surgery. Polyurethane or plaster 3D models of 15 full-sized occipitocervical or upper cervical spines were fabricated using rapid prototyping (stereolithography) techniques from 1-mm slices of individual CT data. A comfortable alignment for patients was reproduced from CT data obtained with the patient in a comfortable occipitocervical position. The usefulness of these models was analyzed. Using the models as templates, an appropriately shaped plate-rod construct could be created in advance. No troublesome halo vests were needed for preoperative adjustment of the occipitocervical angle. No patients complained of dysphagia following surgery. Screw entry points and trajectories were simultaneously determined with full-scale dimensions and perspective, proving particularly valuable in cases involving a high-riding vertebral artery. Full-scale stereoscopic recognition has never been achieved with any existing imaging modality. Full-scale 3D models thus appear useful and applicable to all complicated spinal surgeries. The combination of computer-assisted navigation systems and full-scale 3D models appears likely to provide much better surgical results.

  14. 3DProIN: Protein-Protein Interaction Networks and Structure Visualization.

    PubMed

    Li, Hui; Liu, Chunmei

    2014-06-14

    3DProIN is a computational tool to visualize protein-protein interaction networks in both two-dimensional (2D) and three-dimensional (3D) views. It models protein-protein interactions as a graph and explores the biologically relevant features of the tertiary structure of each protein in the network. Properties such as the color, shape, and name of each node (protein) in the network can be edited in either the 2D or the 3D view. 3DProIN is implemented using Java (with Java 3D) and C. An internet crawl technique is also used to dynamically parse protein interactions from the Protein Data Bank (PDB). It is a Java applet component embedded in the web page, and it can be used on different platforms, including Linux, Mac, and Windows, with web browsers such as Firefox, Internet Explorer, Chrome, and Safari. It was also converted into a Mac app and submitted to the App Store as a free app; Mac users can also download the app from our website. 3DProIN is available for academic research at http://bicompute.appspot.com.

  15. Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.

    PubMed

    Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat-screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools, or of clinically relevant anatomical variations, when teaching anatomy. A new approach to teaching anatomy uses computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high-resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. The blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° about all axes, and can be magnified as needed. In addition, flexible views of internal structures are possible. Images are displayed in stereoscopic mode, and students view them in a small theater-like classroom while wearing polarized 3D glasses. The reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and to appreciate the spatial relationships among the blood vessels, the skull, and the skin. © 2015 American Association of Anatomists.

  16. Miniaturized fiber-coupled confocal fluorescence microscope with an electrowetting variable focus lens using no moving parts

    PubMed Central

    Ozbay, Baris N.; Losacco, Justin T.; Cormack, Robert; Weir, Richard; Bright, Victor M.; Gopinath, Juliet T.; Restrepo, Diego; Gibson, Emily A.

    2015-01-01

    We report a miniature, lightweight fiber-coupled confocal fluorescence microscope that incorporates an electrowetting variable focus lens to provide axial scanning for full three-dimensional (3D) imaging. Lateral scanning is accomplished by coupling our device to a laser-scanning confocal microscope through a coherent imaging fiber-bundle. The optical components of the device are combined in a custom 3D-printed adapter with an assembled weight of <2 g that can be mounted onto the head of a mouse. Confocal sectioning provides an axial resolution of ~12 µm and an axial scan range of ~80 µm. The lateral field-of-view is 300 µm, and the lateral resolution is 1.8 µm. We determined these parameters by imaging fixed sections of mouse neuronal tissue labeled with green fluorescent protein (GFP) and fluorescent bead samples in agarose gel. To demonstrate viability for imaging intact tissue, we resolved multiple optical sections of ex vivo mouse olfactory nerve fibers expressing yellow fluorescent protein (YFP). PMID:26030555

  17. Target-locking acquisition with real-time confocal (TARC) microscopy.

    PubMed

    Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A

    2007-07-09

    We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size, and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and the live cells move distances several times the length of the imaging volume.
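    The acquire-locate-recenter loop can be sketched with a toy simulation: locate the feature as an intensity-weighted centroid of the 3D stack, then offset the "stage" so the feature returns to the center before the next stack. The Gaussian-blob "acquisition" and drift model below are invented stand-ins for the real microscope:

    ```python
    import numpy as np

    def centroid(stack):
        """Intensity-weighted center of mass of a 3D image stack."""
        idx = np.indices(stack.shape, dtype=float)
        w = stack.sum()
        return np.array([(i * stack).sum() / w for i in idx])

    def acquire(shape, feature):
        """Fake acquisition: a bright Gaussian blob at `feature` (stage frame)."""
        grids = np.indices(shape, dtype=float)
        d2 = sum((g - f) ** 2 for g, f in zip(grids, feature))
        return np.exp(-d2 / 8.0)

    shape = (16, 16, 16)
    center = np.array(shape, dtype=float) / 2
    feature = np.array([4.0, 12.0, 6.0])        # drifting object, stage frame
    stage = np.zeros(3)
    for _ in range(3):                          # acquire -> locate -> re-center
        stack = acquire(shape, feature - stage)
        stage += centroid(stack) - center
        feature += np.array([0.5, -0.3, 0.2])   # object keeps moving
    print(np.allclose(feature - stage, center, atol=1.0))  # object stays centered
    ```

    The real system replaces the centroid with a full structural analysis of the feature, but the control loop has the same shape.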

  18. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience in resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the following two questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
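    The shortest-path formulation can be sketched as dynamic programming over a DAG: nodes are candidate reference positions, and an edge i → j costs the rate of coding reference j plus the distortion of synthesizing the views strictly between i and j. The rate and distortion numbers below are invented stand-ins for the paper's similarity metric:

    ```python
    def place_references(n_views, ref_rate, distortion):
        """distortion(k, i, j): cost of predicting view k from references i and j.

        Returns the optimal reference positions (number AND placement) plus cost.
        """
        INF = float("inf")
        cost = [INF] * n_views
        prev = [None] * n_views
        cost[0] = ref_rate                        # view 0 is always a reference
        for j in range(1, n_views):
            for i in range(j):                    # edge i -> j in the DAG
                d = sum(distortion(k, i, j) for k in range(i + 1, j))
                if cost[i] + ref_rate + d < cost[j]:
                    cost[j] = cost[i] + ref_rate + d
                    prev[j] = i
        refs, j = [], n_views - 1                 # backtrack the optimal placement
        while j is not None:
            refs.append(j)
            j = prev[j]
        return sorted(refs), cost[n_views - 1]

    # Toy model: distortion grows with distance to the nearest flanking reference.
    def dist(k, i, j):
        return min(k - i, j - k) ** 2

    refs, total = place_references(9, ref_rate=3.0, distortion=dist)
    print(refs)  # -> [0, 2, 5, 8]
    ```

    Raising `ref_rate` makes references more expensive and spreads them out; lowering it adds more of them, which is exactly the number-versus-placement trade-off the optimization resolves.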

  19. User Control and Task Authenticity for Spatial Learning in 3D Environments

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Harper, Barry

    2004-01-01

    This paper describes two empirical studies which investigated the importance for spatial learning of view control and object manipulation within 3D environments. A 3D virtual chemistry laboratory was used as the research instrument. Subjects, who were university undergraduate students (34 in the first study and 80 in the second study), undertook…

  20. Automatic 3D reconstruction of electrophysiology catheters from two-view monoplane C-arm image sequences.

    PubMed

    Baur, Christoph; Milletari, Fausto; Belagiannis, Vasileios; Navab, Nassir; Fallavollita, Pascal

    2016-07-01

    Catheter guidance is a vital task for the success of electrophysiology interventions. It is usually provided through fluoroscopic images that are taken intra-operatively. The cardiologists, who are typically equipped with C-arm systems, scan the patient from multiple views by rotating the fluoroscope around one of its axes. The resulting sequences allow the cardiologists to build a mental model of the 3D positions of the catheters and points of interest from the multiple views. We describe and compare different 3D catheter reconstruction strategies and ultimately propose a novel and robust method for the automatic reconstruction of 3D catheters in non-synchronized fluoroscopic sequences. This approach does not rely purely on triangulation but incorporates prior knowledge about the catheters. In conjunction with an automatic detection method, we demonstrate the performance of our method against ground truth annotations. In our experiments, which include 20 biplane datasets, we achieve an average reprojection error of 0.43 mm and an average reconstruction error of 0.67 mm compared to gold standard annotation. In clinical practice, catheters undergo complex motion due to the combined effect of heartbeat and respiratory motion. As a result, any 3D reconstruction algorithm based purely on triangulation is imprecise. We have proposed a new method that is fully automatic and highly accurate for reconstructing catheters in three dimensions.
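    The reprojection error quoted above is a standard metric: project the reconstructed 3D points back into each view and average the 2D distance to the annotated points. A minimal sketch with made-up projection matrices and points (not the paper's calibration):

    ```python
    import numpy as np

    def reprojection_error(P_list, x_list, X):
        """Mean 2D distance between the projections of X and the annotations x.

        P_list: 3x4 projection matrices; x_list: annotated 2D points; X: 3D point.
        """
        errs = []
        for P, x in zip(P_list, x_list):
            h = P @ np.append(X, 1.0)            # homogeneous projection
            errs.append(np.linalg.norm(h[:2] / h[2] - x))
        return float(np.mean(errs))

    # Two toy views; the annotations are the exact projections, so the error is 0.
    P1 = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
    P2 = np.hstack([np.eye(3), [[1.0], [0.0], [5.0]]])
    X = np.array([0.2, 0.1, 1.0])
    x1 = np.array([0.2, 0.1]) / 6.0
    x2 = np.array([1.2, 0.1]) / 6.0
    print(round(reprojection_error([P1, P2], [x1, x2], X), 6))  # -> 0.0
    ```

    In the paper's setting the annotated points come from manual ground truth, so the error is nonzero and summarizes how well the reconstruction explains both views.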

  1. Perspective View of Shaded Relief with Color as Height, Miyake-Jima, Japan

    NASA Image and Video Library

    2000-08-10

    This 3D perspective view shows the Japanese island of Miyake-Jima as seen from the northeast. This island - about 180 kilometers south of Tokyo - is part of the Izu chain of volcanic islands that runs south from the main Japanese island of Honshu.

  2. NASA Tech Briefs, April 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Wearable Environmental and Physiological Sensing Unit; Broadband Phase Retrieval for Image-Based Wavefront Sensing; Filter Function for Wavefront Sensing Over a Field of View; Iterative-Transform Phase Retrieval Using Adaptive Diversity; Wavefront Sensing With Switched Lenses for Defocus Diversity; Smooth Phase Interpolated Keying; Maintaining Stability During a Conducted-Ripple EMC Test; Photodiode Preamplifier for Laser Ranging With Weak Signals; Advanced High-Definition Video Cameras; Circuit for Full Charging of Series Lithium-Ion Cells; Analog Nonvolatile Computer Memory Circuits; JavaGenes Molecular Evolution; World Wind 3D Earth Viewing; Lithium Dinitramide as an Additive in Lithium Power Cells; Accounting for Uncertainties in Strengths of SiC MEMS Parts; Ion-Conducting Organic/Inorganic Polymers; MoO3 Cathodes for High-Temperature Lithium Thin-Film Cells; Counterrotating-Shoulder Mechanism for Friction Stir Welding; Strain Gauges Indicate Differential-CTE-Induced Failures; Antibodies Against Three Forms of Urokinase; Understanding and Counteracting Fatigue in Flight Crews; Active Correction of Aberrations of Low-Quality Telescope Optics; Dual-Beam Atom Laser Driven by Spinor Dynamics; Rugged, Tunable Extended-Cavity Diode Laser; Balloon for Long-Duration, High-Altitude Flight at Venus; and Wide-Temperature-Range Integrated Operational Amplifier.

  3. Reassessing the 3/4 view effect in face recognition.

    PubMed

    Liu, Chang Hong; Chaudhuri, Avi

    2002-02-01

    It is generally accepted that unfamiliar faces are better recognized if presented in 3/4 view. A common interpretation of this result is that the 3/4 view represents a canonical view for faces. This article presents a critical review of this claim. Two kinds of advantage, in which a 3/4 view either generalizes better to a different view or produces better recognition in the same view, are discussed. Our analysis of the literature shows that the first effect almost invariably depended on different amounts of angular rotation that was present between learning and test views. The advantage usually vanished when angular rotation was equalized between conditions. Reports in favor of the second effect are scant and can be countered by studies reporting negative findings. To clarify this ambiguity, we conducted a recognition experiment. Subjects were trained and tested on the same three views (full-face, 3/4 and profile). The results showed no difference between the three view conditions. Our analysis of the literature, along with the new results, shows that the evidence for a 3/4 view advantage in both categories is weak at best. We suggest that a better predictor of performance for recognition in different views is the angular difference between learning and test views. For recognition in the same view, there may be a wide range of views whose effectiveness is comparable to the 3/4 view.

  4. Full-field x-ray nano-imaging at SSRF

    NASA Astrophysics Data System (ADS)

    Deng, Biao; Ren, Yuqi; Wang, Yudan; Du, Guohao; Xie, Honglan; Xiao, Tiqiao

    2013-09-01

    Full-field X-ray nano-imaging focusing on materials science is under development at SSRF. A dedicated full-field X-ray nano-imaging beamline based on a bending magnet will be built in the SSRF Phase-II project. The beamline aims at 3D imaging of nano-scale inner structures. The photon energy range is 5-14 keV. The design goals, a field of view (FOV) of 20 μm and a spatial resolution of 20 nm at 8 keV, are proposed using a Fresnel zone plate (FZP) with an outermost zone width of 25 nm. Furthermore, an X-ray nano-imaging microscope emphasizing a larger FOV is under development at the SSRF BL13W beamline. This microscope is based on a beam shaper and a zone plate, uses both absorption contrast and Zernike phase contrast, and has its optimized energy set to 10 keV. The detailed design and the progress of the project will be introduced.
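
    The numbers above follow standard thin Fresnel zone plate relations: the wavelength is hc/E, and the Rayleigh resolution limit is roughly 1.22 times the outermost zone width. A minimal sketch of those textbook formulas (illustrative only; by this simple estimate 25 nm zones give about 30 nm, so the 20 nm design goal presumably rests on optical design details not stated in the abstract):

```python
# Back-of-envelope Fresnel zone plate (FZP) numbers using the standard
# thin-zone-plate formulas (an illustration; the actual SSRF optical
# design is not given in the abstract).

HC_KEV_NM = 1.23984  # h*c in keV*nm

def photon_wavelength_nm(energy_kev):
    """X-ray wavelength in nm for a photon energy in keV."""
    return HC_KEV_NM / energy_kev

def fzp_rayleigh_resolution_nm(outermost_zone_width_nm):
    """Rayleigh resolution limit of an FZP: ~1.22 * outermost zone width."""
    return 1.22 * outermost_zone_width_nm

wavelength = photon_wavelength_nm(8.0)          # ~0.155 nm at 8 keV
resolution = fzp_rayleigh_resolution_nm(25.0)   # ~30.5 nm for 25 nm zones
```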

  5. Serial Changes in 3-Dimensional Supraspinatus Muscle Volume After Rotator Cuff Repair.

    PubMed

    Chung, Seok Won; Oh, Kyung-Soo; Moon, Sung Gyu; Kim, Na Ra; Lee, Ji Whan; Shim, Eungjune; Park, Sehyung; Kim, Youngjun

    2017-08-01

    There is considerable debate on the recovery of rotator cuff muscle atrophy after rotator cuff repair. To evaluate the serial changes in supraspinatus muscle volume after rotator cuff repair by using semiautomatic segmentation software and to determine the relationship with functional outcomes. Case series; Level of evidence, 4. Seventy-four patients (mean age, 62.8 ± 8.8 years) who underwent arthroscopic rotator cuff repair and obtained 3 consecutive (preoperatively, immediately postoperatively, and later postoperatively [≥1 year postoperatively]) magnetic resonance imaging (MRI) scans having complete Y-views were included. We generated a 3-dimensional (3D) reconstructed model of the supraspinatus muscle by using in-house semiautomatic segmentation software (ITK-SNAP) and calculated both the 2-dimensional (2D) cross-sectional area and 3D volume of the muscle in 3 different views (Y-view, 1 cm medial to the Y-view [Y+1 view], and 2 cm medial to the Y-view [Y+2 view]) at the 3 time points. The area and volume changes at each time point were evaluated according to repair integrity. Later postoperative volumes were compared with immediately postoperative volumes, and their relationship with various clinical factors and the effect of higher volume increases on range of motion, muscle power, and visual analog scale pain and American Shoulder and Elbow Surgeons scores were evaluated. The interrater reliabilities were excellent for all measurements. Areas and volumes increased immediately postoperatively as compared with preoperatively; however, only volumes on the Y+1 view and Y+2 view significantly increased later postoperatively as compared with immediately postoperatively (P < .05). There were 9 patients with healing failure, and area and volume changes were significantly less later postoperatively compared with immediately postoperatively at all measurement points in these patients (P < .05). After omitting the patients with healing failure, volume increases later postoperatively became more prominent (P < .05) in the order of the Y+2 view, Y+1 view, and Y-view. Volume increases were higher in patients who healed successfully with larger tears (P = .040). Higher volume increases were associated only with an increase in abduction power (P = .029) and not with other outcomes. The supraspinatus muscle volume increased immediately postoperatively and continuously for at least 1 year after surgery. The increase was evident in patients who had larger tears and healed successfully and when measured toward the more medial portion of the supraspinatus muscle. The volume increases were associated with an increase in shoulder abduction power.
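
    The 2D areas and 3D volumes above come from a semiautomatic segmentation; the arithmetic behind such measurements is simply label counts scaled by pixel and voxel size. A generic sketch with hypothetical arrays (not the authors' ITK-SNAP pipeline):

```python
import numpy as np

# Generic area/volume measurement from a binary segmentation mask:
# labeled-element count times element size. Hypothetical data, not the
# authors' ITK-SNAP workflow.

def slice_area_mm2(mask_2d, pixel_spacing_mm):
    """Cross-sectional area: labeled pixels * pixel area."""
    dy, dx = pixel_spacing_mm
    return int(mask_2d.sum()) * dy * dx

def mask_volume_mm3(mask_3d, voxel_spacing_mm):
    """Volume: labeled voxels * voxel volume."""
    dz, dy, dx = voxel_spacing_mm
    return int(mask_3d.sum()) * dz * dy * dx

mask = np.zeros((4, 10, 10), dtype=bool)
mask[:, 2:6, 2:6] = True                      # 4 slices of a 4x4-pixel region
area = slice_area_mm2(mask[0], (0.5, 0.5))    # 16 px * 0.25 mm^2 = 4.0 mm^2
vol = mask_volume_mm3(mask, (3.0, 0.5, 0.5))  # 64 vox * 0.75 mm^3 = 48.0 mm^3
```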

  6. Cobra Hoods Coming At You

    NASA Image and Video Library

    2004-06-17

    This 3-D image, taken by the left and right eyes of the panoramic camera on NASA Mars Exploration Rover Spirit, shows the odd rock formation dubbed "Cobra Hoods" at center. 3D glasses are necessary to view this image.

  7. 3D Visualization for Planetary Missions

    NASA Astrophysics Data System (ADS)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.

  8. Opportunity Stretches Out 3-D

    NASA Image and Video Library

    2004-02-02

    This is a three-dimensional stereo anaglyph of an image taken by the front hazard-identification camera onboard NASA Mars Exploration Rover Opportunity, showing the rover arm in its extended position. 3D glasses are necessary to view this image.

  9. Anaglyph with Landsat Overlay, Kamchatka Peninsula, Russia

    NASA Image and Video Library

    2000-02-16

    This 3-D anaglyph shows an area on the western side of the volcanically active Kamchatka Peninsula, Russia as seen by the instrument onboard NASA Shuttle Radar Topography Mission. 3D glasses are necessary to view this image.

  10. America's National Parks 3d (2)

    Atmospheric Science Data Center

    2016-12-30

    America's National Parks Viewed in 3D by NASA's MISR (Anaglyph 2). Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle ...

  11. CT colonography: influence of 3D viewing and polyp candidate features on interpretation with computer-aided detection.

    PubMed

    Shi, Rong; Schraedley-Desmond, Pamela; Napel, Sandy; Olcott, Eric W; Jeffrey, R Brooke; Yee, Judy; Zalis, Michael E; Margolis, Daniel; Paik, David S; Sherbondy, Anthony J; Sundaram, Padmavathi; Beaulieu, Christopher F

    2006-06-01

    To retrospectively determine if three-dimensional (3D) viewing improves radiologists' accuracy in classifying true-positive (TP) and false-positive (FP) polyp candidates identified with computer-aided detection (CAD) and to determine candidate polyp features that are associated with classification accuracy, with known polyps serving as the reference standard. Institutional review board approval and informed consent were obtained; this study was HIPAA compliant. Forty-seven computed tomographic (CT) colonography data sets were obtained in 26 men and 10 women (age range, 42-76 years). Four radiologists classified 705 polyp candidates (53 TP candidates, 652 FP candidates) identified with CAD; initially, only two-dimensional images were used, but these were later supplemented with 3D rendering. Another radiologist unblinded to colonoscopy findings characterized the features of each candidate, assessed colon distention and preparation, and defined the true nature of FP candidates. Receiver operating characteristic curves were used to compare readers' performance, and repeated-measures analysis of variance was used to test features that affect interpretation. Use of 3D viewing improved classification accuracy for three readers and increased the area under the receiver operating characteristic curve to 0.96-0.97 (P<.001). For TP candidates, maximum polyp width (P=.038), polyp height (P=.019), and preparation (P=.004) significantly affected accuracy. For FP candidates, colonic segment (P=.007), attenuation (P<.001), surface smoothness (P<.001), distention (P=.034), preparation (P<.001), and true nature of candidate lesions (P<.001) significantly affected accuracy. Use of 3D viewing increases reader accuracy in the classification of polyp candidates identified with CAD. Polyp size and examination quality are significantly associated with accuracy. Copyright (c) RSNA, 2006.

  12. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and their steadily growing quality and quantity are further increasing that demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is assessed from the evaluation point of view.

  13. Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces

    NASA Astrophysics Data System (ADS)

    Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf

    2016-06-01

    The reconstruction of the 3D geometry of a scene from image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, an increase of up to 300% in the completeness of the 3D reconstruction is achieved.
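
    The noise-suppression premise, that averaging N statistically independent shots reduces uncorrelated noise by about sqrt(N), can be checked on synthetic data. A minimal sketch with a synthetic flat (weakly-textured) patch, not the paper's acquisition pipeline:

```python
import numpy as np

# Averaging N independent shots of the same viewpoint suppresses
# uncorrelated noise by ~sqrt(N). Synthetic illustration only.

rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)          # flat, weakly-textured patch
n_shots = 16
shots = scene + rng.normal(0.0, 8.0, size=(n_shots, 64, 64))

single_noise = shots[0].std()             # ~8 (one shot)
averaged = shots.mean(axis=0)             # multi-shot average
avg_noise = averaged.std()                # ~8 / sqrt(16) = ~2
```

A contrast-stretching step would then re-expand the cleaned low-contrast texture into the 8-bit range, as the abstract describes.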

  14. 3-D Perspective View, Kamchatka Peninsula, Russia

    NASA Image and Video Library

    2000-03-23

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during NASA's Shuttle Radar Topography Mission (SRTM).

  15. 5. SOUTHEAST FLAME DEFLECTOR, VIEW TOWARDS NORTHWEST. Glenn L. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. SOUTHEAST FLAME DEFLECTOR, VIEW TOWARDS NORTHWEST. - Glenn L. Martin Company, Titan Missile Test Facilities, Captive Test Stand D-3, Waterton Canyon Road & Colorado Highway 121, Lakewood, Jefferson County, CO

  16. Hypersonic Flow over a Cylinder with a Nanosecond Pulse Electrical Discharge

    DTIC Science & Technology

    2014-03-01

    which found the uncertainty in freestream conditions accounted for a 3% variation in bow shock location, but no other factors, including rarefaction ... (Figure: a) side-view and b) top-down view density-gradient (dρ/dx) schematics of the flow and shock over the cylinder, with a ring of polyimide tape approx. 0.15 mm thick.)

  17. Development of MPEG standards for 3D and free viewpoint video

    NASA Astrophysics Data System (ADS)

    Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony

    2005-11-01

    An overview of 3D and free viewpoint video is given in this paper, with special focus on related standardization activities in MPEG. Free viewpoint video allows the user to freely navigate within real-world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free viewpoint video systems, highlighting standards-conformant realization using MPEG-4. Then the principles of 3D video are introduced, providing the user with a 3D depth impression of the observed scene. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free viewpoint video, is available or will be available in the near future, and that there is clear demand from both industry and users for such applications. 3DTV at home and free viewpoint video on DVD will be available soon, and will create huge new markets.

  18. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model virtually viewed from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.

  19. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper we propose a motion-compensated total variation regularization approach that fully exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize it using a variable splitting algorithm. Simulated and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve 4D-CBCT image quality.
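
    The regularizer described above, 3D spatial total variation plus 1D temporal total variation on the motion-compensated sequence, can be written with simple finite differences. A generic sketch of the regularizer's form (anisotropic discrete TV; not the authors' variable-splitting solver):

```python
import numpy as np

# Anisotropic total-variation (TV) terms for a 4D volume f[t, z, y, x]:
# 3D spatial TV summed over phases plus 1D TV along the phase axis.
# A generic sketch of the regularizer's form, not the paper's solver.

def spatial_tv_3d(f):
    """Sum of |grad_z| + |grad_y| + |grad_x| over all phases."""
    return (np.abs(np.diff(f, axis=1)).sum()
            + np.abs(np.diff(f, axis=2)).sum()
            + np.abs(np.diff(f, axis=3)).sum())

def temporal_tv_1d(f):
    """TV along the phase (time) axis."""
    return np.abs(np.diff(f, axis=0)).sum()

def tv_regularizer(f, lam_s=1.0, lam_t=1.0):
    return lam_s * spatial_tv_3d(f) + lam_t * temporal_tv_1d(f)

f = np.zeros((2, 3, 3, 3))
f[1] = 1.0   # spatially constant, one unit jump between the two phases
# spatial TV = 0; temporal TV = 27 (one unit jump at each of 27 voxels)
```

Good motion compensation makes the sequence nearly static, so the temporal TV term becomes small and mainly penalizes residual inter-phase noise.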

  20. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, an intermediate view reconstruction (IVR) method using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between a stereo image pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of the reconstructed image to about 7.02 s.
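
    Intermediate view reconstruction of this kind warps a reference image by a fraction of the estimated disparity: a pixel with disparity d moves by alpha*d for a virtual view at fraction alpha between the stereo pair. A 1D toy sketch of that warping step (illustrative only; the paper's ADSA disparity estimator is not reproduced):

```python
import numpy as np

# Intermediate-view synthesis by disparity-scaled warping: shift each
# pixel of the left view by alpha * disparity for a virtual view at
# fraction alpha between left (0) and right (1). Toy 1D sketch with
# nearest-pixel splatting and no hole filling.

def synthesize_row(left_row, disparity, alpha):
    """Forward-warp one scanline of the left view by alpha * disparity."""
    out = np.zeros_like(left_row)
    for x in range(len(left_row)):
        xt = x + int(round(alpha * disparity[x]))
        if 0 <= xt < len(out):
            out[xt] = left_row[x]
    return out

row = np.array([0, 0, 9, 0, 0, 0])
disp = np.full(6, 2)                   # uniform disparity of 2 px
mid = synthesize_row(row, disp, 0.5)   # feature at x=2 moves to x=3
```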

  1. Intraoperative assessment of reduction and implant placement in acetabular fractures-limitations of 3D-imaging compared to computed tomography.

    PubMed

    Keil, Holger; Beisemann, Nils; Schnetzke, Marc; Vetter, Sven Yves; Swartman, Benedict; Grützner, Paul Alfred; Franke, Jochen

    2018-04-10

    In acetabular fractures, the assessment of reduction and implant placement has limitations in conventional 2D intraoperative imaging. 3D imaging offers the opportunity to acquire CT-like images and thus to improve the results. However, clinical experience shows that even 3D imaging has limitations, especially regarding artifacts when implants are placed. The purpose of this study was to assess the difference between intraoperative 3D imaging and postoperative CT regarding reduction and implant placement. Twenty consecutive cases of acetabular fractures were selected with a complete set of intraoperative 3D imaging and postoperative CT data. The largest detectable step and the largest detectable gap were measured in all three standard planes. These values were compared between the 3D data sets and CT data sets. Additionally, possible correlations between the possible confounders age and BMI and the difference between 3D and CT values were tested. The mean difference of largest visible step between the 3D imaging and CT scan was 2.0 ± 1.8 mm (0.0-5.8, p = 0.02) in the axial, 1.3 ± 1.4 mm (0.0-3.7, p = 0.15) in the sagittal and 1.9 ± 2.4 mm (0.0-7.4, p = 0.22) in the coronal views. The mean difference of largest visible gap between the 3D imaging and CT scan was 3.1 ± 3.6 mm (0.0-14.1, p = 0.03) in the axial, 4.6 ± 2.7 mm (1.2-8.7, p = 0.001) in the sagittal and 3.5 ± 4.0 mm (0.0-15.4, p = 0.06) in the coronal views. A positive correlation between the age and the difference in gap measurements in the sagittal view was shown (rho = 0.556, p = 0.011). Intraoperative 3D imaging is a valuable adjunct in assessing reduction and implant placement in acetabular fractures but has limitations due to artifacts caused by implant material. This can lead to missed malreduction and impairment of clinical outcome, so postoperative CT should be considered in these cases.

  2. 1. FLAME DEFLECTOR FROM FERROCEMENT APRON, VIEW TOWARDS NORTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. FLAME DEFLECTOR FROM FERROCEMENT APRON, VIEW TOWARDS NORTHEAST. - Glenn L. Martin Company, Titan Missile Test Facilities, Captive Test Stand D-3, Waterton Canyon Road & Colorado Highway 121, Lakewood, Jefferson County, CO

  3. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared with the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter, and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
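
    A multiplexing model with crosstalk can be as simple as mixing each intended view with a fraction of its neighbors, which acts as a smoothing across views. The symmetric 3-tap form below is an assumption for illustration; the paper instead fits measured crosstalk parameters:

```python
import numpy as np

# Simple inter-view crosstalk model: each perceived view is the intended
# view mixed with a fraction eps of its two neighbours (symmetric 3-tap
# mixing with circular wrap; an illustrative assumption, not the paper's
# measured model).

def perceive(views, eps):
    """views: array of shape (n_views, H, W); neighbour mixing across views."""
    left = np.roll(views, 1, axis=0)    # view k-1
    right = np.roll(views, -1, axis=0)  # view k+1
    return (1.0 - eps) * views + 0.5 * eps * (left + right)

views = np.zeros((4, 2, 2))
views[1] = 1.0                  # only view 1 carries signal
seen = perceive(views, eps=0.2)
# view 1 keeps 80% of its signal; views 0 and 2 each receive 10% leakage
```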

  4. Nearwork-induced transient myopia in preadolescent Hong Kong Chinese.

    PubMed

    Wolffsohn, James Stuart; Gilmartin, Bernard; Li, Roger Wing-hong; Edwards, Marion Hastings; Chat, Sandy Wing-shan; Lew, John Kwok-fai; Yu, Bibianna Sin-ying

    2003-05-01

    To compare the magnitude and time course of nearwork-induced transient myopia (NITM) in preadolescent Hong Kong Chinese myopes and emmetropes. Forty-five Hong Kong Chinese children, 35 myopes and 10 emmetropes aged 6 to 12 years (median, 7.5), monocularly viewed a letter target through a Badal lens for 5 minutes at either 5.00- or 2.50-D accommodative demand, followed by 3 minutes of viewing the equivalent target at optical infinity. Accommodative responses were measured continuously with a modified, infrared, objective open-field autorefractor. Accommodative responses were also measured for a countercondition: viewing of a letter target for 5 minutes at optical infinity, followed by 3 minutes of viewing the target at a 5.00-D accommodative demand. The results were compared with tonic accommodation and both subject and family history of refractive error. Retinal-blur-driven NITM was significantly greater in Hong Kong Chinese children with myopic vision than in the emmetropes after both near tasks, but showed no significant dose effect. The NITM was still evident 3 minutes after viewing the 5.00-D near task for 5 minutes. The magnitude of NITM correlated with the accommodative drift after viewing a distant target for more than 4 minutes, but was unrelated to the subjects' or family history of refractive error. In a preadolescent ethnic population with known predisposition to myopia, there is a significant posttask blur-driven accommodative NITM, which is sustained for longer than has previously been found in white adults.

  5. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    PubMed

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
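
    The hash table-based voting scheme can be illustrated generically: quantize each model descriptor into a hash key offline, then let every scene descriptor vote for the models that share its key. A toy sketch with hypothetical 2D descriptors (not the paper's multidimensional tensor representation):

```python
from collections import defaultdict

# Generic hash-table voting for model-scene matching. Descriptors are
# quantized to tuple keys; scene descriptors cast votes for models whose
# descriptors fall in the same bin. Toy data, illustrative only.

def build_table(model_descriptors, step=1.0):
    table = defaultdict(set)
    for model_id, descs in model_descriptors.items():
        for d in descs:
            key = tuple(round(v / step) for v in d)
            table[key].add(model_id)
    return table

def vote(table, scene_descriptors, step=1.0):
    votes = defaultdict(int)
    for d in scene_descriptors:
        key = tuple(round(v / step) for v in d)
        for model_id in table.get(key, ()):
            votes[model_id] += 1
    return votes

models = {"mug": [(1.0, 2.0), (3.0, 1.0)], "phone": [(5.0, 5.0)]}
table = build_table(models)
votes = vote(table, [(1.1, 2.0), (3.0, 0.9), (9.0, 9.0)])
best = max(votes, key=votes.get)   # the model with the most votes
```

In the paper, the top-voted candidates are then verified by aligning the model to the scene before an object is declared recognized.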

  6. Video-Game-Like Engine for Depicting Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Upchurch, Paul R.

    2009-01-01

    GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.

  7. A Closer View of Prominent Rocks - 3-D

    NASA Image and Video Library

    1997-07-13

    Many prominent rocks near the Sagan Memorial Station are featured in this image from NASA Mars Pathfinder. Shark, Half-Dome, Pumpkin, Flat Top, and Frog are at center. 3D glasses are necessary to identify surface detail.

  8. Record Drive Day, Opportunity Sol 383 3-D

    NASA Image and Video Library

    2005-03-05

    On Feb. 19, 2005, NASA Mars Exploration Rover Opportunity set a one-day distance record for martian driving; Opportunity rolled 177.5 meters (582 feet) across the plain of Meridiani. 3D glasses are necessary to view this image.

  9. Accuracy of three-dimensional multislice view Doppler in diagnosis of morbid adherent placenta

    PubMed Central

    Abdel Moniem, Alaa M.; Ibrahim, Ahmed; Akl, Sherif A.; Aboul-Enen, Loay; Abdelazim, Ibrahim A.

    2015-01-01

    Objective To detect the accuracy of the three-dimensional multislice view (3D MSV) Doppler in the diagnosis of morbid adherent placenta (MAP). Material and Methods Fifty pregnant women at ≥28 weeks gestation with suspected MAP were included in this prospective study. Two-dimensional (2D) trans-abdominal gray-scale ultrasound scan was performed for the subjects to confirm the gestational age, placental location, and findings suggestive of MAP, followed by the 3D power Doppler and then the 3D MSV Doppler to confirm the diagnosis of MAP. Intraoperative findings and histopathology results of removed uteri in cases managed by emergency hysterectomy were compared with preoperative sonographic findings to detect the accuracy of the 3D MSV Doppler in the diagnosis of MAP. Results The 3D MSV Doppler increased the accuracy and predictive values of the diagnostic criteria of MAP compared with the 3D power Doppler. The sensitivity and negative predictive value (NPV) (79.6% and 82.2%, respectively) of crowded vessels over the peripheral sub-placental zone to detect difficult placental separation and considerable intraoperative blood loss in cases of MAP using the 3D power Doppler were increased to 82.6% and 84%, respectively, using the 3D MSV Doppler. In addition, the sensitivity, specificity, and positive predictive value (PPV) (90.9%, 68.8%, and 47%, respectively) of the disruption of the uterine serosa-bladder interface for predicting the need for emergency hysterectomy in cases of MAP using the 3D power Doppler were increased to 100%, 71.8%, and 50%, respectively, using the 3D MSV Doppler. Conclusion The 3D MSV Doppler is a useful adjunctive tool to the 3D power Doppler or color Doppler to refine the diagnosis of MAP. PMID:26401104
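
    The sensitivity, specificity, PPV, and NPV figures quoted above follow the standard 2x2 diagnostic-accuracy formulas. A sketch with hypothetical counts (for illustration only; not the study's data):

```python
# Standard 2x2 diagnostic-accuracy metrics from true/false positive and
# negative counts. The counts below are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=9, fp=3, fn=1, tn=37)
# sensitivity 0.9, specificity 0.925, PPV 0.75, NPV ~0.974
```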

  10. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  11. America National Parks Viewed in 3D by NASA MISR Anaglyph 1

    NASA Image and Video Library

    2016-08-25

    Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite is releasing four new anaglyphs that showcase 33 of our nation's national parks, monuments, historical sites and recreation areas in glorious 3D. Shown in the annotated image are Walnut Canyon National Monument, Sunset Crater Volcano National Monument, Wupatki National Monument, Grand Canyon National Park, Pipe Spring National Monument, Zion National Park, Cedar Breaks National Monument, Bryce Canyon National Park, Capitol Reef National Park, Navajo National Monument, Glen Canyon National Recreation Area, Natural Bridges National Monument, Canyonlands National Park, and Arches National Park. MISR views Earth with nine cameras pointed at different angles, giving it the unique capability to produce anaglyphs, stereoscopic images that allow the viewer to experience the landscape in three dimensions. The anaglyphs were made by combining data from MISR's vertical-viewing and 46-degree forward-pointing camera. You will need red-blue glasses in order to experience the 3D effect; ensure you place the red lens over your left eye. The images have been rotated so that north is to the left in order to enable 3D viewing because the Terra satellite flies from north to south. All of the images are 235 miles (378 kilometers) from west to east. These data were acquired June 18, 2016, Orbit 87774. http://photojournal.jpl.nasa.gov/catalog/PIA20889

  12. Enhancing multi-view autostereoscopic displays by viewing distance control (VDC)

    NASA Astrophysics Data System (ADS)

    Jurk, Silvio; Duckstein, Bernd; Renault, Sylvain; Kuhlmey, Mathias; de la Barré, René; Ebner, Thomas

    2014-03-01

    Conventional multi-view displays spatially interlace various views of a 3D scene and form appropriate viewing channels. However, they only support sufficient stereo quality within a limited range around the nominal viewing distance (NVD). If this distance is maintained, two slightly divergent views are projected to the viewer's eyes, both covering the entire screen. With increasing deviation from the NVD, the stereo image quality decreases. As a major drawback in usability, this distance has so far been fixed by the manufacturer. We propose a software-based solution that corrects false view assignments depending on the distance of the viewer. Our novel approach enables continuous view adaptation based on the calculation of intermediate views and a column-by-column rendering method. The algorithm controls each individual subpixel and generates a new interleaving pattern from selected views. In addition, we use color-coded test content to verify its efficacy. This novel technology shifts the physically determined NVD to a user-defined distance, thereby supporting stereopsis. The new viewing positions can fall in front of or behind the NVD of the original setup. Our algorithm can be applied to all multi-view autostereoscopic displays, independent of the slant or the periodicity of the optical element. In general, the viewing distance can be corrected by a factor of more than 2.5. By creating a continuous viewing area, the visualized 3D content is suitable even for persons with widely differing interocular distances, adults and children alike, without any deficiency in spatial perception.
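    As a hedged illustration of the kind of subpixel interleaving such displays rely on (this is a generic slanted-screen mapping, not the authors' algorithm; `pitch` and `slant` are assumed display parameters):

```python
def view_index(x_sub, y, n_views, pitch, slant):
    """Map a subpixel at column x_sub, row y to one of n_views on a
    multi-view screen whose optical element has the given pitch (in
    subpixels) and slant. Each subpixel is assigned a view according to
    its phase beneath the slanted lens/barrier."""
    phase = (x_sub + slant * y) % pitch   # position under the optical element
    return int(phase / pitch * n_views)
```

Viewing-distance correction then amounts to regenerating this interleaving pattern, substituting intermediate views selected for the measured viewer distance.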

  13. A Novel and Freely Available Interactive 3d Model of the Internal Carotid Artery.

    PubMed

    Valera-Melé, Marc; Puigdellívol-Sánchez, Anna; Mavar-Haramija, Marija; Juanes-Méndez, Juan A; San-Román, Luis; de Notaris, Matteo; Prats-Galino, Alberto

    2018-03-05

    We describe a new and freely available interactive 3D model of the intracranial internal carotid artery (ICA) and the skull base that also allows the user to display and compare its main segment classifications. High-resolution 3D human angiography (isometric voxel size 0.36 mm) and computed tomography angiography images were exported to Virtual Reality Modeling Language (VRML) format for processing in a 3D software platform and embedding in a 3D Portable Document Format (PDF) document that can be freely downloaded at http://diposit.ub.edu/dspace/handle/2445/112442 and runs under Acrobat Reader on Mac and Windows computers and Windows 10 tablets. The 3D-PDF allows visualisation and interaction through JavaScript-based functions (including zoom, rotation, selective visualization and transparency adjustment of structures, and a predefined sequence view of the main segment classifications if desired). The ICA and its main branches and loops, the Gasserian ganglion, the petrolingual ligament and the proximal and distal dural rings within the skull base environment (anterior and posterior clinoid processes, sella turcica, ethmoid and sphenoid bones, orbital fossae) may be visualized from different perspectives. This interactive 3D-PDF provides virtual views of the ICA and becomes an innovative tool to improve the understanding of the neuroanatomy of the ICA and surrounding structures.

  14. Creating Learning Environment Connecting Engineering Design and 3D Printing

    NASA Astrophysics Data System (ADS)

    Pikkarainen, Ari; Salminen, Antti; Piili, Heidi

    Modern engineering education requires continuous development in didactics, pedagogy and the practical methods used. 3D printing provides an excellent opportunity to connect different engineering areas in practice and to produce learning-by-doing applications. The 3D-printing technology used in this study is FDM (fused deposition modeling). FDM is currently the most widely used 3D-printing technology in commercial terms, and its qualities make it popular especially in academic environments. To achieve the best possible result, students incorporate the principles of DFAM (design for additive manufacturing) into their engineering design studies together with 3D printing. This paper presents a plan for creating a learning environment for mechanical engineering students that combines the aspects of engineering design, 3D-CAD learning and AM (additive manufacturing). As a result, process charts were created for carrying out the 3D printing process from a technological point of view and for the AM design process from an engineering design point of view. These charts are used in engineering design education. The learning environment is also being developed as a platform for Bachelor theses, a work-training environment for students, a prototyping service centre for cooperation partners, and a source of information for mechanical engineering education at Lapland University of Applied Sciences.

  15. Using transmission electron microscopy and 3View® to determine collagen fibril size and three-dimensional organization

    PubMed Central

    Mironov, Aleksandr; Cootes, Timothy F.; Holmes, David F.; Kadler, Karl E.

    2017-01-01

    Collagen fibrils are the major tensile element in vertebrate tissues where they occur as ordered bundles in the extracellular matrix. Abnormal fibril assembly and organization results in scarring, fibrosis, poor wound healing and connective tissue diseases. Transmission electron microscopy (TEM) is used to assess formation of the fibrils, predominantly by measuring fibril diameter. Here we describe an enhanced protocol for measuring fibril diameter as well as fibril-volume-fraction, mean fibril length, fibril cross-sectional shape, and fibril 3D organization that are also major determinants of tissue function. Serial section TEM (ssTEM) has been used to visualize fibril 3D-organization in vivo. However, serial block face-scanning electron microscopy (SBF-SEM) has emerged as a time-efficient alternative to ssTEM. The protocol described below is suitable for preparing tissues for TEM and SBF-SEM (by 3View®). We demonstrate the power of 3View® for studying collagen fibril organization in vivo and show how to find and track individual fibrils. Time scale: ~8 days from isolating the tissue to having a 3D image stack. PMID:23807286

  16. A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images

    PubMed Central

    Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade

    2016-01-01

    Background To describe how thoracic surgeons use 2D/3D medical imaging for surgical planning, clinical practice and teaching in thoracic surgery, and to compare Brazilian thoracic surgeons' initial and final preferences for 2D images versus 3D models before and after acquiring theoretical knowledge of the generation, manipulation and interactive viewing of 3D images. Methods A descriptive cross-sectional survey of Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery) who completed an online questionnaire via the internet on their computers or personal devices. Results Of the 395 invitations distributed by email, 107 surgeons completed the survey. There was no statistically significant difference between 2D images and 3D models for the following purposes: diagnosis, assessment of the extent of disease, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. Surgeons reported the type of tomographic image display routinely used in clinical practice (2D images alone, or 3D models combined with 2D images) and stated their preference at the end of the questionnaire. Exclusive use of 2D images: initial choice 50.47%, final preference 14.02%. Use of 3D models combined with 2D images: initial choice 48.60%, final preference 85.05%. The shift toward 3D models used together with 2D images was significant (P<0.0001). Conclusions There is a lack of knowledge of 3D imaging, and of its use and interactive manipulation in dedicated 3D applications, with a consequent lack of uniformity in CT-based surgical planning. These findings confirm a change in thoracic surgeons' preference from 2D views toward 3D imaging technologies. PMID:27621874

  17. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes, so web pages using classical SVG can only display 2D shapes on a screen. With the rapid development of stereoscopic 3D (S3D) technology, binocular 3D devices have come into wide use. Under these circumstances, we extend the widely used web rendering engine WebKit to support the description and display of S3D webpages, which requires extending SVG itself. In this paper, we describe how to design and implement SVG shapes with a stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, an important process in this project, is described as well. The modification of WebKit, made to support the generation of both the left view and the right view at the same time, is also discussed. As the results show, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the impression of depth and thickness, the shapes appear to be real 3D objects standing out from the screen, rather than simple curves and lines as before.
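    How a depth attribute might drive left/right view generation can be sketched as follows; the disparity formula and the `eye_sep` parameter are hypothetical, chosen only to illustrate the idea of rendering each shape twice with opposite horizontal offsets:

```python
def s3d_circle(cx, cy, r, depth, eye_sep=10.0):
    """Emit the left-view and right-view SVG markup for one circle.
    A shape at depth d is drawn with opposite horizontal offsets in the
    two views, producing binocular disparity proportional to d
    (illustrative formula, not WebKit's implementation)."""
    shift = depth * eye_sep / 2.0
    left  = '<circle cx="%.1f" cy="%.1f" r="%.1f"/>' % (cx - shift, cy, r)
    right = '<circle cx="%.1f" cy="%.1f" r="%.1f"/>' % (cx + shift, cy, r)
    return left, right
```

A shape with `depth=0` renders identically in both views and appears on the screen plane; increasing depth increases the offset between the two rendered copies.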

  18. Fully automated reconstruction of three-dimensional vascular tree structures from two orthogonal views using computational algorithms and production rules

    NASA Astrophysics Data System (ADS)

    Liu, Iching; Sun, Ying

    1992-10-01

    A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between the two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of the reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between the two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require a priori knowledge of vascular anatomy is demonstrated.
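    For two orthogonal parallel projections, the epipolar constraint the rule-based matcher exploits reduces to a simple coordinate check; this toy sketch (a deliberate simplification of the paper's system, with an assumed tolerance parameter) shows the principle:

```python
def match_and_reconstruct(front_pt, side_pt, tol=1.0):
    """Toy two-orthogonal-view case: under parallel projection the
    front view yields (x, z) and the side view yields (y, z), so the
    epipolar constraint reduces to the two z coordinates agreeing.
    Candidate matches violating it (|dz| > tol) are rejected; accepted
    pairs give the 3-D point directly."""
    x, z_f = front_pt
    y, z_s = side_pt
    if abs(z_f - z_s) > tol:
        return None                       # epipolar constraint violated
    return (x, y, (z_f + z_s) / 2.0)      # average out digitisation noise
```

In the real system this geometric test is only one rule; segment geometry and connectivity break the remaining ambiguities iteratively.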

  19. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of moving objects. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and reconstructed in 3-D. Because the linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, this work is of significance for measuring the 3-D morphology of moving objects.

  20. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance form a stereoscopic pair, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter, thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating that is exactly the one required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper discusses the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.

  1. Evaluation of DICOM viewer software for workflow integration in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.

    2015-03-01

    The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for the capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, unlike in hospital patient care, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to this missing integration, even simple visualization of a patient's image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, each raising functionality but also the demands on interfaces. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Covering general, integration, and viewing requirements, the survey applies the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis; covering most use cases, an EDCS and PACS interconnection with Weasis is recommended.

  2. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    NASA Astrophysics Data System (ADS)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric properties of the organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used to properly initialize the nonlinear optimization in the intensity-based method. In addition, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image, and their descriptors under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
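    The TPS model referenced here is built on a single radial basis function; as a minimal reminder (the full warp additionally needs an affine part and weights fitted to the matched control points, which this sketch omits):

```python
import math

def tps_kernel(r):
    """Thin Plate Spline radial basis U(r) = r^2 * log(r), defined as 0
    at r = 0. A 2-D TPS warp is an affine transform plus a weighted sum
    of U(|p - c_i|) over control points c_i; it is the minimum
    bending-energy interpolant, which is why it suits smoothly
    deforming organ surfaces."""
    return 0.0 if r == 0.0 else r * r * math.log(r)
```

Fitting the weights reduces to solving a linear system built from pairwise kernel values between the matched control points.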

  3. Mapping Metals Incorporation of a Whole Single Catalyst Particle Using Element Specific X-ray Nanotomography

    DOE PAGES

    Meirer, Florian; Morris, Darius T.; Kalirai, Sam; ...

    2015-01-02

    Full-field transmission X-ray microscopy has been used to determine the 3D structure of a whole individual fluid catalytic cracking (FCC) particle at high spatial resolution and in a fast, noninvasive manner, maintaining the full integrity of the particle. Using X-ray absorption mosaic imaging to combine multiple fields of view, computed tomography was performed to visualize the macropore structure of the catalyst and its availability for mass transport. We mapped the relative spatial distributions of Ni and Fe using multiple-energy tomography at the respective X-ray absorption K-edges and correlated these distributions with porosity and permeability of an equilibrated catalyst (E-cat) particle. Both metals were found to accumulate in outer layers of the particle, effectively decreasing porosity by clogging of pores and eventually restricting access into the FCC particle.

  4. Future of photorefractive based holographic 3D display

    NASA Astrophysics Data System (ADS)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

    The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full-parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video rate.

  5. Photogrammetry in 3d Modelling of Human Bone Structures from Radiographs

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2017-05-01

    Photogrammetry can have a great impact on the success of medical processes for diagnosis, treatment and surgery. Precise 3D models, which can be achieved by photogrammetry, considerably improve the results of orthopedic surgeries and processes. The usual 3D imaging techniques, computed tomography (CT) and magnetic resonance imaging (MRI), have limitations such as being usable only in non-weight-bearing positions, cost, high radiation dose (for CT), and the contraindication of MRI for patients with ferromagnetic implants or objects in their bodies. 3D reconstruction of bony structures from biplanar X-ray images is a reliable and accepted alternative for achieving accurate 3D information with a low radiation dose in weight-bearing positions. The information can be obtained from multi-view radiographs by using photogrammetry. The primary step in 3D reconstruction of human bone structures from medical X-ray images is calibration, which is done by applying the principles of photogrammetry. After the calibration step, 3D reconstruction can be done using efficient methods with different levels of automation. Because X-ray images differ in nature from optical images, the calibration step of stereoradiography poses distinct challenges in medical applications. In this paper, after demonstrating the general steps and principles of 3D reconstruction from X-ray images, calibration methods for 3D reconstruction from radiographs are compared and assessed from a photogrammetric point of view, considering metrics such as camera model, calibration object, accuracy, availability, patient-friendliness and cost.

  6. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system consisting of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matching of interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, a match propagation algorithm, well suited to this type of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
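    The initial stereo triangulation step can be illustrated for the simplified case of a rectified, calibrated pair; note the paper estimates general projection matrices from a 3D pattern, so this depth-from-disparity sketch is an assumed special case, not the authors' method:

```python
def triangulate_rectified(x_l, x_r, y, f_px, baseline):
    """Recover a 3-D point from a matched pixel pair in a rectified
    stereo rig: with focal length f (pixels) and baseline B (metres),
    disparity d = x_l - x_r gives depth Z = f*B/d, and X, Y follow
    from the pinhole model. Image coordinates are relative to the
    principal point."""
    d = x_l - x_r
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = f_px * baseline / d
    return (x_l * z / f_px, y * z / f_px, z)
```

In the general (non-rectified) case, each matched pair instead contributes linear constraints built from the two projection matrices, solved in a least-squares sense.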

  7. Sliced-up Craters 3-D

    NASA Image and Video Library

    2005-03-24

    During its very close flyby of Enceladus on March 9, 2005, NASA's Cassini spacecraft took images of parts of the icy moon. This scene is an icy landscape that has been scored by tectonic forces. 3D glasses are necessary to view this image.

  8. Magnetic 3D Cell Culturing

    NASA Image and Video Library

    2017-07-11

    iss052e014201 (7/11/2017) --- NASA astronaut Peggy Whitson uses a microscope to view Magnetic 3D Biocells. This investigation uses magnetized cells and tools to make it easier to handle cells and cultures and to improve the reproducibility of experiments.

  9. Opportunity at the Wall 3-D

    NASA Image and Video Library

    2004-11-23

    NASA's Mars Exploration Rover Opportunity reached the base of Burns Cliff, a portion of the inner wall of Endurance Crater, in this anaglyph from the rover's 285th martian day, Nov. 11, 2004. 3D glasses are necessary to view this image.

  10. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.

    PubMed

    Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A

    2014-11-01

    An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  11. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D

    DOE PAGES

    Lasnier, Charles J.; Allen, Steve L.; Ellis, Ronald E.; ...

    2014-08-26

    An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  12. Independent and joint associations of TV viewing time and snack food consumption with the metabolic syndrome and its components; a cross-sectional study in Australian adults

    PubMed Central

    2013-01-01

    Background Television (TV) viewing time is positively associated with the metabolic syndrome (MetS) in adults. However, the mechanisms through which TV viewing time is associated with MetS risk remain unclear. There is evidence that the consumption of energy-dense, nutrient poor snack foods increases during TV viewing time among adults, suggesting that these behaviors may jointly contribute towards MetS risk. While the association between TV viewing time and the MetS has previously been shown to be independent of adult’s overall dietary intake, the specific influence of snack food consumption on the relationship is yet to be investigated. The purpose of this study was to examine the independent and joint associations of daily TV viewing time and snack food consumption with the MetS and its components in a sample of Australian adults. Methods Population-based, cross-sectional study of 3,110 women and 2,572 men (>35 years) without diabetes or cardiovascular disease. Participants were recruited between May 1999 and Dec 2000 in the six states and the Northern Territory of Australia. Participants were categorised according to self-reported TV viewing time (low: 0-2 hr/d; high: >2 hr/d) and/or consumption of snack foods (low: 0-3 serves/d; high: >3 serves/d). Multivariate odds ratios [95% CI] for the MetS and its components were estimated using gender-specific, forced entry logistic regression. Results OR [95% CI] for the MetS was 3.59 [2.25, 5.74] (p≤0.001) in women and 1.45 [1.02, 3.45] (p = 0.04) in men who jointly reported high TV viewing time and high snack food consumption. Obesity, insulin resistance and hypertension (women only) were also jointly associated with high TV viewing time and high snack food consumption. Further adjustment for diet quality and central adiposity maintained the associations in women. 
High snack food consumption was also shown to be independently associated with MetS risk [OR: 1.94 (95% CI: 1.45, 2.60), p < 0.001] and hypertension [OR: 1.43 (95% CI: 1.01, 2.02), p = 0.05] in women only. For both men and women, high TV viewing time was independently associated with the MetS and its individual components (except hypertension). Conclusion TV viewing time and snack food consumption are independently and jointly associated with the MetS and its components, particularly in women. In addition to physical activity, population strategies targeting MetS prevention should address high TV time and excessive snack food intake. PMID:23927043

  13. Independent and joint associations of TV viewing time and snack food consumption with the metabolic syndrome and its components; a cross-sectional study in Australian adults.

    PubMed

    Thorp, Alicia A; McNaughton, Sarah A; Owen, Neville; Dunstan, David W

    2013-08-09

    Television (TV) viewing time is positively associated with the metabolic syndrome (MetS) in adults. However, the mechanisms through which TV viewing time is associated with MetS risk remain unclear. There is evidence that the consumption of energy-dense, nutrient-poor snack foods increases during TV viewing time among adults, suggesting that these behaviors may jointly contribute to MetS risk. While the association between TV viewing time and the MetS has previously been shown to be independent of adults' overall dietary intake, the specific influence of snack food consumption on the relationship is yet to be investigated. The purpose of this study was to examine the independent and joint associations of daily TV viewing time and snack food consumption with the MetS and its components in a sample of Australian adults. Population-based, cross-sectional study of 3,110 women and 2,572 men (>35 years) without diabetes or cardiovascular disease. Participants were recruited between May 1999 and December 2000 in the six states and the Northern Territory of Australia. Participants were categorised according to self-reported TV viewing time (low: 0-2 hr/d; high: >2 hr/d) and/or consumption of snack foods (low: 0-3 serves/d; high: >3 serves/d). Multivariate odds ratios [95% CI] for the MetS and its components were estimated using gender-specific, forced-entry logistic regression. The OR [95% CI] for the MetS was 3.59 [2.25, 5.74] (p≤0.001) in women and 1.45 [1.02, 3.45] (p = 0.04) in men who jointly reported high TV viewing time and high snack food consumption. Obesity, insulin resistance and hypertension (women only) were also jointly associated with high TV viewing time and high snack food consumption. Further adjustment for diet quality and central adiposity maintained the associations in women. High snack food consumption was also independently associated with MetS risk [OR: 1.94 (95% CI: 1.45, 2.60), p < 0.001] and hypertension [OR: 1.43 (95% CI: 1.01, 2.02), p = 0.05] in women only. For both men and women, high TV viewing time was independently associated with the MetS and its individual components (except hypertension). TV viewing time and snack food consumption are independently and jointly associated with the MetS and its components, particularly in women. In addition to physical activity, population strategies targeting MetS prevention should address high TV time and excessive snack food intake.
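    The odds ratios above come from logistic regression, but the basic odds-ratio arithmetic can be illustrated on a hypothetical 2×2 exposure table. The counts below are invented for illustration and are not the study's data; the function name is also illustrative:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Woolf's method: standard error of log(OR)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not from the study): 120 high-TV/high-snack
# participants with MetS, 80 without; 150 low-exposure participants
# with MetS, 250 without.
or_, lo, hi = odds_ratio_ci(120, 80, 150, 250)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# → OR = 2.50, 95% CI [1.77, 3.54]
```

A CI excluding 1.0, as here, is what licenses calling the association statistically significant at the 5% level.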

  14. Serial sectioning methods for 3D investigations in materials science.

    PubMed

    Zankel, Armin; Wagner, Julian; Poelt, Peter

    2014-07-01

    A variety of methods for the investigation and 3D representation of the inner structure of materials has been developed. In this paper, techniques based on slice and view using scanning microscopy for imaging are presented and compared. Three different methods of serial sectioning, combined with scanning electron, scanning ion or atomic force microscopy (AFM), were placed under scrutiny: serial block-face scanning electron microscopy, which uses an ultramicrotome built into the chamber of a variable pressure scanning electron microscope; three-dimensional (3D) AFM, which combines a (cryo-)ultramicrotome with an atomic force microscope; and 3D FIB, which delivers results by slicing with a focused ion beam. These three methods complement one another in many respects, e.g., in the type of materials that can be investigated, the resolution that can be obtained and the information that can be extracted from the 3D reconstructions. A detailed review is given of the preparation, the slice and view process itself, the limitations of the methods and possible artifacts. Applications of each technique are also provided. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is fundamental to reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm: the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a three-principle filter for essential-matrix estimation is designed, and the essential matrix is calculated using an improved a contrario RANSAC method. A single-view point cloud is accurately reconstructed from two images. After this, the overlapping features are used to eliminate the accumulated errors introduced as views are added, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
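    The Hellinger-style histogram comparison used above in place of Euclidean distance for SIFT descriptors can be sketched as follows. This is a minimal stand-alone sketch of the standard Hellinger distance via the Bhattacharyya coefficient, not the authors' implementation:

```python
import math

def hellinger_distance(p, q):
    """Hellinger distance between two histograms (e.g. 128-bin SIFT
    descriptors). Histograms are L1-normalized first, then compared
    through the Bhattacharyya coefficient."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coeff.
    return math.sqrt(max(0.0, 1.0 - bc))

# Identical descriptors -> distance ~0; disjoint ones -> distance 1.
print(hellinger_distance([1, 2, 3], [1, 2, 3]))  # ≈ 0
print(hellinger_distance([1, 0], [0, 1]))        # 1.0
```

Because the square root compresses large bin values, this measure is less dominated by a few strong gradient bins than Euclidean distance, which is what helps on weakly textured images.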

  16. HST, survey views of Hubble after berthing in payload bay on Flight Day 3

    NASA Image and Video Library

    1997-02-13

    S82-E-5140 (13 Feb. 1997) --- A back-lighted full view of the Hubble Space Telescope (HST) in the grasp of the Remote Manipulator System (RMS) following capture early today. The limb of Earth forms part of the background. This view was taken with an Electronic Still Camera (ESC).

  17. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based or Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD), or time-series data analysis, in 3D has gained great attention due to its capability of providing volumetric dynamics that facilitate more applications and more accurate results. State-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis and largely ignore the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis toward highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, among others. Given the broad spectrum of applications and the different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on algorithmic aspects of 3D CD.
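    A minimal instance of the "geometric comparison" type of 3D CD described above is DEM differencing between two epochs. The sketch below uses toy grids and an assumed change threshold purely for illustration; it flags per-cell elevation change and integrates the flagged cells into a net volume:

```python
def dem_change(dem_t1, dem_t2, cell_area=1.0, threshold=0.5):
    """Per-cell elevation differences between two DEM epochs.
    Returns a boolean change mask and the net volume change
    (height difference times cell area, summed over changed cells)."""
    mask, volume = [], 0.0
    for row1, row2 in zip(dem_t1, dem_t2):
        mask_row = []
        for h1, h2 in zip(row1, row2):
            dh = h2 - h1
            changed = abs(dh) > threshold
            mask_row.append(changed)
            if changed:
                volume += dh * cell_area
        mask.append(mask_row)
    return mask, volume

# Toy 2x3 DEMs: one cell gains 2 m of elevation, one loses 1 m,
# and one changes by only 0.1 m (below the threshold, so ignored).
t1 = [[10.0, 10.0, 10.0], [10.0, 10.0, 10.0]]
t2 = [[10.0, 12.0, 10.0], [ 9.0, 10.0, 10.1]]
mask, vol = dem_change(t1, t2)
print(vol)  # 2.0 - 1.0 = 1.0
```

This is exactly the volumetric information that 2D pixel-based CD cannot deliver; real pipelines add co-registration and uncertainty-aware thresholds on top of this core difference operation.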

  18. Perspective View with Landsat Overlay, Los Angeles Basin

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Most of Los Angeles is visible in this computer-generated north-northeast perspective viewed from above the Pacific Ocean. In the foreground the hilly Palos Verdes peninsula lies to the left of the harbor at Long Beach, and in the middle distance the various communities that comprise the greater Los Angeles area appear as shades of grey and white. In the distance the San Gabriel Mountains rise up to separate the basin from the Mojave Desert, which can be seen near the top of the image.

    This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and an enhanced color Landsat 5 satellite image mosaic. Topographic expression is exaggerated one and one-half times.

    Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: view width 70 kilometers (42 miles); view distance 160 kilometers (100 miles)
    Location: 34.0 deg. North lat., 118.2 deg. West lon.
    Orientation: view north-northeast
    Image Data: Landsat bands 3, 2, 1 as red, green, blue, respectively
    Date Acquired: February 2000 (SRTM)

  19. Dynamic tracking of prosthetic valve motion and deformation from bi-plane x-ray views: feasibility study

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Wagner, Martin; Raval, Amish N.; Speidel, Michael A.

    2016-03-01

    Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. 20 image sequences with 50 frames of valve deployment were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 ± 2.6 mm (mean ± S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50) a TRE of 0.04 mm ± 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 ± 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.
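    The per-frame 2D/3D registration loop, iteratively adjusting model parameters until forward projections match both views, can be illustrated with a translation-only toy version. The orthogonal projection geometry, the gradient-descent optimizer and all names below are simplifying assumptions for illustration, not the authors' method:

```python
def project_frontal(p, t):
    # Frontal view images the (x, z) coordinates of a translated point.
    return (p[0] + t[0], p[2] + t[2])

def project_lateral(p, t):
    # Lateral view images the (y, z) coordinates.
    return (p[1] + t[1], p[2] + t[2])

def register(model, obs_f, obs_l, iters=500, lr=0.05):
    """Estimate translation t so forward projections of the 3D model
    match the observed frontal and lateral 2D point sets."""
    t = [0.0, 0.0, 0.0]
    n = len(model)
    for _ in range(iters):
        g = [0.0, 0.0, 0.0]  # gradient of squared reprojection error
        for p, of, ol in zip(model, obs_f, obs_l):
            fu, fv = project_frontal(p, t)
            lu, lv = project_lateral(p, t)
            g[0] += 2 * (fu - of[0])                   # x seen only frontally
            g[1] += 2 * (lu - ol[0])                   # y seen only laterally
            g[2] += 2 * ((fv - of[1]) + (lv - ol[1]))  # z seen in both views
        t = [ti - lr * gi / n for ti, gi in zip(t, g)]
    return t

# Toy 3D model and a known true displacement.
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
true_t = (2.0, -1.0, 0.5)
obs_f = [project_frontal(p, true_t) for p in model]
obs_l = [project_lateral(p, true_t) for p in model]
est = register(model, obs_f, obs_l)
tre = max(abs(e - s) for e, s in zip(est, true_t))  # registration error
print(tre)  # ≈ 0 after convergence
```

The toy version already shows why two views are needed: each single view constrains only two of the three translation components, while z, visible in both, is the best constrained. The full method additionally optimizes orientation, expansion state and deformation.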

  20. Dynamic tracking of prosthetic valve motion and deformation from bi-plane x-ray views: feasibility study.

    PubMed

    Hatt, Charles R; Wagner, Martin; Raval, Amish N; Speidel, Michael A

    2016-01-01

    Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. 20 image sequences with 50 frames of valve deployment were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 ± 2.6 mm (mean ± S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50) a TRE of 0.04 mm ± 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 ± 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.
