Science.gov

Sample records for 3d volumetric images

  1. Computational integral-imaging reconstruction-based 3-D volumetric target object recognition by using a 3-D reference object.

    PubMed

    Kim, Seung-Cheol; Park, Seok-Chan; Kim, Eun-Soo

    2009-12-01

    In this paper, we propose a novel computational integral-imaging reconstruction (CIIR)-based three-dimensional (3-D) image correlator system for the recognition of 3-D volumetric objects by employing a 3-D reference object. A number of plane object images (POIs) computationally reconstructed from the 3-D reference object are used for 3-D volumetric target recognition: simultaneous 3-D image correlations between two sets of target and reference POIs, each depth-dependently reconstructed using the CIIR method, are performed for effective recognition of 3-D volumetric objects. Successful experiments with this CIIR-based 3-D image correlator confirmed the feasibility of the proposed method.
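
    The depth-wise correlation step described above can be illustrated with a small sketch (not the authors' implementation; the function names and the voting threshold are invented for illustration): each reconstructed target plane image is correlated with its depth-matched reference plane image, and the scores are pooled.

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation score between two plane images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def recognize(target_pois, reference_pois, threshold=0.8):
    """Correlate depth-matched plane object images (POIs) and pool the scores."""
    scores = [normalized_correlation(t, r)
              for t, r in zip(target_pois, reference_pois)]
    return np.mean(scores) > threshold, scores

# toy example: identical POI stacks correlate perfectly at every depth
rng = np.random.default_rng(0)
stack = [rng.random((8, 8)) for _ in range(4)]
match, scores = recognize(stack, stack)
```

    A real CIIR correlator would shift the reference over the target to find a correlation peak rather than compare aligned images directly.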

  2. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to present natural 3D images free from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. The system lets many users simultaneously view natural 3D objects at a consistent position and posture. A simple optometric experiment using a refractometer suggests that the proposed method can produce 3-D images without contradiction between binocular convergence and focal accommodation.

  3. 3D imaging provides a high-resolution, volumetric approach for analyzing biofouling.

    PubMed

    First, Matthew R; Policastro, Steven A; Strom, Matthew J; Riley, Scott C; Robbins-Wamsley, Stephanie H; Drake, Lisa A

    2014-01-01

    A volumetric approach for determining the fouling burden on surfaces is presented, consisting of a 3D camera imaging system with fine (5 μm) resolution. Panels immersed in an estuary on the southwest coast of Florida, USA were imaged and the data were used to quantify seasonal changes in the biofouling community. Test panels, which were submerged in seawater for up to one year, were analyzed before and after gentle scrubbing to quantify the biovolume of the total fouling community (i.e., soft and hard organisms) and the hard fouling community. Total biofouling ranged from 0.01 to 1.16 cm³ cm⁻² throughout the immersion period; soft fouling constituted 22-87% of the total biovolume. In the future, this approach may be used to inform numerical models of fluid-surface interfaces and to evaluate, with high resolution, the morphology of fouling organisms in response to antifouling technologies.

  4. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  5. FELIX: a volumetric 3D laser display

    NASA Astrophysics Data System (ADS)

    Bahr, Detlef; Langhans, Knut; Gerken, Martin; Vogt, Carsten; Bezecny, Daniel; Homann, Dennis

    1996-03-01

    In this paper, an innovative approach to true 3D image presentation in a space-filling, volumetric laser display is described. The introduced prototype system is based on a moving target screen that sweeps the display volume. The net result is the optical equivalent of a 3D array of image points illuminated to form a model of the object that occupies a physical space. Wireframe graphics are presented within the display volume, which a group of people can walk around and examine simultaneously from nearly any orientation and without any visual aids. In addition to the detailed vector scanning mode, a raster-scanned system and a combination of both techniques are under development. The volumetric 3D laser display technology for true reproduction of spatial images can tremendously improve the viewer's ability to interpret data and to reliably determine distance, shape and orientation. Possible applications for this development range from air traffic control, where moving blips of light represent individual aircraft in a true-to-scale projected airspace of an airport, to various medical applications (e.g. electrocardiography, computed tomography), to entertainment and education visualization, as well as imaging in the fields of engineering and Computer Aided Design.

  6. Wide-field-of-view image pickup system for multiview volumetric 3D displays using multiple RGB-D cameras

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Kakeya, Hideki

    2014-03-01

    A real-time, wide-field-of-view image pickup system for coarse integral volumetric imaging (CIVI) is realized. The system applies a CIVI display to live-action video generated by real-time 3D reconstruction. By using multiple RGB-D cameras viewing from different directions, a complete surface of the objects and a wide field of view can be shown on our CIVI displays. A prototype system is constructed and works as follows. First, image features and depth data are used for fast and accurate calibration. Second, 3D point cloud data are obtained by each RGB-D camera and converted into a common coordinate system. Third, multiview images are constructed by perspective transformation from different viewpoints. Finally, the image for each viewpoint is divided according to the depth of each pixel for a volumetric view. Experiments show a better result than using only one RGB-D camera, and the whole system works in real time.
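
    The second step above, converting each camera's point cloud into a common coordinate system, is a rigid transform per camera. A minimal sketch (assuming a per-camera rotation R and translation t obtained from the calibration step; not the authors' code):

```python
import numpy as np

def to_common_frame(points, R, t):
    """Map an Nx3 point cloud from one RGB-D camera's frame into the
    shared world frame using rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

# toy check: a 90-degree rotation about z plus a shift along x
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
pts = np.array([[1.0, 0.0, 0.0]])
world = to_common_frame(pts, R, t)   # rotates (1,0,0) to (0,1,0), then shifts
```

    Merging the transformed clouds from all cameras then yields the complete surface from which the multiview images are rendered.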

  7. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. The paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation that must cope with some challenging characteristics, such as small size, wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the intensity inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation results for the hippocampus.
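
    The Dice metric used for validation above has a very short definition: twice the overlap of two binary volumes divided by the sum of their sizes. A self-contained sketch (not from the paper):

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary volumes (1 = identical)."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

# toy volumes: an 8-voxel cube vs. a 12-voxel slab that contains it
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 1:4] = True
# overlap = 8 voxels, so Dice = 2*8 / (8 + 12) = 0.8
```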

  8. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to convey spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive capability of the human visual system, its perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  9. A 3D Level Sets Method for Segmenting the Mouse Spleen and Follicles in Volumetric microCT Images

    SciTech Connect

    Price, Jeffery R; Aykac, Deniz; Wall, Jonathan

    2006-01-01

    We present a semi-automatic, 3D approach for segmenting the mouse spleen, and its interior follicles, in volumetric microCT imagery. Based upon previous 2D level sets work, we develop a fully 3D implementation and provide the corresponding finite difference formulas. We incorporate statistical and proximity weighting schemes to improve segmentation performance. We also note an issue with the original algorithm and propose a solution that proves beneficial in our experiments. Experimental results are provided for artificial and real data.
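
    As a rough illustration of the kind of 3D finite-difference update underlying level set segmentation (a generic sketch, not the authors' weighted formulation): the level set function is evolved by moving its zero level set along the local gradient direction at a chosen speed.

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit finite-difference update of a 3D level set function:
    phi <- phi - dt * speed * |grad phi|, with central differences."""
    gx, gy, gz = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return phi - dt * speed * grad_mag

# signed distance to a radius-5 sphere on a 17^3 grid; a positive,
# uniform speed shrinks the zero level set inward each step
x, y, z = np.mgrid[-8:9, -8:9, -8:9]
phi = np.sqrt(x**2 + y**2 + z**2) - 5.0
phi_next = level_set_step(phi, speed=1.0)
```

    The statistical and proximity weighting schemes mentioned above would enter through a spatially varying `speed` term rather than the constant used here.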

  10. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of views without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen seen in previous designs.

  11. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  12. Volumetric (3D) compressive sensing spectral domain optical coherence tomography

    PubMed Central

    Xu, Daguang; Huang, Yong; Kang, Jin U.

    2014-01-01

    In this work, we propose a novel three-dimensional compressive sensing (CS) approach for spectral domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of taking a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the amount of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension-by-dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving the image quality. PMID:25426320
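
    The compounding effect of under-sampling along all three dimensions is easy to see with a toy mask generator (a hypothetical sketch; the paper's actual sampling pattern and reconstruction are more sophisticated): keeping half the indices along each axis already retains only one eighth of the spectral volume.

```python
import numpy as np

def stride_mask(shape, step=(2, 2, 2)):
    """Binary sampling mask that keeps every step-th index along each
    of the three dimensions of the spectral volume."""
    mask = np.zeros(shape, dtype=bool)
    mask[::step[0], ::step[1], ::step[2]] = True
    return mask

mask = stride_mask((64, 64, 64))
fraction = mask.mean()   # (1/2)^3 = 12.5% of the full spectral volume,
                         # already under the 20% figure quoted above
```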

  13. Volumetric (3D) compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-11-01

    In this work, we propose a novel three-dimensional compressive sensing (CS) approach for spectral domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of taking a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the amount of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension-by-dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving the image quality.

  14. Development of a temporal multiplexed 3D beam-scanning Lissajous trajectory microscope for rapid multimodal volumetric imaging

    NASA Astrophysics Data System (ADS)

    Newman, Justin A.; Sullivan, Shane Z.; Dinh, Janny; Sarkar, Sreya; Simpson, Garth J.

    2016-03-01

    A beam-scanning microscope is described based on a temporally multiplexed Lissajous trajectory for achieving 1 kHz frame rate 3D imaging. The microscope utilizes two fast-scan resonant mirrors to direct the optical beam on a circuitous, Lissajous trajectory through the field of view. Acquisition of two simultaneous focal planes is achieved by implementation of an optical delay line, producing a second incident beam at a different focal plane relative to the initial incident beam. High frame rates are achieved by separating the full time-domain data into shorter sub-trajectories, resulting in undersampling of the field of view. A model-based image reconstruction (MBIR) 3D in-painting algorithm is utilized for interpolating the missing data to recover full images. The MBIR algorithm uses a maximum a posteriori estimation with a generalized Gaussian Markov random field prior model for image interpolation. Because images are acquired using photomultiplier tubes or photodiodes, parallelization for multi-channel imaging is straightforward. Preliminary results obtained using a Lissajous trajectory beam-scanning approach coupled with temporal multiplexing by the implementation of an optical delay line demonstrate the ability to acquire 2 distinct focal planes simultaneously at frame rates >450 Hz for full 512 × 512 images. The use of multi-channel data acquisition cards allows for simultaneous multimodal image acquisition with perfect image registration between all imaging modalities. Also discussed here is the implementation of Lissajous trajectory beam-scanning on commercially available microscope hardware.
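
    A Lissajous trajectory is simply two sinusoids at different frequencies driving the two scan axes; truncating the sample stream yields the undersampled sub-trajectories described above. A minimal sketch (illustrative only; frequencies and sample counts are invented, not the paper's mirror parameters):

```python
import numpy as np

def lissajous(fx, fy, n_samples, phase=np.pi / 2):
    """Sample one period of a Lissajous trajectory driven by two
    resonant-mirror frequencies fx and fy (arbitrary units)."""
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * fx * t)
    y = np.sin(2 * np.pi * fy * t + phase)
    return x, y

# a sub-trajectory (first quarter of the samples) covers only part of
# the field of view; the MBIR step would in-paint the missing pixels
x, y = lissajous(fx=7, fy=9, n_samples=4096)
sub_x, sub_y = x[:1024], y[:1024]
```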

  15. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly automated technique for medical image segmentation, where a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed primarily to handle the complex object topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work such that multiple sets of points are allowed to exist simultaneously. Point sets can now be automatically merged and split to accommodate the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by extending the 'turtle algorithm', presented earlier, with a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task time reduction of 84.7% was achieved when compared to a user performing 2D Livewire segmentation on every slice.
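
    At its core, a 2D Livewire contour segment is a minimum-cost path between two user seedpoints on an edge-sensitive cost image, typically found with Dijkstra's algorithm. A small generic sketch of that core idea (not the authors' 3D extension; the cost grid is invented):

```python
import heapq

def livewire_path(cost, start, goal):
    """Minimum-cost path on a 2D cost grid (4-connected); cost[r][c]
    is the price of entering cell (r, c). Returns the path start->goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# the low-cost band along the top and right (an "edge") attracts the path
grid = [[1, 1, 1],
        [9, 9, 1],
        [9, 9, 1]]
path = livewire_path(grid, (0, 0), (2, 2))
```

    In a real Livewire, the cost image is derived from gradient magnitude and direction so the path snaps to object boundaries.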

  16. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  17. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  18. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    PubMed

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the tumor resection extent vis-à-vis volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI) requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. Accuracy of the method is validated by comparing the volume estimated using the proposed method with the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. Our results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of tumors. PMID:25865822
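
    For intuition on the volumetric measurement being validated above, a much simpler stand-in for alpha-shape surface integration is voxel counting: multiply the number of segmented voxels by the physical volume of one voxel. A hedged sketch (not the paper's method; spacing values are invented):

```python
def tumor_volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary segmentation mask in millilitres: voxel count
    times per-voxel volume (a crude stand-in for alpha-shape volumetry)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    count = sum(int(v) for plane in mask for row in plane for v in row)
    return count * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# 10 x 10 x 10 block of 2 mm isotropic voxels -> 8000 mm^3 = 8 mL
mask = [[[1] * 10 for _ in range(10)] for _ in range(10)]
volume = tumor_volume_ml(mask, spacing_mm=(2.0, 2.0, 2.0))
```

    Alpha shapes instead reconstruct a surface from the contour points, which handles concave tumor boundaries that voxel counting over a coarse mask can misrepresent.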

  19. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    PubMed

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the tumor resection extent vis-à-vis volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI) requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. Accuracy of the method is validated by comparing the volume estimated using the proposed method with the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. Our results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of tumors.

  20. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  1. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal imaging device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics of the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs: spatial information is represented in eigenvectors and temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model.
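
    The PCA motion model described above can be sketched in a few lines (a toy illustration with a 1D displacement field; the function names and data are invented, not the authors' implementation): fit eigenvectors to the phase-to-phase variation of the DVFs, then synthesize a new DVF as the mean field plus eigen-coefficients times eigenvectors.

```python
import numpy as np

def fit_motion_model(dvfs, n_modes=2):
    """PCA motion model from a stack of flattened DVFs (one row per 4DCT
    phase): mean field plus the leading right singular vectors."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]          # eigenvectors span the motion subspace

def synthesize_dvf(mean, modes, coeffs):
    """New DVF = mean + sum of eigen-coefficients times eigenvectors; the
    coefficients are what the EPID-driven optimization would update."""
    return mean + np.asarray(coeffs) @ modes

# toy model: 10 breathing phases of a sinusoidal 1D displacement field
phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
dvfs = np.outer(np.sin(phases), np.ones(50))      # 10 phases x 50 voxels
mean, modes = fit_motion_model(dvfs, n_modes=1)
dvf = synthesize_dvf(mean, modes, [0.5])
```

    In the actual method the coefficient vector is found by optimizing agreement between a digitally reconstructed radiograph of the deformed volume and the measured EPID image.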

  2. A framework for automatic construction of 3D PDM from segmented volumetric neuroradiological data sets.

    PubMed

    Fu, Yili; Gao, Wenpeng; Xiao, Yongfei; Liu, Jimin

    2010-03-01

    A 3D point distribution model (PDM) of subcortical structures can be applied in medical image analysis by providing prior knowledge. However, accurate shape representation and point correspondence are still challenging for building 3D PDMs. This paper presents a novel framework for the automated construction of 3D PDMs from a set of segmented volumetric images. First, a template shape is generated according to the spatial overlap. Then the corresponding landmarks among shapes are automatically identified by a novel hierarchical global-to-local approach, which combines iterative closest point based global registration and active surface model based local deformation to transform the template shape to all other shapes. Finally, a 3D PDM is constructed. Experimental results on four subcortical structures show that the proposed method is able to construct 3D PDMs with high quality in compactness, generalization and specificity, and is more efficient and effective than state-of-the-art methods such as MDL and SPHARM. PMID:19631401

  3. Realization of undistorted volumetric multiview image with multilayered integral imaging.

    PubMed

    Kakeya, Hideki

    2011-10-10

    This paper presents a 3D display based on the coarse integral volumetric imaging (CIVI) technique. Though expression of focal effects and specular light is enabled by combining volumetric and multiview solutions, the image quality of conventional systems has remained low. In this paper a high-quality 3D image is attained with the CIVI technology, which compensates for image distortion and discontinuity based on optical calculations. In addition, a compact system design that layers color and monochrome panels is proposed.

  4. Morphological and Volumetric Assessment of Cerebral Ventricular System with 3D Slicer Software.

    PubMed

    Gonzalo Domínguez, Miguel; Hernández, Cristina; Ruisoto, Pablo; Juanes, Juan A; Prats, Alberto; Hernández, Tomás

    2016-06-01

    We present a technological process based on the 3D Slicer software for the three-dimensional study of the brain's ventricular system for teaching purposes. It assesses the morphology of this complex brain structure, as a whole and in any spatial position, and allows comparison with pathological studies, where its anatomy visibly changes. 3D Slicer was also used to obtain volumetric measurements in order to provide a more comprehensive and detailed representation of the ventricular system. We assess the potential of this software for processing high-resolution magnetic resonance images and generating three-dimensional reconstructions of the ventricular system. PMID:27147517

  5. 3-D Volumetric Evaluation of Human Mandibular Growth

    PubMed Central

    Reynolds, Mathew; Reynolds, Michael; Adeeb, Samer; El-Bialy, Tarek

    2011-01-01

    Bone growth is a complex process that is controlled by a multitude of mechanisms that are not fully understood. Most of the current methods employed to measure the growth of bones focus on either studying cadaveric bones from different individuals of different ages, or successive two-dimensional (2D) radiographs. Both techniques have their known limitations. The purpose of this study was to explore a technique for quantifying the three-dimensional (3D) growth of an adolescent human mandible over the period of one year utilizing cone beam computed tomography (CBCT) scans taken for regular orthodontic records. Three-dimensional virtual models were created from the CBCT data using mainstream medical imaging software. A comparison between computer-generated surface meshes of successive 3D virtual models illustrates the magnitude of relative mandible growth. The results of this work are in agreement with previously reported data from human cadaveric studies and implantable marker studies. The presented method provides a new, relatively simple basis (utilizing commercially available software) to visualize and evaluate individualized 3D (mandibular) growth in vivo. PMID:22046201

  6. Application of a 3D volumetric display for radiation therapy treatment planning I: quality assurance procedures.

    PubMed

    Gong, Xing; Kirk, Michael Collins; Napoli, Josh; Stutsman, Sandy; Zusag, Tom; Khelashvili, Gocha; Chu, James

    2009-07-17

    To design and implement a set of quality assurance tests for an innovative 3D volumetric display for radiation treatment planning applications. A genuine 3D display (Perspecta Spatial 3D, Actuality-Systems Inc., Bedford, MA) has been integrated with the Pinnacle TPS (Philips Medical Systems, Madison, WI) for treatment planning. The Perspecta 3D display renders a 25 cm diameter volume, viewable from any side, floating within a translucent dome. In addition to displaying all 3D data exported from Pinnacle, the system provides a 3D mouse to define beam angles and apertures and to measure distance. The focus of this work is the design and implementation of a quality assurance program for 3D displays and specific 3D planning issues, as guided by AAPM Task Group Report 53. A series of acceptance and quality assurance tests was designed to evaluate the accuracy of CT images, contours, beams, and dose distributions as displayed on Perspecta. Three-dimensional matrices, rulers and phantoms with known spatial dimensions were used to check Perspecta's absolute spatial accuracy. In addition, a system of tests was designed to confirm Perspecta's ability to import and display Pinnacle data consistently. CT scans of phantoms were used to confirm beam field size, divergence, and gantry and couch angular accuracy as displayed on Perspecta. Beam angles were verified through Cartesian coordinate system measurements and by CT scans of phantoms rotated at known angles. Beams designed on Perspecta were exported to Pinnacle and checked for accuracy. Doses at sampled points were checked for consistency with Pinnacle and agreed within 1% or 1 mm. All data exported from Pinnacle to Perspecta were displayed consistently. The 3D spatial display of images, contours, and dose distributions was consistent with the Pinnacle display. When measured by the 3D ruler, the distances between any two points calculated using Perspecta agreed with Pinnacle within the measurement error.
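A "within 1% or 1 mm" agreement criterion of the kind quoted above can be expressed as a simple acceptance check. A hypothetical helper, not taken from the paper (name and defaults are illustrative):

```python
def agrees(reference, measured, rel_tol=0.01, abs_tol_mm=1.0):
    """Pass if the deviation is within 1% of the reference value OR within 1 mm."""
    diff = abs(measured - reference)
    return diff <= rel_tol * abs(reference) or diff <= abs_tol_mm
```

For example, a 200 mm reference distance measured as 201.5 mm passes (within 1%), while a 20 mm distance measured as 21.5 mm fails (off by 1.5 mm and 7.5%).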

  7. DSA volumetric 3D reconstructions of intracranial aneurysms: A pictorial essay

    PubMed Central

    Cieściński, Jakub; Serafin, Zbigniew; Strześniewski, Piotr; Lasek, Władysław; Beuth, Wojciech

    2012-01-01

    Digital subtraction angiography (DSA) performed in three projections remains the gold standard of cerebral vessel imaging. However, in specific clinical cases, many additional projections are required, or complete visualization of a lesion may even be impossible with 2D angiography. Three-dimensional (3D) reconstructions of rotational angiography have been reported to significantly improve the performance of DSA. In this pictorial essay, specific applications of this technique in the management of intracranial aneurysms are presented, including preoperative aneurysm evaluation, intraoperative imaging, and follow-up. Volumetric reconstructions of 3D DSA are a valuable tool for cerebral vessel imaging. They play a vital role in the assessment of intracranial aneurysms, especially in the evaluation of the aneurysm neck and aneurysm recanalization. PMID:22844309

  8. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load-bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer-aided design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capture, custom hardware, and control and image processing software to generate two types of image data: volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data were acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (computed tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at lower cost and with better initial fit compared to manually fabricated prostheses.

  9. Three-dimensional volumetric object reconstruction using computational integral imaging.

    PubMed

    Hong, Seung-Hyun; Jang, Ju-Seog; Javidi, Bahram

    2004-02-01

    We propose a three-dimensional (3D) imaging technique that can sense a 3D scene and computationally reconstruct it as a 3D volumetric image. Sensing of the 3D scene is carried out by optically obtaining elemental images using a pickup microlens array and a detector array. Reconstruction of the volume pixels of the scene is accomplished by computationally simulating optical reconstruction according to ray optics. All pixels of the recorded elemental images contribute to the volumetric reconstruction of the 3D scene. Image display planes at arbitrary distances from the display microlens array are computed and reconstructed by back-propagating the elemental images through a computer-synthesized pinhole array based on ray optics. We present experimental results of 3D image sensing and volume pixel reconstruction to test and verify the performance of the algorithm and the imaging system. The volume pixel values can be used for 3D image surface reconstruction.
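Under a simplified pinhole-array model, the back-propagation described above amounts to shifting each elemental image by a depth-dependent disparity and averaging the overlaps. A minimal sketch, with the geometry (uniform pitch, integer pixel shifts, no magnification) deliberately simplified and all parameter names illustrative:

```python
import numpy as np

def ciir_plane(elemental, pitch_px, depth, gap):
    """Reconstruct one depth plane from a grid of elemental images.

    elemental: array (rows, cols, h, w) of elemental images
    pitch_px:  lenslet pitch in pixels
    depth:     distance from the pinhole array to the reconstruction plane
    gap:       distance from the sensor to the pinhole array
    """
    rows, cols, h, w = elemental.shape
    # disparity between adjacent elemental images at this depth (ray optics)
    shift = pitch_px * depth / gap
    H = int(h + (rows - 1) * shift)
    W = int(w + (cols - 1) * shift)
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            y, x = int(r * shift), int(c * shift)
            acc[y:y + h, x:x + w] += elemental[r, c]
            cnt[y:y + h, x:x + w] += 1
    return acc / np.maximum(cnt, 1)   # average overlapping contributions
```

Objects located at the chosen depth add coherently across elemental images and appear sharp; objects at other depths blur out, which is what makes depth-by-depth volumetric reconstruction possible.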

  10. Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

    NASA Astrophysics Data System (ADS)

    Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.

    2014-12-01

    Automatic estimation of world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using the Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

  11. Average Cross-Sectional Area of DebriSat Fragments Using Volumetrically Constructed 3D Representations

    NASA Technical Reports Server (NTRS)

    Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.

    2016-01-01

    Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing), and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for
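The stated relation, that the average projected area equals one-fourth of the total surface area for a convex object (Cauchy's formula), is easy to check numerically. For an axis-aligned unit cube the projected area along a unit direction u is |u_x| + |u_y| + |u_z|, so a Monte Carlo average over random viewing directions should approach 6/4 = 1.5:

```python
import numpy as np

rng = np.random.default_rng(1)
# uniform random directions on the sphere
u = rng.normal(size=(200_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# projected area of the unit cube perpendicular to each direction
proj = np.abs(u).sum(axis=1)

avg_area = proj.mean()   # Monte Carlo average cross-sectional area
print(avg_area)          # ≈ 1.5 = (surface area 6) / 4
```

Approach A in the paper replaces the analytic projection above with projections of the reconstructed 3D point cloud, which also handles concave fragments where the 1/4 rule no longer holds.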

  12. The effect of volumetric (3D) tactile symbols within inclusive tactile maps.

    PubMed

    Gual, Jaume; Puyuelo, Marina; Lloveras, Joaquim

    2015-05-01

    Point, linear and areal elements, which are two-dimensional and of a graphic nature, are the morphological elements employed when designing tactile maps and symbols for visually impaired users. However, beyond the two-dimensional domain, there is a fourth group of elements - volumetric elements - which mapmakers do not take sufficiently into account when it comes to designing tactile maps and symbols. This study analyses the effect of including volumetric, or 3D, symbols within a tactile map. In order to do so, the researchers compared two tactile maps. One of them uses only two-dimensional elements and is produced using thermoforming, one of the most popular systems in this field, while the other includes volumetric symbols, thus highlighting the possibilities opened up by 3D printing, a new area of production. The results of the study show that including 3D symbols improves the efficiency and autonomous use of these products. PMID:25683526

  13. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  14. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  16. Combined elasticity and 3D imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Li, Yinbo; Hossack, John A.

    2005-04-01

    A method is described for repeatably assessing the elasticity and 3D extent of suspected prostate cancers. Elasticity is measured by controlled water inflation of a sheath placed over a modified transrectal ultrasound transducer. The benefit of using fluid inflation is that it should be possible to make repeatable, accurate measurements of elasticity, which are of interest in the serial assessment of prostate cancer progression or remission. The second aspect of the work uses auxiliary tracking arrays placed at each end of the central imaging array that allow the transducer to be rotated while simultaneously collecting 'tracking' information, thus allowing the position of successive image planes to be located with approximately 11% volumetric accuracy in 3D space. In this way, we present a technique for quantifying the volumetric extent of suspected cancer in addition to making measures of elastic anomalies.

  17. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  18. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  19. PSF engineering in multifocus microscopy for increased depth volumetric imaging.

    PubMed

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-03-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Applications in 3D single molecule localization-based super-resolution imaging are shown over an axial depth of 4 µm, as well as the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  20. PSF engineering in multifocus microscopy for increased depth volumetric imaging

    PubMed Central

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-01-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Applications in 3D single molecule localization-based super-resolution imaging are shown over an axial depth of 4 µm, as well as the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  1. Perception of detail in 3D images

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; Kaptein, Ronald

    2009-01-01

    Many current 3D displays have a lower spatial resolution than their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye views leads to blurring or ghosting, and therefore to a decrease in perceived sharpness. However, people watching stereoscopic videos have reported that the 3D scene contained more detail than the 2D scene with identical spatial resolution. This is an interesting notion that has never been tested in a systematic and quantitative way. To investigate this effect, we had people compare the amount of detail ("detailedness") in pairs of 2D and 3D images. A blur filter was applied to one of the two images, and the blur level was varied using an adaptive staircase procedure. In this way, the blur threshold at which the 2D and 3D images contained perceptually the same amount of detail could be found. Our results show that the 3D image needed to be blurred more than the 2D image. This confirms the earlier qualitative findings that 3D images contain perceptually more detail than 2D images with the same spatial resolution.
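An adaptive staircase of the kind mentioned above can be sketched as a simple one-up/one-down procedure driven by a simulated observer; the response model and all parameters here are illustrative, not those of the study:

```python
import numpy as np

def staircase(true_threshold, step=0.2, trials=200, seed=0):
    """1-up/1-down staircase: step the blur level up after 'equally detailed'
    responses and down after 'less detailed' responses; the reversal levels
    cluster around the 50% response point."""
    rng = np.random.default_rng(seed)
    level, reversals, last_dir = 1.0, [], 0
    for _ in range(trials):
        # simulated observer: reports 'less detail' more often above threshold
        p_less = 1 / (1 + np.exp(-(level - true_threshold) / 0.1))
        new_dir = -1 if rng.random() < p_less else +1
        if last_dir and new_dir != last_dir:
            reversals.append(level)        # direction change: record a reversal
        last_dir = new_dir
        level = max(0.0, level + new_dir * step)
    return np.mean(reversals[-8:])         # threshold estimate
```

The average of the last few reversal levels estimates the blur level at which the two responses are equally likely, i.e. the point of perceptual equality.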

  2. Current issues on 3D volumetric positioning accuracy: measurement, compensation, and definition

    NASA Astrophysics Data System (ADS)

    Wang, C.

    2008-10-01

    Traditionally, manufacturers have ensured part accuracy by linear calibration of each machine tool axis. The conventional definition of the 3D volumetric positioning error is the root mean square of the three-axis displacement errors. Twenty years ago, the dominant errors were the lead-screw pitch errors of the three axes, and this definition was adequate. Now, however, machine accuracy has been improved with better lead screws, linear encoders and compensation, and the dominant errors have become the squareness and straightness errors; hence the above definition is inadequate. In recent years, the industry has seen demand emerge for a "volumetric accuracy" specification on machine tools. One hurdle remains: a standard definition, so that everyone measures volumetric accuracy with the same yardstick. The issue has been discussed in many standards committees, by machine tool builders and by the metrology community. Reported here are a new 3D volumetric positioning error measurement and compensation technique, proposed definitions or measures of the 3D volumetric positioning errors of a CNC machine tool, and their verification.
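The conventional definition cited above combines the three axis displacement errors into a single figure; a minimal sketch, reading it as the root of the sum of the squared axis errors:

```python
import math

def volumetric_error(ex, ey, ez):
    """Conventional 3D volumetric positioning error: combine the three
    per-axis displacement errors into one root-sum-of-squares figure."""
    return math.sqrt(ex**2 + ey**2 + ez**2)

# e.g. axis errors of 3, 4 and 12 micrometres combine to exactly 13 um
```

As the abstract notes, this figure ignores squareness and straightness contributions, which is precisely why it is argued to be inadequate for modern machines.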

  3. Laboratory 3D Micro-XRF/Micro-CT Imaging System

    NASA Astrophysics Data System (ADS)

    Bruyndonckx, P.; Sasov, A.; Liu, X.

    2011-09-01

    A prototype micro-XRF laboratory system based on pinhole imaging was developed to produce 3D elemental maps. The fluorescence x-rays are detected by a deep-depleted CCD camera operating in photon-counting mode. A charge-clustering algorithm, together with dynamically adjusted exposure times, ensures a correct energy measurement. The XRF component has a spatial resolution of 70 μm and an energy resolution of 180 eV at 6.4 keV. The system is augmented by a micro-CT imaging modality. This is used for attenuation correction of the XRF images and to co-register features in the 3D XRF images with morphological structures visible in the volumetric CT images of the object.

  4. Evaluation of 3D imaging.

    PubMed

    Vannier, M W

    2000-10-01

    Interactive computer-based simulation is gaining acceptance for craniofacial surgical planning. Subjective visualization without objective measurement capability, however, severely limits the value of simulation, since spatial accuracy must be maintained. This study investigated the error sources involved in one method of surgical simulation evaluation. Linear and angular measurement errors were found to be within +/- 1 mm and 1 degree. Surface matching of scanned objects was slightly less accurate, with errors up to 3 voxels and 4 degrees, and Boolean subtraction methods were 93 to 99% accurate. Once validated, these testing methods were applied to objectively compare craniofacial surgical simulations to post-operative outcomes, and verified that the form of simulation used in this study yields accurate depictions of surgical outcome. However, to fully evaluate surgical simulation, future work is still required to test the new methods in sufficient numbers of patients to achieve statistically significant results. Once completely validated, simulation can be used not only in pre-operative surgical planning, but also as a post-operative descriptor of surgical and traumatic physical changes. Validated image comparison methods can also show discrepancies between surgical outcome and surgical plan, thus allowing evaluation of surgical technique. PMID:11098409

  5. Integral volumetric imaging using decentered elemental lenses.

    PubMed

    Sawada, Shimpei; Kakeya, Hideki

    2012-11-01

    This paper proposes a high-resolution integral imaging system using a lens array composed of non-uniform decentered elemental lenses. One of the problems of integral imaging is the trade-off between resolution and the number of views. When the number of views is small, motion parallax becomes strongly discrete if the viewing angle is to be maintained. In order to overcome this trade-off, the proposed method uses elemental lenses that are smaller than the elemental images. To keep the images generated by the elemental lenses at a constant depth, the lens array is designed so that the optical centers of the elemental lenses are located at the centers of the elemental images, not at the centers of the elemental lenses. To compensate for optical distortion, a new image-rendering algorithm is developed so that an undistorted 3D image can be presented with a non-uniform lens array. The proposed lens array design can be applied to integral volumetric imaging, where display panels are layered to show volumetric images in the integral imaging scheme.
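The decentering rule described above, placing each elemental lens's optical center at the center of its (larger) elemental image rather than at the lens's own geometric center, fixes the decentration of each lens in the array. A sketch of the resulting offsets for a hypothetical one-dimensional array (function name and pitches are illustrative, not from the paper):

```python
def lens_decenter(i, n, lens_pitch, image_pitch):
    """Offset of lens i's optical center from its geometric center when the
    optical centers are placed on the (coarser) elemental-image grid.
    The n lenses and n elemental images share the same array midpoint."""
    mid = (n - 1) / 2
    lens_center = (i - mid) * lens_pitch      # geometric center of lens i
    image_center = (i - mid) * image_pitch    # center of elemental image i
    return image_center - lens_center
```

Because the offset scales with the distance from the array midpoint, the decentration is zero at the center and grows linearly toward the edges, which is why the array is necessarily non-uniform.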

  6. An interface for precise and comfortable 3D work with volumetric medical datasets.

    PubMed

    Serra, L; Hern, N; Guan, C G; Lee, E; Lee, Y H; Yeo, T T; Chan, C; Kockro, R A

    1999-01-01

    We have developed a 3D/2D paradigm of interaction that combines manipulation of precise 3D volumetric data with unambiguous widget interaction. Precise 3D interaction is ensured by a combination of resting the lower arms on an armrest and pivoting the hands around the wrist. Unambiguous 2D interaction is achieved by providing passive haptic feedback by means of a virtual control panel whose position coincides with the physical surfaces encasing the system. We have tested this interface with a neurosurgical planning application that has been clinically used for 17 skull-base cases at two local hospitals. PMID:10538381

  8. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still-photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest has been generated in capturing three-dimensional motion-picture scenes. In this paper, we present a test-bench integral-imaging camera system aiming to tailor the methods of light-field imaging toward capturing integral 3D motion-picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The steps involved in calibrating the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  9. Volumetric Echocardiographic Particle Image Velocimetry (V-Echo-PIV)

    NASA Astrophysics Data System (ADS)

    Falahatpisheh, Ahmad; Kheradvar, Arash

    2015-11-01

    Measurement of the 3D flow field inside the cardiac chambers has proven to be a challenging task. Current laser-based 3D PIV methods estimate the third component of the velocity rather than measuring it directly, and they cannot be used to image the opaque heart chambers. Modern echocardiography systems are equipped with 3D probes that enable imaging of the entire opaque 3D field, but this feature has not yet been employed for 3D vector characterization of blood flow. For the first time, we introduce a method that generates the velocity vector field in 4D from volumetric echocardiographic images. By assuming the conservation of brightness in 3D, blood speckles are tracked. A hierarchical 3D PIV method is used to account for large particle displacements. The discretized brightness-transport equation is solved in a least-squares sense in interrogation windows of 16³ voxels. We successfully validate the method on analytical and experimental cases. Volumetric echo data of a left ventricle are then processed in the systolic phase, and the expected velocity fields are successfully recovered by V-Echo-PIV. In this work, we demonstrate a method to image blood flow in 3D from volumetric images of the human heart without any contrast agent.
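
    As a minimal sketch of the least-squares step described above, the discretized brightness-transport (optical-flow) equation can be solved for a single interrogation window with NumPy; this is a one-window, Lucas-Kanade-style illustration on synthetic data, not the paper's hierarchical echo pipeline, and all names are illustrative:

```python
import numpy as np

def window_velocity_3d(I0, I1):
    """Estimate one 3D velocity vector for an interrogation window by solving
    the discretized brightness-transport equation Gz*wz + Gy*wy + Gx*wx = -It
    in a least-squares sense (single-window, Lucas-Kanade-style sketch)."""
    Gz, Gy, Gx = np.gradient(0.5 * (I0 + I1))   # spatial gradients
    It = I1 - I0                                # temporal difference
    A = np.stack([Gz.ravel(), Gy.ravel(), Gx.ravel()], axis=1)
    vel, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return vel  # (wz, wy, wx) in voxels per frame

# Synthetic window: a smooth 3D intensity field shifted by one voxel along x.
z, y, x = np.meshgrid(*(np.arange(16),) * 3, indexing="ij")
I0 = np.sin(0.3 * x) + np.cos(0.2 * y) + 0.1 * z
I1 = np.sin(0.3 * (x - 1)) + np.cos(0.2 * y) + 0.1 * z
vel = window_velocity_3d(I0, I1)
```

    The recovered vector is close to (0, 0, 1) voxels per frame, i.e. the imposed unit shift along x; in the paper, such estimates are refined hierarchically to handle large displacements.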

  10. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics, and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented, stereo vision and depth-from-focus, and their applications are described.

  11. Extending a teleradiology system by tools for 3D-visualization and volumetric analysis through a plug-in mechanism.

    PubMed

    Evers, H; Mayer, A; Engelmann, U; Schröter, A; Baur, U; Wolsiffer, K; Meinzer, H P

    1998-01-01

    This paper describes ongoing research on interactive volume visualization coupled with tools for volumetric analysis. To create an easy-to-use application, the 3D visualization has been embedded in a state-of-the-art teleradiology system, where additional functionality beyond basic image transfer and management is often desired. The tools cover the major clinical requirements for deriving spatial measures, in order to support extended diagnosis and therapy planning. Introducing the general plug-in mechanism, this work exemplifies the useful extension of an approved application. Interactive visualization was achieved by a hybrid approach that takes advantage of both the precise volume visualization of the Heidelberg Raytracing Model and the graphics acceleration of modern workstations. Several tools for volumetric analysis extend the 3D viewing: they offer 3D pointing devices to select locations in the data volume, measure anatomical structures, or control segmentation processes. A haptic interface provides realistic perception while navigating within the 3D reconstruction. The work is closely related to research in heart, liver, and head surgery. In cooperation with our medical partners, the development of the tools presented here advances the integration of image analysis into clinical routine. PMID:10384617

  12. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with dimensions of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot. With the light-field raw data and software, the focal plane can be changed digitally and the 3-D image reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data-analysis algorithm that precisely distinguishes depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light-field microscope (HLFM) that distinguishes two fluorescence particles of different colors separated by a cover glass over a 600 µm range, and we show its focal stacks and 3-D positions.

  13. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light-field imaging under structured illumination for high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; the resulting structured light field is then detected using light-field recording devices. The structured light field contains information about ray direction and phase-encoded depth, from which the scene depth can be estimated along different directions. This multidirectional depth estimation enables effective high-dynamic-range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrate the validity of the proposed method for high-quality 3D imaging of both highly and lowly reflective surfaces. PMID:27607639
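
    The ray-wise calibration idea can be sketched by fitting an independent phase-to-depth mapping for every ray; here a linear model z = a·φ + b, the array sizes, and all variable names are assumptions for illustration only (the paper derives its own phase-depth mapping):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 4, 5, 6                          # rays (pixels) and reference planes
a_true = 2.0 + 0.1 * rng.random((H, W))    # per-ray slope (assumed model)
b_true = 30.0 + rng.random((H, W))         # per-ray offset
z_ref = np.linspace(100.0, 200.0, K)       # depths of the calibration planes
phi = (z_ref[:, None, None] - b_true) / a_true  # phase seen by each ray per plane

# Per-ray least-squares fit of z = a*phi + b from the K calibration planes.
X = np.stack([phi, np.ones_like(phi)], axis=-1)  # design matrices, (K, H, W, 2)
coef = np.empty((H, W, 2))
for i in range(H):
    for j in range(W):
        coef[i, j], *_ = np.linalg.lstsq(X[:, i, j, :], z_ref, rcond=None)

depth_est = coef[..., 0] * phi[0] + coef[..., 1]  # recover the first plane
```

    With noiseless synthetic data the fitted per-ray coefficients match the generating model, and the first calibration plane is recovered at its known depth.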

  14. The Space {B^{-1}_{∞, ∞}} , Volumetric Sparseness, and 3D NSE

    NASA Astrophysics Data System (ADS)

    Farhat, Aseel; Grujić, Zoran; Leitmeyer, Keith

    2016-09-01

    In the context of the {L^∞} -theory of the 3D NSE, it is shown that smallness of a solution in Besov space {B^{-1}_{∞, ∞}} suffices to prevent a possible blow-up. In particular, it is revealed that the aforementioned condition implies a particular local spatial structure of the regions of high velocity magnitude, namely, the structure of local volumetric sparseness on the scale comparable to the radius of spatial analyticity measured in {L^∞}.

  15. Realization of an aerial 3D image that occludes the background scenery.

    PubMed

    Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

    2014-10-01

    In this paper we describe an aerial 3D image that occludes far background scenery, based on coarse integral volumetric imaging (CIVI) technology. Many volumetric display devices can present floating 3D images, but most of them do not reproduce visual occlusion. CIVI is a kind of multilayered integral imaging that realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. Conventional CIVI, however, cannot show a deep space, because the number of layered panels is limited by the low transmittance of each panel. To overcome this problem, we propose a novel optical design that attains an aerial 3D image occluding far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image, and the system alternates between an aerial-image mode and a background-image mode. In the aerial-image mode, the elemental images are shown on the CIVI display while the inserted translucent display is uniformly translucent. In the background-image mode, black shadows of the elemental images on a white background are shown on the CIVI display while the background scenery is displayed on the inserted translucent panel. By alternating these two modes at 120 Hz, an aerial 3D image that visually occludes the far background scenery is perceived by the viewer.

  16. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of the hyperspectral data at a fidelity commensurate with the received data volume. The ICER-3D software provides either lossless or lossy compression and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, derived from the ICER image-compression algorithm, includes wavelet-transform, context-modeling, and entropy-coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral image data while facilitating the elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm that uses spectral dependencies in the wavelet-transformed data, summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is its scheme for limiting the adverse effects of data loss during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions; in ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from one partition does not affect the others. Furthermore, because compression is progressive within each partition, any data from a partition received before a loss can still be used to reconstruct that partition at reduced fidelity.
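
    The core idea of a separable 3D wavelet decomposition that decorrelates all three dimensions can be sketched with a one-level Haar transform; ICER-3D uses its own wavelet filters, progressive coding, and error-containment partitions, so this stands in only for the transform structure:

```python
import numpy as np

def haar1d(a, axis):
    """One level of an unnormalized Haar transform along one axis:
    pairwise averages (low-pass) followed by pairwise differences (high-pass)."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def ihaar1d(a, axis):
    """Invert haar1d: interleave lo+hi and lo-hi to recover the samples."""
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0] // 2
    lo, hi = a[:n], a[n:]
    out = np.empty_like(a)
    out[0::2] = lo + hi
    out[1::2] = lo - hi
    return np.moveaxis(out, 0, axis)

def haar3d(cube):
    """Separable 3D transform: the two spatial axes and the spectral axis
    are each decorrelated in turn, as in a 3D wavelet decomposition."""
    for ax in range(3):
        cube = haar1d(cube, ax)
    return cube

rng = np.random.default_rng(1)
cube = rng.random((8, 8, 8))
coeffs = haar3d(cube)
rec = coeffs
for ax in (2, 1, 0):          # invert the passes in reverse order
    rec = ihaar1d(rec, ax)
```

    The transform is exactly invertible, which is what lets a coder quantize and entropy-code the coefficients instead of the raw samples.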

  17. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré-fringe method and its analysis, with applications up to medicine and entertainment, are discussed in this paper. We describe the procedure for capturing 3D images with an Inspeck camera, a real-time, high-resolution 3D shape-acquisition system based on structured-light techniques. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  18. An inverse hyper-spherical harmonics-based formulation for reconstructing 3D volumetric lung deformations

    NASA Astrophysics Data System (ADS)

    Santhanam, Anand P.; Min, Yugang; Mudur, Sudhir P.; Rastogi, Abhinav; Ruddy, Bari H.; Shah, Amish; Divo, Eduardo; Kassab, Alain; Rolland, Jannick P.; Kupelian, Patrick

    2010-07-01

    A method to estimate the deformation operator for the 3D volumetric lung dynamics of human subjects is described in this paper. For known values of airflow and volumetric displacement, the deformation operator, and subsequently the elastic properties of the lung, are estimated in terms of a Green's function. A hyper-spherical harmonic (HSH) transformation is employed to compute the deformation operator. The hyper-spherical coordinate transformation discussed in this paper facilitates accounting for the heterogeneity of the deformation operator using a finite number of frequency coefficients. Spirometry measurements provide values for the airflow inside the lung. Using a 3D optical-flow-based method, the 3D volumetric displacement of the left and right lungs, which represents the local anatomy and deformation of a human subject, was estimated from a 4D-CT dataset. Results from an implementation of the method show the estimated deformation operator for the left and right lungs of a human subject with non-small-cell lung cancer. Validation of the proposed method shows that the Young's modulus of each voxel can be estimated to within a 2% error level.

  19. Parallel implementation of 3D FFT with volumetric decomposition schemes for efficient molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Imamura, Toshiyuki; Sugita, Yuji

    2016-03-01

    Three-dimensional Fast Fourier Transform (3D FFT) plays an important role in a wide variety of computer simulations and data analyses, including molecular dynamics (MD) simulations. In this study, we develop hybrid (MPI+OpenMP) parallelization schemes for 3D FFT based on two new volumetric decompositions, mainly for the particle-mesh Ewald (PME) calculation in MD simulations. In one scheme (1d_Alltoall), five all-to-all communications in one dimension are carried out; in the other (2d_Alltoall), one two-dimensional all-to-all communication is combined with two all-to-all communications in one dimension. 2d_Alltoall is similar to the conventional volumetric decomposition scheme. We performed benchmark tests of 3D FFT for systems with different grid sizes using a large number of processors on the K computer at RIKEN AICS. The two schemes show comparable performance and outperform existing 3D FFT implementations. The performance of 1d_Alltoall and 2d_Alltoall depends on the supercomputer's network and on the number of processors in each dimension, leaving enough leeway for users to optimize performance for their conditions. In the PME method, short-range real-space interactions as well as long-range reciprocal-space interactions are calculated. Our volumetric decomposition schemes are particularly useful in conjunction with the recently developed midpoint cell method for short-range interactions, because the real and reciprocal spaces share the same decomposition. The 1d_Alltoall scheme of 3D FFT takes 4.7 ms to simulate one MD cycle for a virus system containing more than 1 million atoms using 32,768 cores on the K computer.
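
    The structural idea behind such volumetric decompositions, building the 3D FFT from 1D passes with a data redistribution between them, can be illustrated on a single process with NumPy (the MPI all-to-all exchanges are only indicated in comments; the function name is illustrative):

```python
import numpy as np

def fft3d_by_pencils(x):
    """Build the 3D FFT from 1D FFT passes, one axis at a time. In the
    parallel schemes, a data redistribution (all-to-all communication)
    occurs between passes so that each rank always holds complete pencils
    along the axis being transformed; on one process it is implicit."""
    for axis in range(3):
        x = np.fft.fft(x, axis=axis)
    return x

rng = np.random.default_rng(2)
vol = rng.random((8, 8, 8))
out = fft3d_by_pencils(vol)
ref = np.fft.fftn(vol)      # reference: library 3D FFT
```

    Because the multidimensional DFT is separable, the pass-by-pass result matches the direct 3D FFT; the engineering challenge in the paper is purely in how the intermediate redistributions are communicated.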

  20. Dynamic 3D computed tomography scanner for vascular imaging

    NASA Astrophysics Data System (ADS)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner has high spatial resolution and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and it enables measurement of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution computed tomographic system based on a modified x-ray image intensifier (XRII) and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation showed that tomographic images can be obtained with a resolution as high as 3.2 mm⁻¹, with only a 9% decrease in resolution for objects moving at 1 cm/s in 2D mode, and a static spatial resolution of 3.55 mm⁻¹, with only a 14% decrease in resolution in 3D mode for objects moving at 10 cm/s. Application of the system to imaging intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurement of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of Edyn in the axial and longitudinal directions produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, respectively, demonstrating the anisotropic viscoelastic nature of the vascular specimens. The values obtained with the dynamic CT system were not statistically different (p less than 0.05) from those obtained by standard uniaxial tensile testing and volumetric measurements.

  1. Automating Shallow 3D Seismic Imaging

    SciTech Connect

    Steeples, Don; Tsoflias, George

    2009-01-15

    Our efforts since 1997 have been directed toward developing ultra-shallow seismic imaging as a cost-effective method applicable to DOE facilities. This report covers the final year of grant-funded research to refine 3D shallow seismic imaging, which built on a previous 7-year grant (FG07-97ER14826) that refined and demonstrated the use of an automated method of conducting shallow seismic surveys; this represents a significant departure from conventional seismic-survey field procedures. The primary objective of this final project was to develop an automated three-dimensional (3D) shallow-seismic reflection imaging capability. This is a natural progression from our previous published work and is conceptually parallel to the innovative imaging methods used in the petroleum industry.

  2. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  3. Volumetric CT-based segmentation of NSCLC using 3D-Slicer.

    PubMed

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H; van Baardwijk, Angela; Fennessy, Fiona M; Lewis, John H; De Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J W L

    2013-01-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using a competitive region-growing algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared 3D-Slicer segmentations produced by three independent observers, each of whom segmented the primary tumour of 20 NSCLC patients twice, with manual slice-by-slice delineations by five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered the "gold standard". The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003), and smaller uncertainty areas (p = 0.0002) compared to the manual slice-by-slice delineations. Furthermore, the 3D-Slicer segmentations showed a strong correlation with pathology (r = 0.89, 95% CI 0.81-0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data-mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck. PMID:24346241
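
    Plain region growing, the family of algorithms the semiautomatic segmentation builds on, can be sketched as a flood fill from a seed; 3D-Slicer's competitive algorithm is multi-label and considerably more elaborate, and this 2D, single-label toy on made-up data only illustrates the principle:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Intensity-based region growing from one seed: a pixel joins the
    region if it is 4-connected to it and within `tol` of the seed
    intensity (single-label flood-fill sketch)."""
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    ref = img[seed]
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and not mask[ny, nx] \
                    and abs(img[ny, nx] - ref) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# A bright 7x8 rectangular "tumour" on a dark background.
img = np.zeros((20, 20))
img[5:12, 6:14] = 1.0
mask = region_grow(img, (8, 9), tol=0.5)
```

    From a single seed click inside the lesion, the grown mask covers exactly the bright region, which is the behaviour an observer then reviews and corrects.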

  4. Wave-CAIPI for Highly Accelerated 3D Imaging

    PubMed Central

    Bilgic, Berkin; Gagoski, Borjan A.; Cauley, Stephen F.; Fan, Audrey P.; Polimeni, Jonathan R.; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To introduce the Wave-CAIPI (Controlled Aliasing in Parallel Imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. Methods The Wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line, while modifying the 3D phase encoding strategy to incur inter-slice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high quality image reconstruction with Wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and Quantitative Susceptibility Mapping (QSM). Results Wave-CAIPI enables full-brain gradient echo (GRE) acquisition at 1 mm isotropic voxel size and R=3×3 acceleration with maximum g-factors of 1.08 at 3T, and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies 2D-CAIPI and Bunched Phase Encoding, Wave-CAIPI yields up to 2-fold reduction in maximum g-factor for 9-fold acceleration at both field strengths. Conclusion Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. PMID:24986223

  5. Floating volumetric image formation using a dihedral corner reflector array device.

    PubMed

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuki; Yamamoto, Siori; Mukai, Takaaki; Maekawa, Satoshi

    2013-01-01

    A volumetric display system is proposed that uses an optical imaging device consisting of numerous dihedral corner reflectors placed perpendicular to the surface of a metal plate. Image formation by the dihedral corner reflector array (DCRA) is free of distortion and involves no focal length. In the proposed volumetric display system, a two-dimensional real image is moved by a mirror scanner to scan a three-dimensional (3D) space, and cross-sectional images of a 3D object are displayed in accordance with the position of the image plane. A volumetric image is then observed as a stack of the cross-sectional images. The use of the DCRA yields a compact system configuration and volumetric real-image generation with very low distortion. An experimental volumetric display system comprising a DCRA, a galvanometer mirror, and a digital micromirror device was constructed to verify the proposed method; it formed a volumetric image consisting of 1024×768×400 voxels.

  6. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various, non-uniform ways owing to the lack of internationally recognized standard metrological parameters that characterize a device's capability to capture a real scene. For this reason, several national and international organizations have been developing protocols for verifying such performance over the last ten years. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to mechanical 3D measurements (triangulation-based devices), to the work of ASTM technical committee E57, which also covers laser systems based on direct range detection (TOF, phase-shift, FM-CW, flash LADAR), this paper surveys the state of the art in characterizing active range devices, with special emphasis on measurement uncertainty, accuracy, and resolution. Most of these protocols are based on special objects whose shape and size are certified to a known level of accuracy. By capturing the 3D shape of such an object with a range device, the measured points can be compared with the theoretical shape they should represent. The deviations can be analyzed directly, or derived parameters can be computed (e.g., angles between planes, distances between the barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.

  7. Real-time 3D visualization of volumetric video motion sensor data

    SciTech Connect

    Carlson, J.; Stansfield, S.; Shawver, D.; Flachs, G.M.; Jordan, J.B.; Bao, Z.

    1996-11-01

    This paper addresses the problem of improving the detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume; the 3D information is obtained by fusing motion-detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site, with VVMD data transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and the sensor environment is a physical 3D space, it is natural to display this information in 3D. The 3D graphical representation also depicts essential details within and around the protected volume in a way that is natural for human perception. Sensor information can be more easily interpreted when the operator can "move" through the virtual environment and explore the relationships between the sensor data, objects, and other visual cues present there. By exploiting the powerful human ability to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper details both the VVMD and VR technologies and discusses a prototype system based on their integration.

  8. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration, which can occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem and could enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after each new slice becomes available. We demonstrate some basic types of user interaction with the rendering during imaging at rates of up to 20 frames per second.
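
    The incremental-update idea, refreshing a rendering as soon as each new slice arrives rather than waiting for a complete volume, can be sketched with a toy maximum-intensity projection; the actual system's renderer is more sophisticated, and all names and data here are illustrative:

```python
import numpy as np

# Toy incremental renderer: a maximum-intensity projection (MIP) of the
# volume is refreshed as each new 2D slice arrives from the scanner.
H, W, n_slices = 32, 32, 10
rng = np.random.default_rng(4)
mip = np.zeros((H, W))
for k in range(n_slices):
    new_slice = rng.random((H, W))        # stand-in for a freshly acquired slice
    np.maximum(mip, new_slice, out=mip)   # update the rendering immediately
```

    Each per-slice update is O(H·W), so the displayed projection stays current at the acquisition rate instead of stalling until the whole multi-slice volume is in.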

  9. Compression of medical volumetric datasets: physical and psychovisual performance comparison of the emerging JP3D standard and JPEG2000

    NASA Astrophysics Data System (ADS)

    Kimpe, T.; Bruylants, T.; Sneyders, Y.; Deklerck, R.; Schelkens, P.

    2007-03-01

    The size of medical data has increased significantly over the last few years. This poses severe problems for the rapid transmission of medical data across the hospital network, resulting in longer access times to the images; long-term storage of the data is also becoming more and more of a problem. To cope with the increasing data size, lossless or lossy compression algorithms are often used. This paper compares the existing JPEG2000 compression algorithm with the new, emerging JP3D standard for compression of volumetric datasets. The main benefit of JP3D is that it is a truly 3D compression algorithm, exploiting correlation not only within but also between the slices of a dataset. We evaluate both the lossless and the lossy modes of these algorithms. As a first step we perform an objective evaluation: using RMSE and PSNR metrics we determine which compression algorithm performs best, for multiple compression ratios and for several clinically relevant medical datasets. Because RMSE and PSNR are well known to correlate poorly with subjectively perceived image quality, we also perform a psychovisual analysis by means of a numerical observer model, analyzing how compression artifacts are actually perceived by a human observer. Results show superior performance of the new JP3D algorithm compared to the existing JPEG2000 algorithm.
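
    The objective metrics used in the first evaluation step can be computed directly; a minimal sketch for 8-bit-style image data, on synthetic arrays with illustrative names:

```python
import numpy as np

def rmse(ref, test):
    """Root-mean-square error between a reference and a test image."""
    return float(np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for the given peak signal value."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = ref + rng.normal(0, 5, size=ref.shape)   # stand-in for compression error
```

    A lossless round trip gives infinite PSNR, while additive error of standard deviation 5 on a 255-peak image lands near 20·log10(255/5) ≈ 34 dB, which is the kind of figure such comparisons tabulate per compression ratio.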

  10. A volumetric sensor for real-time 3D mapping and robot navigation

    NASA Astrophysics Data System (ADS)

    Fournier, Jonathan; Ricard, Benoit; Laurendeau, Denis

    2006-05-01

    The use of robots for (semi-)autonomous operations in complex terrains such as urban environments poses difficult mobility, mapping, and perception challenges. To be able to work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has developed over the past years a compact sensor that combines a wide-baseline stereo camera and a laser scanner with a full 360-degree azimuth and 55-degree elevation field of view, allowing the robot to view and manage overhanging obstacles as well as obstacles at ground level. Sensing in 3D is common, but to navigate and work efficiently in complex terrain, the robot should also perceive, decide, and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance, and efficient frontier-based exploration. This paper describes the volumetric sensor concept and its design features, and presents an overview of the 3D software framework that preserves 3D information through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
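    The ray-traced occupancy update at the heart of such a 3D model can be sketched as follows. This is a simplified illustration, not DRDC's implementation: a dense log-odds voxel array stands in for the multiresolution octree, and the increment constants are assumed values chosen for the example:

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85  # illustrative log-odds increments

def update_ray(grid, origin, hit, step=1.0):
    """Trace a ray from `origin` to `hit` (voxel coordinates) and update
    log-odds occupancy: cells along the ray become more 'free', the end
    cell more 'occupied'. A dense array stands in for an octree here."""
    origin, hit = np.asarray(origin, float), np.asarray(hit, float)
    direction = hit - origin
    length = np.linalg.norm(direction)
    n = max(int(length / step), 1)
    for t in np.linspace(0.0, 1.0, n, endpoint=False):
        i, j, k = np.floor(origin + t * direction).astype(int)
        grid[i, j, k] += L_FREE   # cell traversed by the ray: likely free
    i, j, k = np.floor(hit).astype(int)
    grid[i, j, k] += L_OCC        # cell containing the range return: occupied

grid = np.zeros((32, 32, 32))
update_ray(grid, origin=(0, 0, 0), hit=(20, 10, 5))
print((grid > 0).sum(), (grid < 0).sum())  # occupied vs. free cells touched
```

    Repeating this update over many laser returns accumulates evidence, after which thresholding the log-odds grid yields the occupied/free classification used for the 2.5D navigation map.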

  11. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  12. Teat Morphology Characterization With 3D Imaging.

    PubMed

    Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

    2015-07-01

    The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples from 65 dairy cows over 12 months of age were obtained from cows slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland, and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat, consisting of the papillary duct, Fürstenberg's rosette, and the distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology. A number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately and with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may affect the teat canal and the external surface of the teat. PMID:25382725

  13. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  14. Composite model of a 3-D image

    NASA Technical Reports Server (NTRS)

    Dukhovich, I. J.

    1980-01-01

    This paper presents a composite model of a moving (3-D) image especially useful for the sequential image processing and encoding. A non-linear predictor based on the composite model is described. The performance of this predictor is used as a measure of the validity of the model for a real image source. The minimization of a total mean square prediction error provides an inequality which determines a condition for the profitable use of the composite model and can serve as a decision device for the selection of the number of subsources within the model. The paper also describes statistical properties of the prediction error and contains results of computer simulation of two non-linear predictors in the case of perfect classification between subsources.

  15. [3D interactive clipping technology in medical image processing].

    PubMed

    Sun, Shaoping; Yang, Kaitai; Li, Bin; Li, Yuanjun; Liang, Jing

    2013-09-01

    The aim of this paper is to study methods for 3D visualization and 3D interactive clipping of CT/MRI image sequences in arbitrary orientations based on the Visualization Toolkit (VTK). A new method for clipping 3D reconstructed CT/MRI images is presented, which can clip the 3D object and the 3D space of a medical image sequence to observe the inner structure, using a 3D widget to manipulate an infinite plane. Experimental results show that the proposed method implements 3D interactive clipping of medical images effectively and obtains satisfactory results of good quality in a short time.

  16. Inverse modeling of InSAR and ground leveling data for 3D volumetric strain distribution

    NASA Astrophysics Data System (ADS)

    Gallardo, L. A.; Glowacka, E.; Sarychikhina, O.

    2015-12-01

    The wide availability of modern Interferometric Synthetic Aperture Radar (InSAR) data has made possible the extensive observation of differential surface displacements, which is becoming an efficient tool for the detailed monitoring of terrain subsidence associated with reservoir dynamics, volcanic deformation, and active tectonism. Unfortunately, this increasing popularity has not been matched by the availability of automated codes to estimate underground deformation, since many still rely on trial-and-error subsurface model building strategies. We posit that an efficient algorithm for the volumetric modeling of differential surface displacements should match the availability of current leveling and InSAR data, and have developed an algorithm for the joint 3D inversion of ground leveling and dInSAR data. We assume the ground displacements originate from a stress-free volume strain distribution in a homogeneous elastic medium and determine the displacement field associated with an ensemble of rectangular prisms. This formulation is then used to develop a 3D conjugate gradient inversion code that searches for the three-dimensional distribution of volumetric strains that predicts the InSAR and leveling surface displacements simultaneously. The algorithm is regularized by applying discontinuous first-order and zero-order Tikhonov constraints. For efficiency, the resulting computational code takes advantage of the convolution integral associated with the deformation field and basic multithreading parallelization. We extensively test our algorithm on leveling and InSAR test and field data from northwestern Mexico and compare the results with feasible geological scenarios of underground deformation.
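    A Tikhonov-regularized conjugate-gradient inversion of this general kind can be sketched in matrix-free form. This is a generic illustration, not the authors' code: the forward operator relating prism strains to surface displacements is replaced by a hypothetical random matrix `G`, and only zero-order damping is shown:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_data, n_model = 60, 40  # e.g. leveling/InSAR observations vs. strain prisms
G = rng.standard_normal((n_data, n_model))  # hypothetical forward operator
m_true = np.zeros(n_model)
m_true[10:15] = 1.0                          # a compact "strain" anomaly
d = G @ m_true + 0.01 * rng.standard_normal(n_data)  # noisy observed data

lam = 1e-2  # zero-order Tikhonov damping weight

def normal_op(m):
    # (G^T G + lam I) m, applied matrix-free as a CG inversion would
    return G.T @ (G @ m) + lam * m

A = LinearOperator((n_model, n_model), matvec=normal_op, dtype=np.float64)
m_est, info = cg(A, G.T @ d)  # solve the damped normal equations
print(info, np.linalg.norm(m_est - m_true))
```

    The real problem adds first-order (smoothness) constraints and exploits the convolution structure of the prism Green's functions; the matrix-free `matvec` is where that fast convolution would go.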

  17. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music, and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds, or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of the images and vice versa. Sounds can be associated and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to four separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and varied content has been developed and shown to live audiences by a live performer. Real-world applications will be explored, with feedback on the human factors.

  18. Volumetric display system based on three-dimensional scanning of inclined optical image.

    PubMed

    Miyazaki, Daisuke; Shiba, Kensuke; Sotsuka, Koji; Matsushita, Kenji

    2006-12-25

    A volumetric display system based on three-dimensional (3D) scanning of an inclined image is reported. An optical image of a two-dimensional (2D) display, which is a vector-scan display monitor placed obliquely in an optical imaging system, is moved laterally by a galvanometric mirror scanner. Inclined cross-sectional images of a 3D object are displayed on the 2D display in accordance with the position of the image plane to form a 3D image. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision because they are real images formed in a 3D space. Experimental results of volumetric imaging from computed-tomography images and 3D animated images are presented.

  19. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  20. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only to scientists but also to amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast, and color, recall the stage of the invention of photography.

  1. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  2. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  3. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting substantial research effort. Ideally, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems: the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  4. Visualization and volumetric structures from MR images of the brain

    SciTech Connect

    Parvin, B.; Johnston, W.; Robertson, D.

    1994-03-01

    Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from magnetic resonance imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. These regions are obtained by grouping pixels based on similarity and proximity. The slice level attributed graphs are then coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, the 3D surfaces of the brain can be constructed and visualized.

  5. JPEG2000 Part 10: volumetric imaging

    NASA Astrophysics Data System (ADS)

    Schelkens, Peter; Brislawn, Christopher M.; Barbarien, Joeri; Munteanu, Adrian; Cornelis, Jan P.

    2003-11-01

    Recently, the JPEG2000 committee (ISO/IEC JTC1/SC29/WG1) decided to start up a new standardization activity to support the encoding of volumetric and floating-point data sets: Part 10 - Coding Volumetric and Floating-point Data (JP3D). This future standard will support functionalities like resolution and quality scalability and region-of-interest coding, while exploiting the entropy in the additional third dimension to improve the rate-distortion performance. In this paper, we give an overview of the markets and application areas targeted by JP3D, the imposed requirements and the considered algorithms with a specific focus on the realization of the region-of-interest functionality.

  6. Super deep 3D images from a 3D omnifocus video camera.

    PubMed

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  7. Robust volumetric change detection using mutual information with 3D fractals

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Akbari, Morris; Henning, Ronda; Pokorny, John

    2014-06-01

    We discuss a robust method for quantifying change in multi-temporal remote sensing point data in the presence of affine registration errors. Three-dimensional image processing algorithms can be used to extract and model an electronic module, consisting of a self-contained assembly of electronic components and circuitry, using an ultrasound scanning sensor. Mutual information (MI) is an effective measure of change. We propose a multi-resolution 3D fractal algorithm which is a novel extension of MI, or regional mutual information (RMI). Our method is called fractal mutual information (FMI). This extension efficiently takes the neighborhood fractal patterns of corresponding voxels (3D pixels) into account. The goal of this system is to quantify the change in a module due to tampering and provide a method for quantitative and qualitative change detection and analysis.
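    The mutual information measure that FMI extends can be sketched from a joint intensity histogram of two co-registered volumes. This is a minimal illustration only; the bin count and test volumes are assumptions, and the fractal-pattern extension is not shown:

```python
import numpy as np

def mutual_information(vol_a, vol_b, bins=32):
    """MI of two co-registered volumes from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of vol_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of vol_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
vol = rng.integers(0, 256, size=(16, 16, 16))
shuffled = rng.permutation(vol.ravel()).reshape(vol.shape)
# identical volumes give high MI; unrelated (shuffled) content gives MI near 0
print(mutual_information(vol, vol), mutual_information(vol, shuffled))
```

    A drop in MI between corresponding regions of two scans is then the signal that the content has changed, e.g. due to tampering.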

  8. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 [2001]) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  9. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
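    The per-pixel median fusion of disparity fields described above can be sketched as follows. This is a minimal illustration with invented disparity values; photometric retrieval, occlusion handling, and right-image synthesis are omitted:

```python
import numpy as np

def fuse_disparities(disparity_maps):
    """Combine the disparity fields of retrieved stereopairs by taking the
    per-pixel median, a robust vote that suppresses mismatched content."""
    stack = np.stack(disparity_maps, axis=0)
    return np.median(stack, axis=0)

# Three hypothetical disparity fields for the same query, one an outlier
d1 = np.full((4, 4), 10.0)
d2 = np.full((4, 4), 12.0)
d3 = np.full((4, 4), 40.0)  # a retrieved stereopair with mismatched content
fused = fuse_disparities([d1, d2, d3])
print(fused[0, 0])  # 12.0 -- the outlier is suppressed
```

    The fused field would then be used to warp the 2D query into a right-eye view, which is why robustness to a few bad retrievals matters.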

  10. Volumetric segmentation of range images for printed circuit board inspection

    NASA Astrophysics Data System (ADS)

    Van Dop, Erik R.; Regtien, Paul P. L.

    1996-10-01

    Conventional computer vision approaches to object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence, these images contain information only about projections of a 3D scene. The subsequent image processing is then difficult, because object coordinates are represented by image coordinates alone. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with five parameters, which provide the main features for object recognition. Besides, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false, or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful for recognizing and extracting valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images with errors constructed according to a verified noise model illustrate the capabilities of this approach.

  11. 3D seismic imaging, example of 3D area in the middle of Banat

    NASA Astrophysics Data System (ADS)

    Antic, S.

    2009-04-01

    3D seismic imaging was carried out on a 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 square kilometers. The aim of the 3D investigation was to define geological structures and tectonics, especially in the Mesozoic complex. The investigation objects are located at depths from 2000 to 3000 m. There are a number of wells in this area, but they are not deep enough to help in the interpretation. It was necessary to get a better seismic image in the deeper area. The acquisition parameters were satisfactory (good quality of input parameters, input data length of 5 s, fold of up to 4000%) and the preprocessed data were satisfactory. GeoDepth is an integrated system for 3D velocity model building and for 3D seismic imaging. Input data for 3D seismic imaging consist of preprocessed data sorted into CMP gathers and RMS stacking velocity functions. Other types of input data are geological information derived from well data, time-migrated images, and time-migrated maps. The workflow for this job was: loading and quality control of the input data (CMP gathers and velocities), creating the initial RMS velocity volume, PSTM, updating the RMS velocity volume, PSTM, building the initial interval velocity model, PSDM, updating the interval velocity model, PSDM. In the first stage the attempt is to derive an initial velocity model that is as simple as possible. The higher-frequency velocity changes are obtained in the updating stage. The next step, after running PSTM, is the time-to-depth conversion. After the model is built, we generate a 3D interval velocity volume and run 3D pre-stack depth migration. The main method for updating velocities is 3D tomography. The criteria used in velocity model determination are based on the flatness of pre-stack migrated gathers or the quality of the stacked image. The standard processing ended with post-stack 3D time migration. Pre-stack depth migration is one of the most powerful tools available to the interpreter to develop an accurate velocity model and get

  12. Real-time cylindrical curvilinear 3-D ultrasound imaging.

    PubMed

    Pua, E C; Yen, J T; Smith, S W

    2003-07-01

    In patients who are obese or exhibit signs of pulmonary disease, standard transthoracic scanning may yield poor-quality cardiac images. For these conditions, two-dimensional transesophageal echocardiography (TEE) is established as an essential diagnostic tool. Current techniques in transesophageal scanning, though, are limited by incomplete visualization of cardiac structures in close proximity to the transducer. Thus, we propose a 2D curvilinear array for 3D transesophageal echocardiography in order to widen the field of view and increase visualization close to the transducer face. In this project, a 440-channel 5 MHz two-dimensional array with a 12.6 mm aperture diameter on a flexible interconnect circuit has been molded to a 4 mm radius of curvature. A 75% element yield was achieved during fabrication, and an average -6 dB bandwidth of 30% was observed in pulse-echo tests. Using this transducer in conjunction with modifications to the beamformer delay software and scan converter display software of our 3D scanner, we obtained real-time cylindrical curvilinear volumetric scans of tissue phantoms, including a field of view of greater than 120 degrees in the curved azimuth direction and 65-degree phased-array sector scans in the elevation direction. These images were achieved using a stepped subaperture across the cylindrical curvilinear direction of the transducer face and phased-array sector scanning in the non-curved plane. In addition, real-time volume-rendered images of a tissue-mimicking phantom with holes ranging from 1 cm to less than 4 mm have been obtained. 3D color flow Doppler results have also been acquired. This configuration can theoretically achieve volumes spanning 180 degrees by 120 degrees. The transducer is also capable of obtaining images through a curvilinear stepped subaperture in azimuth in conjunction with a rectilinear stepped subaperture in elevation, further increasing the field of view close to the transducer face. Future work

  13. 3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.

    PubMed

    Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel

    2008-01-01

    A 3D weighting scheme has been proposed previously to reconstruct images from both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computational resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and a humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide superior image quality for diagnostic imaging in clinical applications. With the availability of image reconstruction engines of increasing computational power, it is believed that the pixel-driven 3D weighting will be dominantly employed in state-of-the-art volumetric CT scanners across clinical applications.

  14. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Summary Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry as well as provide insights and perspectives on the process of generating 3D mass spectral data along with a discussion of the process necessary to generate a 3D image volume. PMID:22276611

  15. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
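    The interactive thresholding and volume-measurement steps described above can be sketched in a few lines of numpy. A toy synthetic volume stands in for SPECT data, and the threshold and voxel size are illustrative assumptions; the fuzzy-connectedness segmentation is not shown:

```python
import numpy as np

def segment_and_measure(volume, threshold, voxel_size_mm):
    """Intensity-threshold segmentation followed by a volume computation:
    count the voxels above threshold and scale by the voxel volume."""
    mask = volume >= threshold
    voxel_volume_ml = np.prod(voxel_size_mm) / 1000.0  # mm^3 -> mL
    return mask, mask.sum() * voxel_volume_ml

# synthetic "gastric" region: a bright sphere in an empty background
z, y, x = np.mgrid[:64, :64, :64]
vol = 10.0 * ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2)
mask, ml = segment_and_measure(vol, 5.0, (4.0, 4.0, 4.0))
```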

  16. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging 'Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm³ volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  17. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become popular in recent years. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data; in this latter procedure, the 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. We then explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
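    The encode/decode idea can be illustrated with a hedged numpy sketch: a lenslet-encoded 2D image is reshaped into 4D light field data, and refocusing ("sectioning") is performed by shift-and-add over the angular samples. The block layout and integer-pixel shifts below are simplifying assumptions, not the paper's exact processing:

```python
import numpy as np

def decode_light_field(raw, u_res):
    """Decode a lenslet-encoded 2D image into a 4D light field L[u, v, s, t]:
    each u_res x u_res block under one lenslet holds the angular samples."""
    S = raw.shape[0] // u_res
    T = raw.shape[1] // u_res
    return raw.reshape(S, u_res, T, u_res).transpose(1, 3, 0, 2)

def refocus(lf, alpha):
    """Shift-and-add refocusing: shift each angular view in proportion to
    its (u, v) offset from the center, then average over all views."""
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(np.roll(lf[u, v], du, axis=0), dv, axis=1)
    return out / (U * V)
```

    Sweeping `alpha` yields the focal stack: a scene point appears sharp at the `alpha` that cancels its per-view disparity.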

  18. Dynamic contrast-enhanced 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.
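    The principal component analysis used above to separate the image sequence by spatiotemporal content can be sketched as an SVD of the mean-subtracted frame matrix. The toy data below (a static structure plus a "bolus" ramping up in time) is an illustrative assumption standing in for the 3D PA reconstructions of the flow phantom:

```python
import numpy as np

def pca_components(frames, k):
    """PCA via SVD: rows of the data matrix are time points, columns are the
    voxels of the flattened 3D images; returns temporal scores and spatial
    component images for the k leading components."""
    X = frames.reshape(frames.shape[0], -1).astype(float)
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]

# toy sequence: static background + a "vessel" filling linearly over time
t = np.arange(10, dtype=float)
base = np.zeros((8, 8, 8)); base[2:6, 2:6, 2:6] = 1.0
flow = np.zeros((8, 8, 8)); flow[4, 4, :] = 1.0
frames = np.stack([base + ti * flow for ti in t])
scores, comps = pca_components(frames, 2)
```

    The leading component's temporal score tracks the injection dynamics while its spatial component isolates the perfused region, which is the separation the study exploits.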

  19. Fringe projection 3D microscopy with the general imaging model.

    PubMed

    Yin, Yongkai; Wang, Meng; Gao, Bruce Z; Liu, Xiaoli; Peng, Xiang

    2015-03-01

    Three-dimensional (3D) imaging and metrology of microstructures is a critical task for the design, fabrication, and inspection of microelements. Newly developed fringe projection 3D microscopy is presented in this paper. The system is configured according to camera-projector layout and long working distance lenses. The Scheimpflug principle is employed to make full use of the limited depth of field. For such a specific system, the general imaging model is introduced to reach a full 3D reconstruction. A dedicated calibration procedure is developed to realize quantitative 3D imaging. Experiments with a prototype demonstrate the accessibility of the proposed configuration, model, and calibration approach.

  20. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  1. Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy

    PubMed Central

    Hajj, Bassam; Wisniewski, Jan; El Beheiry, Mohamed; Chen, Jiji; Revyakin, Andrey; Wu, Carl; Dahan, Maxime

    2014-01-01

    Single molecule-based superresolution imaging has become an essential tool in modern cell biology. Because of the limited depth of field of optical imaging systems, one of the major challenges in superresolution imaging resides in capturing the 3D nanoscale morphology of the whole cell. Despite many previous attempts to extend the application of photo-activated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) techniques into three dimensions, effective localization depths do not typically exceed 1.2 µm. Thus, 3D imaging of whole cells (or even large organelles) still demands sequential acquisition at different axial positions and, therefore, suffers from the combined effects of out-of-focus molecule activation (increased background) and bleaching (loss of detections). Here, we present the use of multifocus microscopy for volumetric multicolor superresolution imaging. By simultaneously imaging nine different focal planes, the multifocus microscope instantaneously captures the distribution of single molecules (either fluorescent proteins or synthetic dyes) throughout an ∼4-µm-deep volume, with lateral and axial localization precisions of ∼20 and 50 nm, respectively. The capabilities of multifocus microscopy to rapidly image the 3D organization of intracellular structures are illustrated by superresolution imaging of the mammalian mitochondrial network and yeast microtubules during cell division. PMID:25422417

  2. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  3. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  4. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-03-06

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
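    Of the processing algorithms mentioned, 3D PIV reduces at its core to locating the cross-correlation peak between interrogation volumes from successive reconstructions. A minimal FFT-based sketch follows (integer-voxel shifts only; practical PIV adds windowing and subpixel peak fitting):

```python
import numpy as np

def displacement_xcorr(a, b):
    """Integer-voxel displacement between two interrogation volumes via the
    circular cross-correlation theorem; the peak location gives the shift
    from a to b."""
    A = np.fft.fftn(a - a.mean())
    B = np.fft.fftn(b - b.mean())
    corr = np.fft.ifftn(A.conj() * B).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    return tuple(int(i) if i <= n // 2 else int(i) - n
                 for i, n in zip(idx, corr.shape))
```

    Repeating this over a grid of interrogation volumes yields the 3D velocity field.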

  5. Determining 3D Flow Fields via Multi-camera Light Field Imaging

    PubMed Central

    Truscott, Tadd T.; Belden, Jesse; Nielson, Joseph R.; Daily, David J.; Thomson, Scott L.

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture 1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112

  6. Handheld Real-Time Volumetric Imaging of The Spine: Technology Development

    PubMed Central

    Tiouririne, Mohamed; Nguyen, Sarah; Hossack, John A.; Owen, Kevin; Mauldin, F. William

    2014-01-01

    Technical difficulties, poor image quality, and reliance on pattern identification represent some of the drawbacks of two-dimensional ultrasound imaging of spinal bone anatomy. To overcome these limitations, we sought to develop real-time volumetric imaging of the spine using a portable handheld device. The device measured 19.2 cm x 9.2 cm x 9.0 cm and imaged at a 5 MHz center frequency. 2D imaging under conventional ultrasound and volumetric (3D) imaging in real time were achieved and verified by inspection using a custom spine phantom. Further device performance was assessed and revealed a 75-minute battery life and an average frame rate of 17.7 Hz in volumetric imaging mode. Our results suggest that real-time volumetric imaging of the spine is a feasible technique for more intuitive visualization of the spine. These results may have important ramifications for a large array of neuraxial procedures. PMID:24446802

  7. Dual-view 3D displays based on integral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Deng, Huan; Wu, Fei

    2016-03-01

    We propose three dual-view integral imaging (DVII) three-dimensional (3D) displays. In the spatial-multiplexed DVII 3D display, each elemental image (EI) is cut into left and right sub-EIs, and they are refracted to the left and right viewing zones by the corresponding micro-lens array (MLA). Different 3D images are reconstructed in the left and right viewing zones, and the viewing angle is decreased. In the DVII 3D display using polarizer parallax barriers, a polarizer parallax barrier is placed in front of both the display panel and the MLA. The polarizer parallax barrier consists of two parts with perpendicular polarization directions. The elemental image array (EIA) is cut into left and right parts. The lights emitted from the left part are modulated by the left MLA and reconstruct a 3D image in the right viewing zone, whereas the lights emitted from the right part reconstruct another 3D image in the left viewing zone. The 3D resolution is decreased. In the time-multiplexed DVII 3D display, an orthogonal polarizer array is attached onto both the display panel and the MLA. The orthogonal polarizer array consists of horizontal and vertical polarizer units, and the polarization directions of adjacent units are orthogonal. In State 1, each EI is reconstructed by its corresponding micro-lens, whereas in State 2, each EI is reconstructed by its adjacent micro-lens. 3D images 1 and 2 are reconstructed alternately with a refresh rate of up to 120 Hz. The viewing angle and 3D resolution are the same as in the conventional II 3D display.

  8. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the 3D model to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases in order to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  9. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  10. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  11. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  12. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
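    The spatial-resolution figure quoted above (FWHM of a profile across the carbon-fiber image) can be computed as follows; the linear interpolation at the half-maximum crossings is our assumption of a typical implementation, not a detail from the paper:

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a sampled line profile, linearly
    interpolating the positions of the two half-maximum crossings."""
    p = np.asarray(profile, float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i, j = above[0], above[-1]
    # fractional crossing positions on the left and right flanks
    left = i - (p[i] - half) / (p[i] - p[i - 1])
    right = j + (p[j] - half) / (p[j] - p[j + 1])
    return (right - left) * dx
```

    Applied to a profile across a point-like target (such as the 6 µm fiber), this yields the system's point-spread width in physical units via the sample spacing `dx`.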

  13. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  14. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  15. 3-D seismic imaging of complex geologies

    NASA Astrophysics Data System (ADS)

    Womble, David E.; Dosanjh, Sudip S.; Vandyke, John P.; Oldfield, Ron A.; Greenberg, David S.

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  16. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-01

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging. PMID:25836861
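    The Stokes-parameter and degree-of-polarization computation can be sketched as below. For Poisson-distributed photon counts, the maximum-likelihood estimate of each analyzer intensity is simply the mean count over frames; the six-analyzer measurement scheme is a common convention assumed here, not necessarily the authors' exact setup:

```python
import numpy as np

def stokes_from_counts(counts):
    """Stokes vector and degree of polarization from photon counts acquired
    behind six analyzers: linear 0/90/45/135 degrees and right/left circular.
    Each entry of `counts` stacks repeated frames along axis 0; the mean
    count is the ML estimate of the underlying Poisson intensity."""
    I0, I90, I45, I135, Ir, Il = [np.asarray(c, float).mean(axis=0) for c in counts]
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = I45 - I135
    S3 = Ir - Il
    dop = np.sqrt(S1 ** 2 + S2 ** 2 + S3 ** 2) / S0
    return (S0, S1, S2, S3), dop
```

    In the photon-starved regime the per-frame counts are sparse, so the denoising step described in the abstract would be applied before or after this estimate.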

  17. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data limited in number and with low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  18. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
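    The two-stage approach (orientation-maximized matching, then cueing on unambiguous local maxima sorted by figure of merit) can be sketched with a toy 2D template matcher. The 90-degree rotation sweep and tiny template below are illustrative stand-ins for the paper's full 3D object models and orientation sampling:

```python
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import maximum_filter

def orientation_max_match(image, template, n_orient=4):
    """Degree of match at each pixel, maximized over template orientations
    (here only 90-degree rotations, as a stand-in for a finer sweep)."""
    best = None
    for k in range(n_orient):
        t = np.rot90(template, k)
        t = t - t.mean()  # zero-mean template rejects flat background
        score = correlate2d(image, t, mode='same')
        best = score if best is None else np.maximum(best, score)
    return best

def cue_thumbnails(match, size=3, top=5):
    """Cueing stage: positive local maxima of the degree of match,
    sorted in descending order of figure of merit."""
    peaks = (match == maximum_filter(match, size)) & (match > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(match[ys, xs])[::-1][:top]
    return [(ys[o], xs[o], match[ys[o], xs[o]]) for o in order]
```

    The sorted cue list is what would be presented to analysts as ranked thumbnails for visual inspection.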

  19. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
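    The probabilistic point-set alignment can be illustrated with a simplified 2D kernel-correlation variant: each point set is treated as an isotropic Gaussian mixture and a rigid transform is found by maximizing their overlap. This is a hedged sketch of the GMM idea only (without the authors' orientation terms), with an optional weight vector standing in for bifurcation weighting:

```python
import numpy as np
from scipy.optimize import minimize

def gmm_cost(params, X, Y, sigma=1.0, w=None):
    """Negative Gaussian-kernel correlation between 2D point sets, each
    treated as an isotropic Gaussian mixture; w can up-weight selected
    points (e.g., bifurcations)."""
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    Xt = X @ R.T + np.array([tx, ty])
    d2 = ((Xt[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    if w is not None:
        k = k * w[:, None]
    return -k.sum()

def register_rigid(X, Y, sigma=1.0):
    """Rigid 2D alignment of X onto Y by maximizing the GMM overlap."""
    res = minimize(gmm_cost, np.zeros(3), args=(X, Y, sigma), method='Nelder-Mead')
    return res.x  # (rotation angle, tx, ty)
```

    The smooth kernel makes the cost tolerant of missing or perturbed points, which is the robustness property the authors rely on when matching CTA centerlines to biplane XA models.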

  20. Floating volumetric image formation using a dihedral corner reflector array device.

    PubMed

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuki; Yamamoto, Siori; Mukai, Takaaki; Maekawa, Satoshi

    2013-01-01

    A volumetric display system using an optical imaging device consisting of numerous dihedral corner reflectors placed perpendicular to the surface of a metal plate is proposed. Image formation by the dihedral corner reflector array (DCRA) is free from distortion and has no focal length. In the proposed volumetric display system, a two-dimensional real image is moved by a mirror scanner to scan a three-dimensional (3D) space. Cross-sectional images of a 3D object are displayed in accordance with the position of the image plane, so that a volumetric image is observed as a stack of the cross-sectional images. The use of the DCRA brings a compact system configuration and volumetric real-image generation with very low distortion. An experimental volumetric display system including a DCRA, a galvanometer mirror, and a digital micro-mirror device was constructed to verify the proposed method. A volumetric image consisting of 1024×768×400 voxels was formed by the experimental system. PMID:23292404

  1. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently, three imaging spectrometer architectures (tunable filter, dispersive, and Fourier transform) are viable for imaging the universe in three dimensions. Each of these architectures has a domain of greatest utility. The optimum choice among them depends on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each alternative is delineated, both for instruments having ideal performance and for instrumentation based on currently available technology. The environment and science objectives of the Next Generation Space Telescope are used as a representative case to provide a basis for comparison of the various alternatives.

  2. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity, and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  3. Formation of 3D structures in a volumetric photocurable material via a holographic method

    NASA Astrophysics Data System (ADS)

    Vorzobova, N. D.; Bulgakova, V. G.; Veselov, V. O.

    2015-12-01

    The principle of forming 3D polymer structures is considered, based on the display of the 3D intensity distribution of radiation formed by a hologram in the bulk of a photocurable material. The conditions are determined for limiting the cure depth and reproducing the projected wavefront configuration.

  4. Pseudo-3D Imaging With The DICOM-8

    NASA Astrophysics Data System (ADS)

    Shalev, S.; Arenson, J.; Kettner, B.

    1985-09-01

    We have developed the DICOM-8 digital imaging computer for video image acquisition, processing and display. It is a low-cost mobile system based on a Z80 microcomputer which controls access to two 512 x 512 x 8-bit image planes through a real-time video arithmetic unit. Image presentation capabilities include orthographic images, isometric plots with hidden-line suppression, real-time mask subtraction, binocular red/green stereo, and volumetric imaging with both geometrical and density windows under interactive operator control. Examples are shown for multiplane series of CT images.

  5. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved sub-micron resolution in all three directions with high sensitivity granted by the low coherence of a white-light source. Demonstrations of the technique on single-cell imaging have been presented previously; however, imaging of any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is confocal fluorescence microscopy, which requires fluorescence tagging, either with transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  6. Volumetric retinal fluorescence microscopic imaging with extended depth of field

    NASA Astrophysics Data System (ADS)

    Li, Zengzhuo; Fischer, Andrew; Li, Wei; Li, Guoqiang

    2016-03-01

    A wavefront-engineered microscope with greatly extended depth of field (EDoF) is designed and demonstrated for volumetric imaging with near-diffraction-limited optical performance. A bright-field, infinity-corrected transmissive/reflective light microscope is built with Kohler illumination. A home-made phase mask is placed between the objective lens and the tube lens for ease of use. A general polynomial function is adopted in the design of the phase plate for robustness, and a custom merit function is used in Zemax for optimization. The resulting EDoF system achieves an engineered point spread function (PSF) that is much less sensitive to object depth variation than conventional systems, so 3D volumetric information can be acquired in a single frame with expanded tolerance of defocus. In a Zemax simulation of a setup using a 32X objective (NA = 0.6), the EDoF is 20 μm, whereas a conventional system has a DoF of 1.5 μm, a 13-fold increase. In experiment, a 20X objective lens with NA = 0.4 was used, and the corresponding phase plate was designed and fabricated. Retinal fluorescence images from the EDoF microscope using a passive adaptive optical phase element show a DoF of around 100 μm, and after post-processing the system recovers volumetric fluorescence images that are almost identical to in-focus images. The images obtained from the EDoF microscope are also better in resolution and contrast, and the retinal structure is better defined. Hence, owing to its high tolerance of defocus and fine restored image quality, EDoF optical systems have promising potential in consumer portable medical imaging devices, where the user's ability to achieve focus is not optimal, and in other medical imaging equipment where achieving best focus is not a necessity.

  7. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, each under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display; on the 3D display panel; and 5, 10, 15, and 30 cm behind the 3D display. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a tendency similar to that of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.
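    The accommodation stimulus at each target depth follows directly from the viewing geometry: a target d cm in front of a display viewed from 60 cm sits 60 − d cm from the eye, i.e. at 100/(60 − d) dioptres. A one-line helper (assuming distances are measured from the eye along the line of sight, which the abstract does not state explicitly):

```python
def stimulus_diopters(viewing_cm, offset_cm):
    """Accommodation stimulus in dioptres for a target offset_cm in front of (+)
    or behind (-) a display viewed from viewing_cm away."""
    return 100.0 / (viewing_cm - offset_cm)
```

    For example, the nearest target (15 cm in front of the 60 cm display) demands about 2.22 D, while the farthest (30 cm behind) demands about 1.11 D.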

  8. Fast iterative image reconstruction of 3D PET data

    SciTech Connect

    Kinahan, P.E.; Townsend, D.W.; Michel, C.

    1996-12-31

    For count-limited PET imaging protocols, two different approaches to reducing statistical noise are volume, or 3D, imaging to increase sensitivity, and statistical reconstruction methods to reduce noise propagation. These two approaches have largely been developed independently, likely due to the perception of the large computational demands of iterative 3D reconstruction methods. We present results of combining the sensitivity of 3D PET imaging with the noise reduction and reconstruction speed of 2D iterative image reconstruction methods. This combination is made possible by using the recently-developed Fourier rebinning technique (FORE), which accurately and noiselessly rebins 3D PET data into a 2D data set. The resulting 2D sinograms are then reconstructed independently by the ordered-subset EM (OSEM) iterative reconstruction method, although any other 2D reconstruction algorithm could be used. We demonstrate significant improvements in image quality for whole-body 3D PET scans by using the FORE+OSEM approach compared with the standard 3D Reprojection (3DRP) algorithm. In addition, the FORE+OSEM approach involves only 2D reconstruction and it therefore requires considerably less reconstruction time than the 3DRP algorithm, or any fully 3D statistical reconstruction algorithm.
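    Once FORE has rebinned the 3D data into independent 2D sinograms, each is reconstructed with OSEM, whose multiplicative update cycles through subsets of projection rows. A minimal dense-matrix sketch of the OSEM update (the tiny system matrix and interleaved subset scheme are illustrative, not a real scanner geometry):

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=30):
    """Ordered-subset EM for the non-negative linear system y ~ A @ x."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            proj = As @ x                                    # forward-project estimate
            ratio = np.where(proj > 0, ys / np.maximum(proj, 1e-12), 1.0)
            sens = As.T @ np.ones(len(rows))                 # subset sensitivity image
            upd = np.where(sens > 0, (As.T @ ratio) / np.maximum(sens, 1e-12), 1.0)
            x = x * upd                                      # multiplicative EM update
    return x
```

    Because each subset pass uses only a fraction of the data, OSEM reaches a usable image in far fewer full iterations than plain MLEM, which is the speed advantage the abstract exploits after rebinning.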

  9. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery; it is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined because of the noise pattern. For surgery in which the location of a structure matters more than a survey of all image details, the image quality of the O-arm is clinically well accepted.
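    Deriving MTF from a measured point spread function is essentially a normalised Fourier transform: collapse the PSF to a line spread function, take the magnitude of its FFT, and normalise at zero frequency. A minimal sketch (the pixel pitch and the axis along which the PSF is collapsed are assumptions):

```python
import numpy as np

def mtf_from_psf(psf, pitch_mm):
    """1-D MTF: sum the PSF into a line spread function, |FFT|, normalise at DC."""
    lsf = psf.sum(axis=0)                           # line spread function
    otf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pitch_mm)   # spatial frequency, cycles/mm
    return freqs, otf / otf[0]
```

    The 10% MTF point quoted in the abstract corresponds to the spatial frequency at which the returned curve falls to 0.1.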

  10. Texture-based visualization of unsteady 3D flow by real-time advection and volumetric illumination.

    PubMed

    Weiskopf, Daniel; Schafhitzel, Tobias; Ertl, Thomas

    2007-01-01

    This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures by physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: First, gradient-based illumination that employs a real-time computation of gradients, and, second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.

  11. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques, represented respectively by the SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan software packages. These packages take different approaches to image-based 3D city modeling. A literature study shows that, to date, no comparative study of this kind covers the complete creation of a 3D city model from images. This paper gives a comparative assessment of the four image-based 3D modeling approaches, focusing on data acquisition methods, data processing techniques, and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, and gives a brief introduction to the strengths and weaknesses of the four techniques, with comments on what can and cannot be done with each package. It concludes that every package has advantages and limitations, and that the choice of software depends on the requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  12. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy

    SciTech Connect

    Tsai, Tsung-Yuan; Lu, Tung-Wu; Chen, Chung-Ming; Kuo, Mei-Ying; Hsu, Horng-Chaung

    2010-03-15

    Purpose: Accurate measurement of the three-dimensional (3D) rigid body and surface kinematics of the natural human knee is essential for many clinical applications. Existing techniques are limited either in their accuracy or lack more realistic experimental evaluation of the measurement errors. The purposes of the study were to develop a volumetric model-based 2D to 3D registration method, called the weighted edge-matching score (WEMS) method, for measuring natural knee kinematics with single-plane fluoroscopy to determine experimentally the measurement errors and to compare its performance with that of pattern intensity (PI) and gradient difference (GD) methods. Methods: The WEMS method gives higher priority to matching of longer edges of the digitally reconstructed radiograph and fluoroscopic images. The measurement errors of the methods were evaluated based on a human cadaveric knee at 11 flexion positions. Results: The accuracy of the WEMS method was determined experimentally to be less than 0.77 mm for the in-plane translations, 3.06 mm for out-of-plane translation, and 1.13 deg. for all rotations, which is better than that of the PI and GD methods. Conclusions: A new volumetric model-based 2D to 3D registration method has been developed for measuring 3D in vivo kinematics of natural knee joints with single-plane fluoroscopy. With the equipment used in the current study, the accuracy of the WEMS method is considered acceptable for the measurement of the 3D kinematics of the natural knee in clinical applications.
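    The defining idea of WEMS, giving longer edges more influence on the degree of match, can be illustrated with a toy score in which each DRR edge pixel is weighted by the length of the connected edge segment it belongs to. This is one plausible reading of the idea rather than the authors' exact formulation, and SciPy's `ndimage.label` is an assumed dependency.

```python
import numpy as np
from scipy import ndimage

def edge_lengths(edges):
    """Weight each edge pixel by the size of its 4-connected edge component."""
    lab, n = ndimage.label(edges)
    w = np.zeros(edges.shape, dtype=float)
    for i in range(1, n + 1):
        comp = lab == i
        w[comp] = comp.sum()      # every pixel carries its segment's length
    return w

def wems_score(drr_edges, fluoro_edges):
    """Toy weighted edge-matching score: edge-map overlap weighted by DRR edge length."""
    return float((edge_lengths(np.asarray(drr_edges)) * np.asarray(fluoro_edges)).sum())
```

    Under this score, matching a long ridge contributes quadratically more than matching the same number of scattered single-pixel edges, which is the prioritisation the abstract describes.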

  13. Imaging and 3D morphological analysis of collagen fibrils.

    PubMed

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent booming of multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge of the spatial arrangement and dimensions of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. The combination of directional distances and fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distributions of the orientation and radius of the fibrils over the 3D image. They also bring a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It brings the possibility of monitoring the remodelling of collagen tissues upon a variety of injuries and of guiding tissue engineering, because biomimetic 3D organizations and densities are required for better integration of implants.
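    The orientation step reduces to an inertia (second-moment) analysis: the main fibre direction is the eigenvector with the largest eigenvalue of a scatter matrix. A simplified stand-in using the raw coordinates of foreground voxels (the paper builds its moments from directional distances, which this sketch does not reproduce):

```python
import numpy as np

def principal_orientation(coords):
    """Main fibre direction: top eigenvector of the coordinates' scatter matrix."""
    X = coords - coords.mean(axis=0)
    vals, vecs = np.linalg.eigh(X.T @ X)   # eigh sorts eigenvalues ascending
    return vecs[:, -1]                     # eigenvector of the largest eigenvalue
```

    The returned direction is defined up to sign, as is usual for axis estimates.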

  14. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models.

    PubMed

    Dhou, S; Hurwitz, M; Mishra, P; Cai, W; Rottmann, J; Li, R; Williams, C; Wagar, M; Berbeco, R; Ionascu, D; Lewis, J H

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground-truth digital and physical phantom images. The performances of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
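    Patient-specific motion models of this kind are commonly built by PCA over the 4DCBCT displacement vector fields and then fitted to whatever the 2D kV projection constrains. A toy flattened-DVF sketch; fitting against a subset of observed components is a stand-in for true 2D/3D matching, and all names are illustrative:

```python
import numpy as np

def build_motion_model(dvfs, n_modes=2):
    """PCA motion model over flattened displacement vector fields (one per phase)."""
    X = np.asarray(dvfs, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]              # mean DVF and principal motion modes

def fit_weights(obs, obs_idx, mean, modes):
    """Least-squares mode weights from a partial observation; returns the full DVF."""
    A = modes[:, obs_idx].T                # modes restricted to observed components
    w, *_ = np.linalg.lstsq(A, obs - mean[obs_idx], rcond=None)
    return mean + w @ modes                # reconstruct the complete field
```

    Building the model from same-day 4DCBCT rather than planning 4DCT is what lets it absorb baseline shifts and setup errors, per the abstract.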

  15. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only build better geometrical interpretations of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.

  16. All Photons Imaging Through Volumetric Scattering

    NASA Astrophysics Data System (ADS)

    Satat, Guy; Heshmat, Barmak; Raviv, Dan; Raskar, Ramesh

    2016-09-01

    Imaging through thick, highly scattering media (sample thickness ≫ mean free path) can enable broad applications in biomedical and industrial imaging as well as remote sensing. Here we propose a computational “All Photons Imaging” (API) framework that utilizes time-resolved measurement for imaging through thick volumetric scattering by using both early-arriving (non-scattered) and diffused photons. As opposed to other methods that aim to lock on to specific photons (coherent, ballistic, acoustically modulated, etc.), this framework aims to use all of the optical signal. Compared to conventional early-photon measurements for imaging through a 15 mm tissue phantom, our method shows a twofold improvement in spatial resolution (a 4 dB increase in peak SNR). This all-optical, calibration-free framework enables wide-field imaging through thick turbid media, and opens new avenues in non-invasive testing, analysis, and diagnosis.

  17. All Photons Imaging Through Volumetric Scattering

    PubMed Central

    Satat, Guy; Heshmat, Barmak; Raviv, Dan; Raskar, Ramesh

    2016-01-01

    Imaging through thick, highly scattering media (sample thickness ≫ mean free path) can enable broad applications in biomedical and industrial imaging as well as remote sensing. Here we propose a computational “All Photons Imaging” (API) framework that utilizes time-resolved measurement for imaging through thick volumetric scattering by using both early-arriving (non-scattered) and diffused photons. As opposed to other methods that aim to lock on to specific photons (coherent, ballistic, acoustically modulated, etc.), this framework aims to use all of the optical signal. Compared to conventional early-photon measurements for imaging through a 15 mm tissue phantom, our method shows a twofold improvement in spatial resolution (a 4 dB increase in peak SNR). This all-optical, calibration-free framework enables wide-field imaging through thick turbid media, and opens new avenues in non-invasive testing, analysis, and diagnosis. PMID:27683065

  18. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration

    NASA Astrophysics Data System (ADS)

    Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

    2013-11-01

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the ‘hand-eye’ calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795).

  19. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques that directly process the sensor data acquired by body scanners, rather than processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization of tomographic modalities, such as x-ray CT, as well as for MRI.

  20. 3D elemental sensitive imaging by full-field XFCT.

    PubMed

    Deng, Biao; Du, Guohao; Zhou, Guangzhao; Wang, Yudan; Ren, Yuqi; Chen, Rongchang; Sun, Pengfei; Xie, Honglan; Xiao, Tiqiao

    2015-05-21

    X-ray fluorescence computed tomography (XFCT) is a stimulated emission tomography modality that maps the three-dimensional (3D) distribution of elements. Generally, XFCT is done by scanning a pencil beam across the sample. This paper presents a feasibility study of full-field XFCT (FF-XFCT) for 3D elemental imaging. The FF-XFCT system consists of a pinhole collimator and an X-ray imaging detector with no energy resolution. A prototype imaging system was set up at the Shanghai Synchrotron Radiation Facility (SSRF) for imaging the phantom, and the first FF-XFCT experimental results are presented. The cadmium (Cd) and iodine (I) distributions were reconstructed. The results demonstrate that FF-XFCT is suitable for 3D elemental imaging and that its sensitivity is higher than that of a conventional CT system.

  1. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high-resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field; displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel accuracy in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera
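    MART's multiplicative, ray-by-ray update is compact enough to sketch: every voxel on ray i is scaled by the measurement-to-projection ratio raised to the power of the relaxation parameter times the ray weight. A toy dense-matrix version (the relaxation parameter μ and the tiny geometry are illustrative, not the paper's tomographic setup):

```python
import numpy as np

def mart(A, y, n_iter=20, mu=1.0):
    """Multiplicative ART: for each ray i, x_j *= (y_i / a_i.x) ** (mu * a_ij)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for a_i, y_i in zip(A, y):
            p = a_i @ x                        # project current estimate along ray i
            if y_i > 0 and p > 0:
                x = x * (y_i / p) ** (mu * a_i)
            elif y_i == 0:
                x = np.where(a_i > 0, 0.0, x)  # a zero measurement empties the ray
    return x
```

    The multiplicative form keeps the reconstruction non-negative and drives voxels on dark rays toward zero, which suits sparse particle volumes.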

  2. Automatic 3D lesion segmentation on breast ultrasound images

    NASA Astrophysics Data System (ADS)

    Kuo, Hsien-Chi; Giger, Maryellen L.; Reiser, Ingrid; Drukker, Karen; Edwards, Alexandra; Sennett, Charlene A.

    2013-02-01

    Automatically acquired and reconstructed 3D breast ultrasound images allow radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in 3D ultrasound can be difficult and time consuming. In this study, we evaluate a 3D lesion segmentation method, which we had previously developed for breast CT, and investigate its robustness on lesions in 3D breast ultrasound images. Our dataset includes 98 3D breast ultrasound images obtained on an ABUS system from 55 patients containing 64 cancers. Cancers depicted in 54 US images had been clinically interpreted as negative on screening mammography, while 44 had been clinically visible on mammography. All were from women with breast density BI-RADS 3 or 4. Tumor centers and margins were indicated and outlined by radiologists. Initial RGI-eroded contours were automatically calculated and served as input to the active contour segmentation algorithm, yielding the final lesion contour. Tumor segmentation was evaluated by determining the overlap ratio (OR) between computer-determined and manually drawn outlines. Resulting average overlap ratios on coronal, transverse, and sagittal views were 0.60 +/- 0.17, 0.57 +/- 0.18, and 0.58 +/- 0.17, respectively. All OR values were significantly higher than 0.4, which is deemed "acceptable". Within the groups of mammogram-negative and mammogram-positive cancers, the overlap ratios were 0.63 +/- 0.17 and 0.56 +/- 0.16, respectively, on the coronal views, with similar results on the other views. Segmentation performance was not found to be correlated with tumor size. Results indicate robustness of the 3D lesion segmentation technique in multi-modality 3D breast imaging.
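
    The overlap-ratio evaluation can be illustrated on binary masks; the record does not spell out the exact OR definition, so intersection-over-union is assumed here:

```python
import numpy as np

def overlap_ratio(computer_mask, manual_mask):
    """Overlap ratio between a computer-determined and a manually drawn
    outline, taken here as area of intersection over area of union."""
    inter = np.logical_and(computer_mask, manual_mask).sum()
    union = np.logical_or(computer_mask, manual_mask).sum()
    return inter / union

# Toy 2D example: two partially overlapping 5x5 squares
a = np.zeros((10, 10), bool); a[2:7, 2:7] = True   # 25 pixels
b = np.zeros((10, 10), bool); b[4:9, 4:9] = True   # 25 pixels
print(round(overlap_ratio(a, b), 3))  # → 0.22 (i.e. 9/41)
```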

  3. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  4. 3D stereophotogrammetric image superimposition onto 3D CT scan images: the future of orthognathic surgery. A pilot study.

    PubMed

    Khambay, Balvinder; Nebel, Jean-Christophe; Bowman, Janet; Walker, Fraser; Hadley, Donald M; Ayoub, Ashraf

    2002-01-01

    The aim of this study was to register and assess the accuracy of the superimposition method of a 3-dimensional (3D) soft tissue stereophotogrammetric image (C3D image) and a 3D image of the underlying skeletal tissue acquired by 3D spiral computerized tomography (CT). The study was conducted on a model head, in which an intact human skull was embedded with an overlying latex mask that reproduced the anatomic features of a human face. Ten artificial radiopaque landmarks were secured to the surface of the latex mask. A stereophotogrammetric image of the mask and a 3D spiral CT image of the model head were captured. The C3D image and the CT images were registered for superimposition by 3 different methods: Procrustes superimposition using artificial landmarks, Procrustes analysis using anatomic landmarks, and partial Procrustes analysis using anatomic landmarks followed by registration completion by HICP (a modified Iterative Closest Point algorithm) using a specified region of both images. The results showed that Procrustes superimposition using the artificial landmarks produced an error of superimposition on the order of 10 mm. Procrustes analysis using anatomic landmarks produced an error on the order of 2 mm. Partial Procrustes analysis using anatomic landmarks followed by HICP produced a superimposition accuracy of between 1.25 and 1.5 mm. It was concluded that a stereophotogrammetric image and a 3D spiral CT scan image can be superimposed with an accuracy of between 1.25 and 1.5 mm using partial Procrustes analysis based on anatomic landmarks followed by registration completion by HICP.
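
    Rigid Procrustes superimposition of two landmark sets has a standard closed-form SVD (Kabsch) solution, sketched below; this is the generic alignment step only, not the authors' HICP refinement:

```python
import numpy as np

def procrustes_rigid(src, dst):
    """Least-squares rigid superimposition (rotation R, translation t)
    of source landmarks onto destination landmarks via SVD (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy check: recover a known rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
landmarks = np.random.default_rng(0).normal(size=(10, 3))
moved = landmarks @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = procrustes_rigid(landmarks, moved)
rmse = np.sqrt(np.mean(np.sum((landmarks @ R.T + t - moved) ** 2, axis=1)))
print(rmse < 1e-9)  # → True: exact recovery on noiseless landmarks
```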

  5. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  6. Evaluating 3D registration of CT-scan images using crest lines

    NASA Astrophysics Data System (ADS)

    Ayache, Nicholas; Gueziec, Andre P.; Thirion, Jean-Philippe; Gourdon, A.; Knoplioch, Jerome

    1993-06-01

    We consider the issue of matching 3D objects extracted from medical images. We show that crest lines computed on the object surfaces correspond to meaningful anatomical features, and that they are stable with respect to rigid transformations. We present the current chain of algorithmic modules which automatically extract the major crest lines in 3D CT-scan images, and then use differential invariants on these lines to register the 3D images together with high precision. The extraction of the crest lines is done by computing up to third-order derivatives of the image intensity function with appropriate 3D filtering of the volumetric images, and by the 'marching lines' algorithm. The recovered lines are then approximated by spline curves in order to compute a number of differential invariants at each point. Matching is finally performed by a new geometric hashing method. The whole chain is now completely automatic and provides extremely robust and accurate results, even in the presence of severe occlusions. In this paper, we briefly describe the whole chain of processes and evaluate the accuracy of the approach on a pair of CT-scan images of a skull containing external markers.

  7. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth; the results show that the hybrid segmentation may have further clinical use.

  8. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output (a 3-D model) of the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning the minimally invasive procedure, i.e., in selecting an appropriate stent graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all measurements required for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, avoiding most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
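
    The level-set idea behind such deformable models can be illustrated with a bare-bones 2D evolution on a synthetic image; this sketch omits the curvature terms, shape priors, upwind numerics and reinitialisation that a real implementation (like the one in this paper) would use:

```python
import numpy as np

def evolve_level_set(image, phi, n_iter=200, dt=0.4, target=1.0):
    """Minimal level-set evolution phi_t = F * |grad phi|: the zero level
    set expands where the image matches `target` and shrinks elsewhere.
    (Real implementations add curvature terms, upwind differencing and
    periodic reinitialisation of phi to a signed distance function.)"""
    speed = 1.0 - 2.0 * np.abs(image - target)   # +1 on target tissue, -1 elsewhere
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        phi = phi + dt * speed * np.sqrt(gy**2 + gx**2 + 1e-12)
    return phi

# Toy example: grow a small seed circle to segment a bright disk
y, x = np.mgrid[0:64, 0:64]
image = ((y - 32)**2 + (x - 32)**2 < 20**2).astype(float)   # disk, radius 20
phi0 = 5.0 - np.sqrt((y - 32.0)**2 + (x - 32.0)**2)         # seed, radius 5
seg = evolve_level_set(image, phi0) > 0
iou = (seg & (image > 0)).sum() / (seg | (image > 0)).sum()
print(iou > 0.9)   # the evolved front closely matches the disk
```

    The advantage cited in the abstract shows up here: the front is an implicit zero level set of phi, so it can split, merge and wrap around complex shapes without any explicit contour re-parameterisation.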

  9. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  10. 3D modeling from uncalibrated color images for a complete wound assessment tool.

    PubMed

    Albouy, B; Lucas, Y; Treuillet, S

    2007-01-01

    This paper is concerned with the 3D modeling of skin wounds using uncalibrated vision techniques for volumetric assessment of the healing process. We have developed an original approach for matching two color images captured with a hand-held digital camera and generating a semi-dense 3D model. We evaluate the precision of the inferred 3D model by registration to a ground truth on artificial wounds. The method is then applied to volumetric measurements. The clinical requirement of 5% global precision is surpassed, as 3% is obtained locally. The best configuration for taking photos lies between 1.2 and 1.5 for distance ratios and between 15 degrees and 30 degrees for the vergence of the stereo pair. This work is part of the ESCALE project, dedicated to the design of a complete 3D and color wound assessment tool using a simple hand-held digital camera: a practical solution for wide deployment in care centers, since such a very low cost system can be operated directly by nurses.

  11. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. For delineation of vasculatures in the carotid artery, ultrasound examinations have been widely employed because they are noninvasive and free of ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study proposes image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat by using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected by using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. When the results are compared, the third technique provides relatively good boundary extraction. The incomplete boundaries of the arterial lumen caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex is largely reduced by using the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. For the 3D vasodilatation of the carotid bifurcation, lumen geometries in the contraction and expansion states were simultaneously depicted at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of 3D lumen geometries of carotid bifurcations from 2D ultrasound images.

  12. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine positions. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in the supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean +/- standard deviation) was 46.6° +/- 9.2° for male subjects (N = 189), 47.6° +/- 10.7° for female subjects (N = 181), and 47.1° +/- 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
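
    Given the anatomical references named in the abstract, the PI angle reduces to a vector computation: the angle between the sacral-endplate normal at its center and the line from that center to the midpoint of the femoral head centers. A sketch with hypothetical coordinates (mm):

```python
import numpy as np

def pelvic_incidence(endplate_center, endplate_normal, fem_head_l, fem_head_r):
    """Pelvic incidence: angle between the normal to the sacral endplate at
    its center and the line from that center to the midpoint of the two
    femoral head centers (the hip axis), all in 3D."""
    hip_axis_mid = (np.asarray(fem_head_l) + np.asarray(fem_head_r)) / 2.0
    to_hips = hip_axis_mid - np.asarray(endplate_center)
    n = np.asarray(endplate_normal, float)
    cos_pi = abs(n @ to_hips) / (np.linalg.norm(n) * np.linalg.norm(to_hips))
    return np.degrees(np.arccos(cos_pi))

# Toy example: an endplate tilted 47 degrees relative to the line to the hips
pi_angle = pelvic_incidence(
    endplate_center=[0, 0, 100],
    endplate_normal=[0, np.sin(np.deg2rad(47)), np.cos(np.deg2rad(47))],
    fem_head_l=[-40, 0, 0],
    fem_head_r=[40, 0, 0])
print(round(pi_angle, 1))  # → 47.0
```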

  13. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because of the different image conditions at the two aortic borders. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta. Such a 3-D model is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm because of its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In segmentation of the outer aortic boundary we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

  14. MMSE Reconstruction for 3D Freehand Ultrasound Imaging

    PubMed Central

    Huang, Wei; Zheng, Yibin

    2008-01-01

    The reconstruction of 3D ultrasound (US) images from mechanically registered, but otherwise irregularly positioned, B-scan slices is of great interest in image guided therapy procedures. Conventional 3D ultrasound algorithms have low computational complexity, but the reconstructed volume suffers from severe speckle contamination. Furthermore, the current method cannot reconstruct uniform high-resolution data from several low-resolution B-scans. In this paper, the minimum mean-squared error (MMSE) method is applied to 3D ultrasound reconstruction. Data redundancies due to overlapping samples as well as correlation of the target and speckle are naturally accounted for in the MMSE reconstruction algorithm. Thus, the reconstruction process unifies the interpolation and spatial compounding. Simulation results for synthetic US images are presented to demonstrate the excellent reconstruction. PMID:18382623
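
    The MMSE principle described here can be sketched in 1D with a linear (Wiener-type) estimator; the exponential prior covariance and one-hot sampling model below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior: a smooth 1D "image" with exponential spatial correlation
n = 64
idx = np.arange(n)
C_x = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 8.0)   # prior covariance
x = np.linalg.cholesky(C_x + 1e-9 * np.eye(n)) @ rng.normal(size=n)

# Irregular, overlapping, noisy samples (like scattered B-scan pixels)
m = 200
H = np.zeros((m, n))
H[np.arange(m), rng.integers(0, n, m)] = 1.0   # each row samples one location
sigma = 0.3
y = H @ x + sigma * rng.normal(size=m)

# Linear MMSE (Wiener) estimate: x_hat = C_x H^T (H C_x H^T + sigma^2 I)^-1 y
# Redundant samples and spatial correlation are fused automatically,
# unifying interpolation (via C_x) and compounding (via the solve).
x_hat = C_x @ H.T @ np.linalg.solve(H @ C_x @ H.T + sigma**2 * np.eye(m), y)

err_mmse = np.sqrt(np.mean((x_hat - x) ** 2))
print(err_mmse < sigma)  # reconstruction error well below the raw noise level
```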

  15. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a {open_quotes}true 3D screen{close_quotes}. To confine the scope, this presentation will not discuss such approaches.

  16. Single 3D cell segmentation from optical CT microscope images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Reeves, Anthony P.

    2014-03-01

    The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient-based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure-of-merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of each of 5 different cell types: columnar, macrophage, metaplastic and squamous human cells, and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentation respectively for the 2D reference images, and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images, and 71% and 51% for the 3D reference images. The DSC for cytoplasm segmentation was significantly lower than for the nucleus, since the cytoplasm was not as well differentiated from the background by image intensity.
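
    The Dice Similarity Coefficient used for this evaluation is straightforward to compute on binary masks:

```python
import numpy as np

def dice(seg, ref):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

# Toy example: two overlapping 3D spheres (radius 10, centers 2 voxels apart)
z, y, x = np.mgrid[0:32, 0:32, 0:32]
a = (z - 16)**2 + (y - 16)**2 + (x - 16)**2 < 10**2
b = (z - 16)**2 + (y - 18)**2 + (x - 16)**2 < 10**2
d = dice(a, b)
print(round(d, 2))   # close to the analytic value of about 0.85
```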

  17. 3D non-rigid surface-based MR-TRUS registration for image-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Sun, Yue; Qiu, Wu; Romagnoli, Cesare; Fenster, Aaron

    2014-03-01

    Two-dimensional (2D) transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for definitive diagnosis of prostate cancer (PCa). However, due to the lack of image contrast needed to clearly visualize early-stage PCa tumors, prostate biopsy often results in false negatives, requiring repeat biopsies. Magnetic resonance imaging (MRI) is considered a promising imaging modality for noninvasive identification of PCa, since it can provide high sensitivity and specificity for the detection of early-stage PCa. Our main objective is to develop and validate a registration method for 3D MR-TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. Our registration method first makes use of an initial rigid registration of 3D MR images to 3D TRUS images using six manually placed, approximately corresponding landmarks in each image. Following the manual initialization, the prostate surfaces are segmented from the 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline (TPS) algorithm. The registration accuracy was evaluated on 4 patient image pairs by measuring the target registration error (TRE) of manually identified corresponding intrinsic fiducials (calcifications and/or cysts) in the prostates. Experimental results show that the proposed method yielded an overall mean TRE of 2.05 mm, which compares favorably with the clinical requirement of an error of less than 2.5 mm.
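
    The non-rigid TPS step and the TRE evaluation can be sketched with SciPy's thin-plate-spline interpolator on synthetic correspondences; the surface points, deformation and "fiducials" below are all made up for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding points on the MR and TRUS prostate surfaces
# (as they would exist after segmentation and rigid initialisation)
rng = np.random.default_rng(2)
mr_surface = rng.uniform(-1, 1, size=(80, 3))
warp = lambda p: p + 0.05 * np.sin(np.pi * p[:, ::-1])  # synthetic deformation
trus_surface = warp(mr_surface)

# Thin-plate-spline mapping carrying MR coordinates into TRUS coordinates
tps = RBFInterpolator(mr_surface, trus_surface, kernel='thin_plate_spline')

# Target registration error at held-out "fiducials" inside the gland
fiducials = rng.uniform(-0.8, 0.8, size=(5, 3))
tre_tps = np.linalg.norm(tps(fiducials) - warp(fiducials), axis=1).mean()
tre_rigid_only = np.linalg.norm(fiducials - warp(fiducials), axis=1).mean()
print(tre_tps < tre_rigid_only)  # the TPS warp reduces the residual TRE
```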

  18. Optimized Bayes variational regularization prior for 3D PET images.

    PubMed

    Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

    2014-09-01

    A new prior for variational maximum a posteriori regularization is proposed for use in a 3D one-step-late (OSL) reconstruction algorithm that also accounts for the point spread function (PSF) of the PET system. The new regularization prior strongly smoothes background regions while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering, and 3D-OSL with a Gauss-total variation (GTV) prior. The proposed regularization allows noise to be controlled while maintaining good signal recovery; compared to the other algorithms, it demonstrates a very good compromise between improved quantitation and good image quality. PMID:24958594

  19. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms, which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed and the results were compared to those obtained from 2D digital mammograms. Sixteen mastectomy breast specimens were imaged with a benchtop flat-panel-based CBCT system. The reconstructed 3D CT images were corrected for cupping artifacts and then filtered to reduce the noise level, followed by threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, the volumes of the dense tissue structures and of the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered dense in mammographic examinations may not be considered dense with CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs to be further investigated.
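
    After segmentation, the volumetric density computation reduces to a voxel ratio. A sketch on a toy phantom, with a hypothetical fibroglandular threshold and voxel values (real data would be cupping-corrected and denoised first, as the abstract describes):

```python
import numpy as np

def volumetric_breast_density(ct_volume, dense_threshold, breast_mask):
    """Volumetric density from a segmented CBCT volume: voxels above the
    fibroglandular threshold divided by all voxels inside the breast."""
    breast = ct_volume[breast_mask]
    return (breast >= dense_threshold).sum() / breast.size

# Toy phantom: 30% of the breast voxels are "dense" tissue
rng = np.random.default_rng(3)
vol = rng.normal(loc=0.0, scale=0.02, size=(64, 64, 64))   # adipose ~0.0
mask = np.zeros_like(vol, bool); mask[8:56, 8:56, 8:56] = True
dense = rng.random(mask.sum()) < 0.30
vol[mask] += dense * 0.2                                   # dense ~0.2
vbd = volumetric_breast_density(vol, dense_threshold=0.1, breast_mask=mask)
print(round(vbd, 2))  # → 0.3
```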

  20. Segmentation and visualization of anatomical structures from volumetric medical images

    NASA Astrophysics Data System (ADS)

    Park, Jonghyun; Park, Soonyoung; Cho, Wanhyun; Kim, Sunworl; Kim, Gisoo; Ahn, Gukdong; Lee, Myungeun; Lim, Junsik

    2011-03-01

    This paper presents a method that can extract and visualize anatomical structures from volumetric medical images by using a 3D level-set segmentation method and a hybrid volume rendering technique. First, segmentation using the level-set method is conducted through a surface evolution framework based on the geometric variation principle. This approach addresses topological changes in the deformable surface by using geometric integral measures and level-set theory. These integral measures contain a robust alignment term, an active region term, and a mean curvature term. By using the level-set method with a new hybrid speed function derived from the geometric integral measures, an accurate deformable surface can be extracted from a volumetric medical data set. Second, we employ a hybrid volume rendering approach to visualize the extracted deformable structures. Our method combines indirect and direct volume rendering techniques. Segmented objects within the data set are rendered locally by surface rendering on an object-by-object basis. Globally, all the results of subsequent object rendering are obtained by direct volume rendering (DVR), and the two rendered results are finally combined in a merging step. This is especially useful when inner structures should be visualized together with semi-transparent outer parts. This merging step is similar to the focus-plus-context approach known from information visualization. Finally, we verified the accuracy and robustness of the proposed segmentation method on various medical volume images. The volume rendering results for segmented 3D objects show that our proposed method can accurately extract and visualize human organs from various multimodality medical volume images.

  1. Oxygen- and Nitrogen-Enriched 3D Porous Carbon for Supercapacitors of High Volumetric Capacity.

    PubMed

    Li, Jia; Liu, Kang; Gao, Xiang; Yao, Bin; Huo, Kaifu; Cheng, Yongliang; Cheng, Xiaofeng; Chen, Dongchang; Wang, Bo; Sun, Wanmei; Ding, Dong; Liu, Meilin; Huang, Liang

    2015-11-11

    Efficient utilization and broader commercialization of alternative energies (e.g., solar, wind, and geothermal) hinges on the performance and cost of energy storage and conversion systems. For now and in the foreseeable future, the combination of rechargeable batteries and electrochemical capacitors remains the most promising option for many energy storage applications. Porous carbonaceous materials have been widely used as electrodes for batteries and supercapacitors. To date, however, the highest specific capacitance of an electrochemical double layer capacitor is only ∼200 F/g, although a wide variety of synthetic approaches have been explored in creating optimized porous structures. Here, we report our findings in the synthesis of porous carbon through a simple, one-step process: direct carbonization of kelp in an NH3 atmosphere at 700 °C. The resulting oxygen- and nitrogen-enriched carbon has a three-dimensional structure with a specific surface area greater than 1000 m²/g. When evaluated as an electrode for electrochemical double layer capacitors, the porous carbon structure demonstrated excellent volumetric capacitance (>360 F/cm³) with excellent cycling stability. This simple approach to low-cost carbonaceous materials with unique architecture and functionality could be a promising alternative for fabrication of porous carbon structures for many practical applications, including batteries and fuel cells. PMID:26477268

  3. Volumetric imaging system for the ionosphere (VISION)

    NASA Astrophysics Data System (ADS)

    Dymond, Kenneth F.; Budzien, Scott A.; Nicholas, Andrew C.; Thonnard, Stefan E.; Fortna, Clyde B.

    2002-01-01

    The Volumetric Imaging System for the Ionosphere (VISION) is designed to use limb and nadir images to reconstruct the three-dimensional distribution of electrons over a 1000 km wide by 500 km high slab beneath the satellite with 10 km x 10 km x 10 km voxels. The primary goal of the VISION is to map and monitor global and mesoscale (> 10 km) electron density structures, such as the Appleton anomalies and field-aligned irregularity structures. The VISION consists of three UV limb imagers, two UV nadir imagers, a dual frequency Global Positioning System (GPS) receiver, and a coherently emitting three frequency radio beacon. The limb imagers will observe the O II 83.4 nm line (daytime electron density), O I 135.6 nm line (nighttime electron density and daytime O density), and the N2 Lyman-Birge-Hopfield (LBH) bands near 143.0 nm (daytime N2 density). The nadir imagers will observe the O I 135.6 nm line (nighttime electron density and daytime O density) and the N2 LBH bands near 143.0 nm (daytime N2 density). The GPS receiver will monitor the total electron content between the satellite containing the VISION and the GPS constellation. The three frequency radio beacon will be used with ground-based receiver chains to perform computerized radio tomography below the satellite containing the VISION. The measurements made using the two radio frequency instruments will be used to validate the VISION UV measurements.

  4. A miniature high resolution 3-D imaging sonar.

    PubMed

    Josserand, Tim; Wolley, Jason

    2011-04-01

    This paper discusses the design and development of a miniature, high resolution 3-D imaging sonar. The design utilizes frequency steered phased arrays (FSPA) technology. FSPAs present a small, low-power solution to the problem of underwater imaging sonars. The technology provides a method to build sonars with a large number of beams without the proportional power, circuitry and processing complexity. The design differs from previous methods in that the array elements are manufactured from a monolithic material. With this technique the arrays are flat and considerably smaller element dimensions are achievable which allows for higher frequency ranges and smaller array sizes. In the current frequency range, the demonstrated array has ultra high image resolution (1″ range×1° azimuth×1° elevation) and small size (<3″×3″). The design of the FSPA utilizes the phasing-induced frequency-dependent directionality of a linear phased array to produce multiple beams in a forward sector. The FSPA requires only two hardware channels per array and can be arranged in single and multiple array configurations that deliver wide sector 2-D images. 3-D images can be obtained by scanning the array in a direction perpendicular to the 2-D image field and applying suitable image processing to the multiple scanned 2-D images. This paper introduces the 3-D FSPA concept, theory and design methodology. Finally, results from a prototype array are presented and discussed.
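The frequency-steered behaviour described above can be sketched with the standard phased-array relation: with a fixed element-to-element phase step, the beam direction depends on frequency alone, so each frequency band maps to a distinct look angle. The pitch, phase step, and frequencies below are illustrative assumptions, not the prototype's parameters:

```python
import math

def fspa_beam_angle(freq_hz, phase_step_rad, pitch_m, c=1500.0):
    """Steering angle (degrees) of a linear phased array driven with a
    fixed inter-element phase step: sin(theta) = c*dphi / (2*pi*f*d)."""
    s = c * phase_step_rad / (2 * math.pi * freq_hz * pitch_m)
    if abs(s) > 1:
        raise ValueError("no propagating beam at this frequency")
    return math.degrees(math.asin(s))

pitch = 0.5e-3                       # hypothetical 0.5 mm element pitch
dphi = math.pi / 2                   # hypothetical fixed 90 deg phase step
angles = [fspa_beam_angle(f, dphi, pitch) for f in (1.0e6, 1.5e6, 2.0e6)]
# Higher frequencies steer closer to broadside, so a chirped transmit
# sweeps the beam across the forward sector with only two channels.
```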

  5. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system, which creates a true 3-D virtual picture of the object. The other method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
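The second display mode reduces to extracting three orthogonal planes through a chosen voxel. A small sketch (the volume contents here are synthetic placeholders for the 26-slice MRI stack):

```python
import numpy as np

def orthogonal_sections(volume, point):
    """Return the axial, coronal and sagittal planes of a 3D volume
    that intersect at a user-selected voxel (z, y, x)."""
    z, y, x = point
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

# Toy volume mimicking 26 contiguous slices of 256x256 data.
vol = np.arange(26 * 256 * 256, dtype=np.float32).reshape(26, 256, 256)
axial, coronal, sagittal = orthogonal_sections(vol, (12, 100, 200))
# All three planes share the value at the selected voxel (12, 100, 200).
```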

  6. 3D volumetric modeling of grapevine biomass using Tripod LiDAR

    USGS Publications Warehouse

    Keightley, K.E.; Bawden, G.W.

    2010-01-01

    Tripod mounted laser scanning provides the means to generate high-resolution volumetric measures of vegetation structure and perennial woody tissue for the calculation of standing biomass in agronomic and natural ecosystems. Other than costly destructive harvest methods, no technique exists to rapidly and accurately measure above-ground perennial tissue for woody plants such as Vitis vinifera (common grape vine). Data collected from grapevine trunks and cordons were used to study the accuracy of wood volume derived from laser scanning as compared with volume derived from analog measurements. Ten laser scan datasets were collected for each of 36 vines, from which volume was calculated using combinations of two, three, four, six and 10 scans. Likewise, analog volume measurements were made by submerging the vine trunks and cordons in water and capturing the displaced water. A regression analysis examined the relationship between digital and non-digital techniques among the 36 vines and found that the standard error drops rapidly as additional scans are added to the volume calculation process and stabilizes at the four-view geometry with an average Pearson's product-moment correlation coefficient of 0.93. Estimates of digital volumes are systematically greater than those of analog volumes, which can be explained by the manner in which each technique interacts with the vine tissue. This laser scanning technique yields a highly linear relationship between vine volume and tissue mass, revealing a new, rapid and non-destructive method to remotely measure standing biomass. This application shows promise for use in other ecosystems such as orchards and forests. © 2010 Elsevier B.V.
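The digital-versus-analog comparison is a straightforward paired regression; a sketch with entirely made-up volumes (the study's r = 0.93 and systematic offset are not reproduced here):

```python
import numpy as np

# Hypothetical paired measurements per vine, in litres: water-displacement
# ("analog") volume versus scan-derived ("digital") volume.
analog  = np.array([1.2, 1.8, 2.5, 3.1, 4.0, 4.6])
digital = np.array([1.3, 1.9, 2.7, 3.3, 4.3, 4.9])   # systematically greater

r = np.corrcoef(analog, digital)[0, 1]               # Pearson's r
slope, intercept = np.polyfit(analog, digital, 1)    # regression line
# A slope > 1 reflects the systematic digital overestimate noted above.
```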

  7. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/JavaScript clients that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole-image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
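The tiling idea is what keeps client memory and bandwidth bounded: only the tiles of the currently viewed section are transferred. A sketch of the tile arithmetic (tile size and section dimensions are illustrative; this is not the IIP3D server code):

```python
import numpy as np

def tile_count(width, height, tile=256):
    """Number of (columns, rows) of tiles covering a width x height section."""
    return -(-width // tile), -(-height // tile)   # ceiling division

def get_tile(section, col, row, tile=256):
    """Extract one tile; edge tiles may be smaller than tile x tile."""
    return section[row * tile:(row + 1) * tile, col * tile:(col + 1) * tile]

section = np.zeros((1000, 1500), dtype=np.uint8)   # one 2D section view
cols, rows = tile_count(1500, 1000)
edge = get_tile(section, cols - 1, rows - 1)       # bottom-right edge tile
```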

  8. SU-E-T-624: Quantitative Evaluation of 2D Versus 3D Dosimetry for Stereotactic Volumetric Modulated Arc Delivery Using COMPASS

    SciTech Connect

    Vikraman, S; Karrthick, K; Rajesh, T; Sambasivaselli, R; Senniandanvar, V; Kataria, T; Manigandan, D; Karthikeyan, N; Muthukumaran, M

    2014-06-15

    Purpose: The purpose of this study was to quantitatively evaluate 2D versus 3D dosimetry for stereotactic volumetric modulated arc delivery using COMPASS with a 2D array. Methods: Twenty-five patients' CT images and RT structures of different sites (brain, head and neck, thorax, abdomen and spine) were taken from the Multiplan planning system for this study. All these patients underwent radical stereotactic treatment with CyberKnife. For each patient, linac-based VMAT stereotactic plans were generated in Monaco TPS v3.1 using the Elekta Beam Modulator MLC. Dose prescription was in the range of 5-20 Gy/fraction. TPS-calculated VMAT plan delivery accuracy was quantitatively evaluated against COMPASS measured and calculated dose based on DVH metrics. In order to ascertain the potential of COMPASS 3D dosimetry for stereotactic plan delivery, 2D fluence verification was performed with MatriXX using MultiCube. Results: For each site, D95 was achieved with 100% of the prescription dose with a maximum SD of 0.05. The conformity index (CI) was observed to be close to 1.15 in all cases. A maximum deviation of 2.62% was observed for D95 when comparing TPS versus COMPASS measurements. Considerable deviations were observed in head and neck cases compared to other sites. The maximum mean and standard deviation for D95, average target dose and average gamma were -0.78±1.72, -1.10±1.373 and 0.39±0.086, respectively. The number of pixels passing 2D fluence verification had a mean of 99.36%±0.455 SD with 3% dose difference and 3mm DTA. For critical organs in head and neck cases, significant dose differences were observed in 3D dosimetry, while the target doses matched well within limits in both 2D and 3D dosimetry. Conclusion: The quantitative evaluation of 2D versus 3D dosimetry for stereotactic volumetric modulated plans showed the potential of highlighting delivery errors. This study reveals that COMPASS 3D dosimetry is an effective tool for patient
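The DVH metric D95 used above is simply the dose received by at least 95% of the target voxels, i.e. the 5th percentile of the voxel-dose distribution. A sketch with hypothetical voxel doses (not data from the study):

```python
import numpy as np

def dose_at_volume(doses, volume_fraction):
    """Dose received by at least `volume_fraction` of the target voxels,
    read off the cumulative DVH; volume_fraction=0.95 gives D95."""
    return float(np.percentile(doses, (1.0 - volume_fraction) * 100.0))

# Hypothetical voxel doses in Gy (a uniform ramp, purely for illustration).
doses = np.arange(1.0, 101.0)
d95 = dose_at_volume(doses, 0.95)   # the 5th percentile of the dose values
```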

  9. Reduction of attenuation effects in 3D transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Frimmel, Hans; Acosta, Oscar; Fenster, Aaron; Ourselin, Sébastien

    2007-03-01

    Ultrasound (US) is one of the most used imaging modalities today, as it is cheap, reliable, safe and widely available. There are, however, a number of issues with US images in general. Besides reflection, which is the basis of ultrasonic imaging, other phenomena such as diffraction, refraction, attenuation, dispersion and scattering appear when ultrasound propagates through different tissues. The generated images are therefore corrupted by false boundaries, lack of signal for surfaces tangential to the ultrasound propagation, a large amount of noise giving rise to local properties, and an anisotropic sampling space complicating image processing tasks. Although 3D transrectal US (TRUS) probes are not yet widely available, within a few years they will likely be introduced in hospitals. Therefore, improving automatic segmentation from 3D TRUS images, making the process independent of human factors, is desirable. We introduce an algorithm for attenuation correction, reducing enhancement/shadowing effects and average attenuation effects in 3D US images, taking into account the physical properties of US. The parameters of acquisition, such as logarithmic correction, are unknown; therefore no additional information is available to restore the image. As the physical properties are related to the direction of each US ray, the 3D US data set is resampled into cylindrical coordinates using a fully automatic algorithm. Enhancement and shadowing effects, as well as average attenuation effects, are then removed with a rescaling process optimizing simultaneously along and perpendicular to the US ray direction. A set of tests using anisotropic diffusion is performed to illustrate the improvement in image quality, where well defined structures are visible. The evolution of both the entropy and the contrast shows that our algorithm is a suitable pre-processing step for segmentation tasks.
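A much-simplified sketch of average-attenuation compensation along a ray: echoes decay roughly exponentially with depth, so a depth-dependent gain (time-gain-compensation style) flattens the mean response. The decay rate below is a made-up constant, and this is not the paper's cylindrical-resampling rescaling:

```python
import numpy as np

def compensate_attenuation(rays, alpha_db_per_sample):
    """Undo an assumed average exponential attenuation along each ray.
    `rays` has shape (n_rays, n_samples); sample index grows with depth."""
    n = rays.shape[1]
    gain_db = alpha_db_per_sample * np.arange(n)   # TGC-style gain ramp
    gain = 10.0 ** (gain_db / 20.0)
    return rays * gain[np.newaxis, :]

# A ray whose echoes decay by 0.1 dB per sample is flattened again.
depth = np.arange(200)
ray = 10.0 ** (-0.1 * depth / 20.0)                # simulated decay
flat = compensate_attenuation(ray[np.newaxis, :], 0.1)
```

Working per ray is exactly why the cylindrical resampling in the paper matters: in that coordinate system each row of the array corresponds to one physical ray.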

  10. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531
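The masking idea can be sketched as a weighted similarity metric: zero weight removes problematic regions (tools, deformable anatomy) from the cost, full weight keeps reliable anatomy. This toy uses SSD and synthetic images; it is a stand-in for, not a reproduction of, the paper's volumetric/projection masking:

```python
import numpy as np

def masked_ssd(fixed, moving, weight):
    """Sum-of-squared-differences with per-pixel weights ("masking")."""
    w = weight / (weight.sum() + 1e-12)
    return float(np.sum(w * (fixed - moving) ** 2))

fixed = np.zeros((64, 64))
moving = np.zeros((64, 64))
moving[20:30, 20:30] = 5.0            # simulated tool present only intraop
mask = np.ones_like(fixed)
mask[20:30, 20:30] = 0.0              # down-weight the mismatched region
full = masked_ssd(fixed, moving, np.ones_like(fixed))
masked = masked_ssd(fixed, moving, mask)
# With the mask, the content mismatch no longer penalizes a correct pose.
```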

  11. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  12. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information of adequate quality, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc.). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on an image transformation from the standard image-based coordinate system to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined by the curve that represents the vertebral column and by the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
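A naive sketch of the CPR idea: model the spine curve with polynomials in the slice index, then cut each slice's strip centered on the curve so the curved column becomes straight in the reformatted image. The polynomial coefficients and volume below are synthetic, and real CPR resamples oblique planes with interpolation rather than integer strips:

```python
import numpy as np

def spine_curve(z, coeffs_x, coeffs_y):
    """Polynomial model of the vertebral column: returns (x(z), y(z))."""
    return np.polyval(coeffs_x, z), np.polyval(coeffs_y, z)

def straightened_view(volume, coeffs_x, coeffs_y, half_width=8):
    """Naive CPR: per slice, cut a strip centered on the curve so the
    curved spine becomes a straight column in the output."""
    out = []
    for z in range(volume.shape[0]):
        x, _ = spine_curve(z, coeffs_x, coeffs_y)
        xi = int(round(x))
        out.append(volume[z, :, xi - half_width:xi + half_width])
    return np.stack(out)

vol = np.zeros((20, 32, 64))
cx, cy = [0.05, 1.0, 10.0], [0.0, 0.0, 16.0]    # hypothetical curve params
for z in range(20):
    x, _ = spine_curve(z, cx, cy)
    vol[z, :, int(round(x))] = 1.0              # synthetic "spine" voxels
cpr = straightened_view(vol, cx, cy)            # spine lands in one column
```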

  13. Imaging thin-bed reservoirs with 3-D seismic

    SciTech Connect

    Hardage, B.A.

    1996-12-01

    This article explains how a 3-D seismic data volume, a vertical seismic profile (VSP), electric well logs and reservoir pressure data can be used to image closely stacked thin-bed reservoirs. This interpretation focuses on the Oligocene Frio reservoir in South Texas which has multiple thin-beds spanning a vertical interval of about 3,000 ft.

  14. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving potential EVA site survey, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained and discusses its application in lunar surface robotic surveying and scouting.
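The range/bearing-to-point conversion described above is standard geometry: range is half the round-trip distance, and the bearing angles place the point in XYZ. A sketch (the time-of-flight value is an illustrative number, not ILRIS-3D data):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(tof_s, azimuth_rad, elevation_rad):
    """Convert pulse time-of-flight and beam bearing to an XYZ point."""
    r = C * tof_s / 2.0                     # half the round-trip distance
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z

# A return after ~667 ns corresponds to a target roughly 100 m away.
x, y, z = lidar_point(667.128e-9, 0.0, 0.0)
```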

  15. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with the Sum of Square Difference (SSD) as the similarity measure. The search engine is Powell's conjugate direction method. In this paper, only the rigid transform is used. However, it can be extended to an affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can easily be incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression. Evaluation of registration accuracy between the pseudo-3D method and the true 3D method has
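A toy version of one such 2D sub-registration: exhaustive integer-shift search minimizing SSD on a single orthogonal view (standing in for the Powell optimizer; array sizes and the known offset are illustrative):

```python
import numpy as np

def ssd(a, b):
    """Sum of Square Difference similarity measure."""
    return float(np.sum((a - b) ** 2))

def register_2d_shift(fixed, moving, search=5):
    """Exhaustive search for the integer (dy, dx) that minimizes SSD
    between a fixed view and the shifted moving view."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = ssd(fixed, shifted)
            if s < best:
                best, best_shift = s, (dy, dx)
    return best_shift

fixed = np.zeros((32, 32)); fixed[10:20, 12:22] = 1.0
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)   # known offset
dy, dx = register_2d_shift(fixed, moving)                 # recovers (3, -2)
```

In the pseudo-3D scheme this 2D solve would run on the transaxial, sagittal and coronal views in turn, each time updating the shared 3D transformation matrix.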

  16. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990s, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and with the formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other sidelobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of Fourier-based wavefront reconstruction, a computationally efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  17. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
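The DIBR trade-off mentioned above can be sketched in a few lines: shift each pixel of the single rendered view horizontally by a disparity derived from its z-buffer value. The linear disparity model, parameters, and nearest-pixel forward mapping below are illustrative simplifications; production DIBR must also fill disocclusion holes:

```python
import numpy as np

def dibr_disparity(z_buffer, eye_sep_px=10.0, z_conv=0.5):
    """Per-pixel horizontal parallax from normalized depth: pixels at the
    convergence depth z_conv get zero shift (toy linear model)."""
    return eye_sep_px * (z_buffer - z_conv)

def render_right_view(left, z_buffer):
    """Forward-map each left-view pixel by its disparity."""
    h, w = left.shape
    right = np.zeros_like(left)
    disp = np.rint(dibr_disparity(z_buffer)).astype(int)
    for yy in range(h):
        for xx in range(w):
            xr = xx + disp[yy, xx]
            if 0 <= xr < w:
                right[yy, xr] = left[yy, xx]
    return right

left = np.zeros((4, 16)); left[:, 8] = 1.0
z = np.full((4, 16), 0.7)             # behind the convergence plane
right = render_right_view(left, z)    # feature shifts 2 px to the right
```

The appeal for gaming is clear from the structure: one full scene render plus this cheap per-pixel pass replaces a second complete render.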

  18. Extraction and classification of 3D objects from volumetric CT data

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for Explosive Detection Systems (EDS) using our multi-stage Segmentation and Carving (SC) method followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object, and the feature vectors are classified by an SVM previously learned using a set of ground truth threat and benign objects. The learned SVM classifier has been shown to be effective in classifying different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which relates Probability of Detection (PD) as a function of Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
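The PD-versus-PFA curve can be computed directly from classifier scores by sweeping the decision threshold. A self-contained sketch with made-up scores (not the bag-scan data):

```python
import numpy as np

def roc_curve(scores, labels):
    """PD (true-positive rate) versus PFA (false-positive rate) at every
    decision threshold, from scores and 0/1 ground-truth labels."""
    order = np.argsort(-scores)          # descending score
    labels = labels[order]
    tp = np.cumsum(labels)
    fp = np.cumsum(1 - labels)
    pd = tp / max(labels.sum(), 1)
    pfa = fp / max((1 - labels).sum(), 1)
    return pfa, pd

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3])
labels = np.array([1,   1,   0,   1,   0,   0])
pfa, pd = roc_curve(scores, labels)
# Area under the curve by the trapezoid rule.
auc = float(np.sum(np.diff(pfa) * (pd[1:] + pd[:-1]) / 2))
```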

  19. Automated localization of implanted seeds in 3D TRUS images used for prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2006-07-15

    An algorithm has been developed to localize implanted radioactive seeds in 3D ultrasound images for a dynamic intraoperative brachytherapy procedure. Segmentation of the seeds is difficult due to their small size and the relatively low quality of transrectal ultrasound (TRUS) images. In this paper, intraoperative seed segmentation in 3D TRUS images is achieved by subtracting the image taken before the needle has been inserted from the image taken after the seeds have been implanted. The seeds are then searched for in a 'local' region determined by the needle position and orientation, which are obtained from a needle segmentation algorithm. To test this approach, 3D TRUS images of agar and chicken tissue phantoms were obtained. Within these phantoms, dummy seeds were implanted. The seed locations determined by the seed segmentation algorithm were compared with those obtained from a volumetric cone-beam flat-panel micro-CT scanner and from human observers. Evaluation of the algorithm showed that the rms error in determining the seed locations using the seed segmentation algorithm was 0.98 mm in agar phantoms and 1.02 mm in chicken phantoms.
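The subtract-then-search-locally idea can be sketched as follows (synthetic volumes, a cubic ROI, and a simple threshold stand in for the paper's needle-track geometry and detection criteria):

```python
import numpy as np

def detect_seeds(pre, post, roi_center, roi_half=5, thresh=0.5):
    """Subtract the pre-insertion volume from the post-implant volume and
    look for bright differences only inside a local ROI around the
    expected needle track."""
    diff = post.astype(float) - pre.astype(float)
    z, y, x = roi_center
    local = diff[z - roi_half:z + roi_half,
                 y - roi_half:y + roi_half,
                 x - roi_half:x + roi_half]
    zz, yy, xx = np.nonzero(local > thresh)
    # Map ROI-relative indices back to full-volume coordinates.
    return list(zip(zz + z - roi_half, yy + y - roi_half, xx + x - roi_half))

pre = np.zeros((32, 32, 32))
post = pre.copy()
post[16, 16, 16] = 1.0                # one implanted "seed"
seeds = detect_seeds(pre, post, (16, 16, 16))
```

Restricting the search to the needle-derived ROI is what makes the method robust: speckle differences elsewhere in the noisy TRUS volume never enter the detection step.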

  20. 3D Winding Number: Theory and Application to Medical Imaging

    PubMed Central

    Becciu, Alessandro; Fuster, Andrea; Pottek, Mark; van den Heuvel, Bart; ter Haar Romeny, Bart; van Assen, Hans

    2011-01-01

    We develop a new, mathematically elegant formulation to detect critical points of 3D scalar images. It is based on a topological number, which is the generalization to three dimensions of the 2D winding number. We illustrate our method by considering three different biomedical applications, namely, detection and counting of ovarian follicles and neuronal cells, and estimation of cardiac motion from tagged MR images. Qualitative and quantitative evaluation emphasizes the reliability of the results. PMID:21317978
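For reference, a standard way to write the topological numbers involved (our notation, not necessarily the authors'): in 2D, the winding number of a vector field $v$ (e.g. the image gradient) around a small closed curve $C$; in 3D, the degree of the Gauss map of the normalized field $\hat v = v/\lVert v\rVert$ over a small closed surface $S$ parametrized by $(u, w)$:

```latex
w_{2\mathrm{D}} \;=\; \frac{1}{2\pi} \oint_{C}
    \frac{v_1\,\mathrm{d}v_2 - v_2\,\mathrm{d}v_1}{\lVert v\rVert^{2}},
\qquad
w_{3\mathrm{D}} \;=\; \frac{1}{4\pi} \oint_{S}
    \hat v \cdot \left(\partial_u \hat v \times \partial_w \hat v\right)
    \,\mathrm{d}u\,\mathrm{d}w .
```

A nonzero value signals a critical point of the field enclosed by $C$ (respectively $S$), which is what makes the number usable as a detector and counter.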

  1. 2D/3D image (facial) comparison using camera matching.

    PubMed

    Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

    2006-11-10

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In the case of a cooperating suspect, a 3D image may be taken using, e.g., a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair could not be distinguished from the matching pair of faces. Facial expression and resolution of the images were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done, though, on larger sets of facial comparisons. PMID:16337353
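The projection step described above is a pinhole camera model: once pose, focal length, and pixel aspect ratio match, 3D landmarks map to comparable 2D pixel distances. A sketch with hypothetical landmarks and camera parameters (not the study's setup):

```python
import numpy as np

def project_points(points_3d, focal_px, aspect=1.0, center=(320.0, 240.0)):
    """Pinhole projection of 3D landmarks (camera coordinates, z forward)
    onto the image plane, given focal length in pixels and aspect ratio."""
    X, Y, Z = points_3d.T
    u = center[0] + focal_px * X / Z
    v = center[1] + focal_px * aspect * Y / Z
    return np.stack([u, v], axis=1)

# Two hypothetical facial landmarks 1 m from the camera, 6 cm apart.
pts = np.array([[0.00, 0.0, 1.0],
                [0.06, 0.0, 1.0]])
uv = project_points(pts, focal_px=800.0)
dist_px = float(np.linalg.norm(uv[1] - uv[0]))   # inter-landmark distance
```

Ratios of such projected distances are what the method compares between the rendered 3D scan and the perpetrator image, since ratios cancel the unknown overall scale.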

  2. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
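The topology-map idea of pruning the O(n²) matching combinations with flight-control positions can be sketched as below; the camera positions and distance threshold are hypothetical:

```python
import itertools
import math

def candidate_pairs(positions, max_dist):
    """Image topology map sketch: use flight-control (GPS) positions to
    keep only image pairs taken within max_dist of each other, instead
    of matching all O(n^2) combinations."""
    pairs = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if math.dist(p, q) <= max_dist:
            pairs.append((i, j))
    return pairs

# Six camera stations along two parallel flight lines (metres).
pos = [(0, 0), (30, 0), (60, 0), (0, 40), (30, 40), (60, 40)]
pairs = candidate_pairs(pos, max_dist=50)
```

Here 11 of the 15 possible pairs survive; on real flights with hundreds of images the reduction is far larger, which is where the matching speed-up comes from.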

  3. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    NASA Astrophysics Data System (ADS)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32-element prototype transducer, a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. For both techniques this results in a frame rate of 18 Hz. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256, compared to Explososcan. In terms of FWHM performance, Explososcan and synthetic aperture were found to perform similarly. At 90 mm depth, Explososcan's FWHM performance is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniformly scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four and still, in general, improve imaging quality.
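The FWHM figure of merit used in this comparison can be measured from a sampled beam profile as sketched below; this is a generic helper with linear interpolation at the half-power crossings, not the paper's simulation code:

```python
def fwhm(samples, dx):
    """Full width at half maximum of a sampled 1-D beam profile, with
    linear interpolation at the half-power crossings."""
    peak = max(samples)
    half = peak / 2.0
    above = [i for i, s in enumerate(samples) if s >= half]
    i0, i1 = above[0], above[-1]
    left = float(i0)
    if i0 > 0:  # interpolate the rising crossing
        left = i0 - (samples[i0] - half) / (samples[i0] - samples[i0 - 1])
    right = float(i1)
    if i1 + 1 < len(samples):  # interpolate the falling crossing
        right = i1 + (samples[i1] - half) / (samples[i1] - samples[i1 + 1])
    return (right - left) * dx
```

For a triangular profile `[0, 0.5, 1, 0.5, 0]` sampled at 1 mm spacing this returns 2 mm, as expected.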

  4. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
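Applying Snell's law in 3D amounts to the standard vector refraction formula; a minimal sketch (plain tuples, ratio given as n1/n2) is shown below. This is the generic formula, not the paper's delay-precomputation code:

```python
import math

def refract(d, n, n1, n2):
    """Snell's law in 3D vector form: refract unit direction d through
    a plane with unit normal n (pointing against d), for an index or
    sound-speed ratio n1/n2. Returns None on total internal reflection."""
    r = n1 / n2
    c1 = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])  # cos(theta_incident)
    k = 1.0 - r * r * (1.0 - c1 * c1)
    if k < 0.0:
        return None  # total internal reflection
    c2 = math.sqrt(k)  # cos(theta_transmitted)
    return tuple(r * d[i] + (r * c1 - c2) * n[i] for i in range(3))
```

Tracing each propagation path through a two-layer planar model then reduces to one `refract` call per interface crossing.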

  5. Uncertainty assessment of imaging techniques for the 3D reconstruction of stent geometry.

    PubMed

    Cosentino, Daria; Zwierzak, Iwona; Schievano, Silvia; Díaz-Zuccarini, Vanessa; Fenner, John W; Narracott, Andrew J

    2014-08-01

    This paper presents a quantitative assessment of uncertainty for the 3D reconstruction of stents. The study investigates a CP stent (Numed, USA) used in congenital heart disease applications, with a focus on the variance in measurements of stent geometry. The stent was mounted on a model of the patient implantation-site geometry, reconstructed from magnetic resonance images, and imaged using micro-computed tomography (μCT), conventional CT, biplane fluoroscopy and optical stereo-photogrammetry. Image data were post-processed to retrieve the 3D stent geometry. Stent strut length, separation angle and cell asymmetry were derived, and repeatability was assessed for each technique, along with variation relative to the μCT data, assumed to represent the gold standard. The results demonstrate that the performance of biplanar reconstruction methods is comparable with volumetric CT scans in evaluating 3D stent geometry. Uncertainty in the evaluation of strut length, separation angle and cell asymmetry using biplanar fluoroscopy is on the order of ±0.2 mm, 3° and 0.03, respectively. These results support the use of biplanar fluoroscopy for in vivo measurement of 3D stent geometry and provide a quantitative assessment of uncertainty in the measurement of geometric parameters.

  6. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase delay measurement of sinusoidally modulated light. The system acquires live movies at a frame rate of up to 50 frames/s over a distance range from 10 cm to 7.5 m.
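The phase-to-distance conversion at the heart of iTOF is d = c·φ / (4π·f_mod); a sketch with a hypothetical 10 MHz modulation frequency:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad, f_mod):
    """Indirect time-of-flight: distance recovered from the phase delay
    of sinusoidally modulated light; unambiguous up to c / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod)

# A hypothetical 10 MHz modulation gives a ~15 m unambiguous range,
# comfortably covering the 10 cm - 7.5 m working range quoted above.
half_cycle = itof_distance(math.pi, 10e6)  # ~7.495 m
```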

  7. Optical-CT imaging of complex 3D dose distributions

    NASA Astrophysics Data System (ADS)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel dosimetry is a relatively new technique with the potential to address this imbalance by providing high-resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a first-generation optical-CT scanner capable of high-resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions, including intensity-modulated radiation therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable-attenuation phantoms. Physical techniques and image processing methods were developed to minimize the deleterious effects of refraction, reflection, and scattered laser light. Here we present the results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high-resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. However, some systematic discrepancies were observed (rms discrepancy 3% at high dose levels), indicating that further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel dosimetry.

  8. 3D robust digital image correlation for vibration measurement.

    PubMed

    Chen, Zhong; Zhang, Xianmin; Fatikow, Sergej

    2016-03-01

    Discrepancies between speckle images under dynamic measurement, due to the different viewing angles, deteriorate the correspondence in 3D digital image correlation (3D-DIC) for vibration measurement. Facing this bottleneck, this paper presents two types of robust 3D-DIC methods for vibration measurement, SSD-robust and SWD-robust, which use a sum-of-square-difference (SSD) estimator plus a Geman-McClure regulating term and a Welch estimator plus a Geman-McClure regulating term, respectively. Because the regulating term with an adaptive rejection bound lessens the influence of abnormal pixel data in the dynamic measuring process, the robustness of the algorithm is enhanced. Robustness and precision evaluation experiments using a dual-frequency laser interferometer were carried out. The experimental results indicate that the two presented robust estimators can suppress the effects of abnormalities in the speckle images while keeping higher precision in vibration measurement than the traditional SSD method; the SWD-robust and SSD-robust methods are suitable for weak image noise and strong image noise, respectively. PMID:26974624
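A sketch of the regulating-term idea: the Geman-McClure penalty is quadratic for small residuals but saturates at 1, so one abnormal speckle pixel cannot dominate the matching cost. This illustrates the principle only, not the paper's exact estimator forms:

```python
def geman_mcclure(residual, sigma):
    """Geman-McClure penalty: quadratic for small residuals,
    saturating at 1 for outliers."""
    r2 = residual * residual
    return r2 / (r2 + sigma * sigma)

def robust_cost(ref, cur, sigma):
    """SSD-style matching cost with a Geman-McClure regulating term;
    a sketch of the robust-estimator idea."""
    return sum(geman_mcclure(a - b, sigma) for a, b in zip(ref, cur))

ref = [10, 12, 11, 13]
outlier = [10, 12, 11, 250]  # one corrupted speckle pixel
# The outlier contributes at most ~1, not (250 - 13)**2 as in plain SSD.
```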

  9. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement. PMID:27093439
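The temporal bone-length constancy term can be sketched as a variance penalty on reconstructed bone lengths across frames; the joint names and data layout below are hypothetical:

```python
import math

def bone_length_penalty(frames, bones):
    """Temporal bone-length constancy sketch: penalise the variance of
    each bone's length across reconstructed frames. frames is a list of
    {joint: (x, y, z)} dicts; bones is a list of (joint_a, joint_b)."""
    penalty = 0.0
    for a, b in bones:
        lengths = [math.dist(f[a], f[b]) for f in frames]
        mean = sum(lengths) / len(lengths)
        penalty += sum((l - mean) ** 2 for l in lengths)
    return penalty
```

A rigid motion leaves every bone length unchanged (zero penalty), while an implausible stretching reconstruction is penalised, which is what regularises the non-periodic case.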

  11. Extraction of 3D information from sonar image sequences.

    PubMed

    Trucco, A; Curletto, S

    2003-01-01

    This paper describes a set of methods that make it possible to estimate the position of a feature inside a three-dimensional (3D) space by starting from a sequence of two-dimensional (2D) acoustic images of the seafloor acquired with a sonar system. Typical sonar imaging systems are able to generate just 2D images, and the acquisition of 3D information involves sharp increases in complexity and cost. The front-scan sonar proposed in this paper is a new piece of equipment devoted to acquiring a 2D image of the seafloor to be sailed over, and allows one to collect a sequence of images showing a specific feature during the approach of the ship. This makes it possible, in principle, to recover the 3D position of a feature by comparing the feature positions along the sequence of images acquired from different (known) ship positions. This opportunity is investigated in the paper, where it is shown that encouraging results are obtained by a processing chain composed of blocks devoted to low-level processing, feature extraction and analysis, a Kalman filter for robust feature tracking, and some ad hoc equations for depth estimation and averaging. A statistical error analysis demonstrated the great potential of the proposed system even when inaccuracies affect the sonar measurements and the knowledge of the ship position. This was also confirmed by several tests performed on both simulated and real sequences, which gave satisfactory results on both feature tracking and, above all, estimation of the 3D position.
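In its simplest form, recovering depth from feature observations at two known ship positions is a triangulation of depression angles. The sketch below shows only the geometry; the paper fuses many views with a Kalman filter and ad hoc averaging instead:

```python
import math

def feature_depth(x1, theta1, x2, theta2):
    """Triangulate feature depth from depression angles observed at two
    known along-track ship positions (minimal two-view sketch)."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    xf = (x2 * t2 - x1 * t1) / (t2 - t1)  # feature along-track position
    return t1 * (xf - x1)                 # depth below the sonar

# Synthetic check: feature 100 m ahead at 20 m depth,
# observed from ship positions 0 m and 50 m.
th1 = math.atan2(20.0, 100.0)
th2 = math.atan2(20.0, 50.0)
```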

  12. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API, to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide the creative user with a rich assortment of tools for highlighting elements of a 2D image, simulating hidden areas, and creatively shaping them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in, allowing a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  13. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector has made it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs the chirp radar method with a Glow Discharge Detector (GDD) focal plane array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel of the GDD FPA yields the usual 2D image, while the value of the IF (intermediate) frequency yields the range information at each pixel, enabling 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
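For a linear chirp, the IF (beat) frequency at each pixel maps to range as R = c·f_IF·T / (2B); a sketch with hypothetical sweep parameters:

```python
C = 299_792_458.0  # speed of light, m/s

def chirp_range(f_if, sweep_bw, sweep_time):
    """Linear-chirp (FMCW) radar: the beat/IF frequency measured at a
    pixel is proportional to the round-trip delay, so each pixel of the
    2D focal plane array yields a range, giving a 3D image."""
    return C * f_if * sweep_time / (2.0 * sweep_bw)

# A hypothetical 1 GHz sweep over 1 ms: the IF frequency a target at
# 10 m would produce (f_if = B/T * 2R/c).
f_if_10m = 2.0 * 10.0 * 1e9 / (1e-3 * C)
```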

  14. 3D imaging of fetus vertebra by synchrotron radiation microtomography

    NASA Astrophysics Data System (ADS)

    Peyrin, Francoise; Pateyron-Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

    1997-10-01

    A synchrotron radiation computed microtomography system allowing high-resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high-resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to imaging bone structure. With a view to studying growth, fetal vertebra samples at different gestational ages were imaged. The first results show that fetal vertebrae are quite different from adult bone in terms of both density and organization.

  15. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

    Recent developments in 3D imaging lidar are presented. Long-range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step-change advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single-photon-counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, with 4 μJ pulses at a frame rate of 100 kHz using a low-cost fibre laser operating at a wavelength of λ = 1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long-range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.

  16. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As its clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and reduced cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a Bluetooth-based wireless communication module, resulting in a wireless spatial-tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592

  17. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision, well-structured measurements in (industrial) photogrammetry to fully automated, unstructured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As the state of the art, photogrammetric multi-view measurements achieve relative precisions on the order of 1:100000 to 1:200000, and relative accuracies with respect to traceable lengths on the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  18. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

    Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid to more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) off-line model construction, which provides a set of training cases for fitting new images, and (2) on-line aorta construction, which is used for new incoming 3D MDCT images. Off-line model construction is done once, using several representative human MDCT images, and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-line aorta construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best-fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region-recovery method to arrive at the complete constructed 3D aorta. The region-recovery method consists of two steps: a model-based step and a region-growing step; the latter can recover regions outside the model coverage and non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.
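The region-growing recovery step can be sketched in 2D as a flood fill constrained by an intensity window; the image and thresholds below are hypothetical, and the paper's step operates in 3D on the likelihood image:

```python
from collections import deque

def region_grow(image, seed, low, high):
    """Minimal 2D region-growing sketch: breadth-first collection of
    connected pixels whose intensity lies in [low, high]."""
    h, w = len(image), len(image[0])
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and low <= image[ny][nx] <= high):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return region
```

Seeded from the fitted model's surface, such growth can pick up aorta voxels the model misses, including non-circular cross-sections.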

  19. Compton coincidence volumetric imaging: a new x-ray volumetric imaging modality based on Compton scattering

    NASA Astrophysics Data System (ADS)

    Xu, Xiaochao

    2014-03-01

    Compton scattering is a dominant interaction during radiographic and computed tomography x-ray imaging. However, the scattered photons are not used for extracting imaging information; instead, they seriously degrade image quality. Here we introduce a new scheme that overcomes most of the problems associated with existing Compton scattering imaging schemes and allows Compton-scattered photons to be used effectively for imaging. In our scheme, referred to as Compton coincidence volumetric imaging (CCVI), a collimated monoenergetic x-ray beam is directed onto a thin semiconductor detector. A small portion of the photons is Compton scattered by the detector and their energy loss is detected. Some of the scattered photons intersect the imaging object, where they are Compton scattered a second time. The finally scattered photons are recorded by an areal energy-resolving detector panel around the object, and the two detectors work in coincidence mode. CCVI images the spatial electron density distribution in the object. As in PET imaging, each event can be localized to a curve; the image reconstruction algorithms are therefore also similar to those of PET, and two statistical iterative reconstruction algorithms are tested. Our study verifies the feasibility of CCVI in image acquisition and reconstruction, and various aspects of CCVI are discussed. If successfully implemented, CCVI offers great potential for imaging dose reduction compared with x-ray CT. Furthermore, a CCVI modality would have no moving parts, potentially offering cost reduction and faster imaging.
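The measured energy losses constrain the scattering angles, which is what localizes each CCVI event to a curve; the underlying relation is the standard Compton formula E′ = E / (1 + (E/mₑc²)(1 − cos θ)):

```python
import math

MEC2 = 510.999  # electron rest energy, keV

def compton_scattered_energy(e_kev, theta):
    """Photon energy after Compton scattering through angle theta.
    The energy deposited in a detector is e_kev minus this value,
    so measured losses constrain the scattering geometry."""
    return e_kev / (1.0 + (e_kev / MEC2) * (1.0 - math.cos(theta)))
```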

  20. 1D-3D registration for intra-operative nuclear imaging in radio-guided surgery.

    PubMed

    Vetter, Christoph; Lasser, Tobias; Okur, Asli; Navab, Nassir

    2015-02-01

    3D functional nuclear imaging modalities like SPECT or PET provide valuable information, as small structures can be marked with radioactive tracers to be localized before surgery. This positional information is valuable during surgery as well, for example when locating potentially cancerous lymph nodes in the case of breast cancer. However, the volumetric information provided by pre-operative SPECT scans loses validity quickly due to posture changes and manipulation of the soft tissue during surgery. During the intervention, the surgeon has to rely on the acoustic feedback provided by handheld gamma-detectors in order to localize the marked structures. In this paper, we present a method that allows updating the pre-operative image with a very limited number of tracked readings. A previously acquired 3D functional volume serves as prior knowledge and a limited number of new 1D detector readings is used in order to update the prior knowledge. This update is performed by a 1D-3D registration algorithm that registers the volume to the detector readings. This enables the rapid update of the visual guidance provided to the surgeon during a radio-guided surgery without slowing down the surgical workflow. We evaluate the performance of this approach using Monte-Carlo simulations, phantom experiments and patient data, resulting in a positional error of less than 8 mm which is acceptable for surgery. The 1D-3D registration is also compared to a volumetric reconstruction using the tracked detector measurements without taking prior information into account, and achieves a comparable accuracy with significantly less measurements.
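A 1-D toy version of updating a prior volume from a few tracked readings: slide the pre-operative activity profile until it best agrees with the sparse detector measurements. All data here are hypothetical, and the paper's registration is 3D with tracked detector poses:

```python
def best_shift(prior, readings, max_shift):
    """Slide the pre-operative 1-D activity profile over a handful of
    tracked detector readings and keep the offset with the smallest
    squared disagreement (toy analogue of 1D-3D registration)."""
    best, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost, n = 0.0, 0
        for pos, val in readings:
            if 0 <= pos + s < len(prior):
                cost += (prior[pos + s] - val) ** 2
                n += 1
        if n and cost < best_cost:
            best, best_cost = s, cost
    return best

prior = [0, 0, 5, 9, 5, 0, 0]        # pre-operative tracer profile
readings = [(1, 5), (2, 9), (3, 5)]  # sparse readings; tissue shifted by 1
```

Three readings suffice to recover the shift here, mirroring how a limited number of gamma-probe readings can update the pre-operative SPECT volume.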

  1. Phantom image results of an optimized full 3D USCT

    NASA Astrophysics Data System (ADS)

    Ruiter, Nicole V.; Zapf, Michael; Hopp, Torsten; Dapp, Robin; Gemmeke, Hartmut

    2012-03-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. 3D USCT, emitting and receiving spherical wave fronts, overcomes these limitations. We built an optimized 3D USCT with a nearly isotropic 3D point spread function (PSF), realizing for the first time the full benefits of a 3D system. In this paper, results of the 3D PSF measured with a dedicated phantom and images acquired with a clinical breast phantom are presented. The point spread function was shown to be nearly isotropic in 3D, to have very low spatial variability and to fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 Tesla MRI volume of the breast phantom. Image quality and resolution are isotropic in all three dimensions, confirming the successful optimization experimentally.

  2. 3D reconstruction of concave surfaces using polarisation imaging

    NASA Astrophysics Data System (ADS)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only, as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device, while synthetic images were generated using POV-Ray® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively, even for concave surfaces with complex texture and surface reflectance.
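Classic Lambertian photometric stereo solves I = L·n for the surface normal at each pixel from intensities under known light directions; a dependency-free three-light sketch is below. The paper's specular-polarisation variant is more involved than this baseline:

```python
def surface_normal(lights, intensities):
    """Lambertian three-light photometric stereo: solve I = L.n for the
    unit normal n by inverting the 3x3 light-direction matrix."""
    (a, b, c), (d, e, f), (g, h, i) = lights
    # 3x3 inverse via the adjugate (no external dependencies)
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    inv = [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
           [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
           [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]
    n = [sum(inv[r][k] * intensities[k] for k in range(3)) for r in range(3)]
    norm = sum(x * x for x in n) ** 0.5
    return [x / norm for x in n]
```

With more than three lights the same system is solved in a least-squares sense, which is the usual practical setup.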

  3. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004

  4. 3-D volumetric computed tomographic scoring as an objective outcome measure for chronic rhinosinusitis: Clinical correlations and comparison to Lund-Mackay scoring

    PubMed Central

    Pallanch, John; Yu, Lifeng; Delone, David; Robb, Rich; Holmes, David R.; Camp, Jon; Edwards, Phil; McCollough, Cynthia H.; Ponikau, Jens; Dearking, Amy; Lane, John; Primak, Andrew; Shinkle, Aaron; Hagan, John; Frigas, Evangelo; Ocel, Joseph J.; Tombers, Nicole; Siwani, Rizwan; Orme, Nicholas; Reed, Kurtis; Jerath, Nivedita; Dhillon, Robinder; Kita, Hirohito

    2014-01-01

    Background: We aimed to test the hypothesis that 3-D volume-based scoring of computed tomographic (CT) images of the paranasal sinuses was superior to Lund-Mackay CT scoring of disease severity in chronic rhinosinusitis (CRS). We determined the correlation between changes in CT scores (using each scoring system) and changes in other measures of disease severity (symptoms, endoscopic scoring, and quality of life) in patients with CRS treated with triamcinolone. Methods: The study group comprised 48 adult subjects with CRS. Baseline symptoms and quality of life were assessed. Endoscopy and CT scans were performed. Patients received a single systemic dose of intramuscular triamcinolone and were reevaluated 1 month later. Strengths of the correlations between changes in CT scores and changes in CRS signs, symptoms, and quality of life were determined. Results: We observed some variability in the degree of improvement for the different symptom, endoscopic, and quality-of-life parameters after treatment. Improvement of parameters was significantly correlated with improvement in CT disease score using both CT scoring methods. However, volumetric CT scoring had greater correlation with these parameters than Lund-Mackay scoring. Conclusion: Volumetric scoring exhibited a higher degree of correlation than Lund-Mackay scoring when comparing improvement in CT score with improvement in scores for symptoms, endoscopic exam, and quality of life in this group of patients who received beneficial medical treatment for CRS. PMID:24106202

  5. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principle of 3D laser scanning, with the laser point cloud data as the basis, the Digital Ortho-photo Map as an auxiliary, and 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.

  6. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
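
The object-linking step described above, in which features in successive 2D slices within a threshold radius of each other are joined into one 3D object, can be sketched as follows; the data layout and function name are hypothetical:

```python
import math

def link_features(slices, radius):
    """Link 2-D features from successive slices into 3-D objects:
    a feature joins the object whose most recent feature lies within
    `radius`; otherwise it starts a new object."""
    objects, active = [], []
    for z, feats in enumerate(slices):
        next_active = []
        for (x, y) in feats:
            match = None
            for obj in active:                 # objects open in previous slice
                _, (px, py) = obj[-1]
                if math.hypot(x - px, y - py) <= radius:
                    match = obj
                    break
            if match is None:                  # no nearby predecessor: new object
                match = []
                objects.append(match)
            match.append((z, (x, y)))
            next_active.append(match)
        active = next_active
    return objects

# Two pipe cross-sections seen in three slices; the second vanishes after slice 2
tracks = link_features(
    [[(0.0, 0.0), (10.0, 10.0)],
     [(0.5, 0.2), (10.2, 9.8)],
     [(1.0, 0.5)]],
    radius=2.0)
```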

  7. 3-D volumetric MRI evaluation of the placenta in fetuses with complex congenital heart disease

    PubMed Central

    Andescavage, Nickie; Yarish, Alexa; Donofrio, Mary; Bulas, Dorothy; Evangelou, Iordanis; Vezina, Gilbert; McCarter, Robert; DuPlessis, Adre; Limperopoulos, Catherine

    2015-01-01

    Introduction Placental insufficiency remains a common cause of perinatal mortality and neurodevelopmental morbidity. Congenital heart disease (CHD) in the fetus and its relationship to placental function is unknown. This study explores placental health and its relationship to neonatal outcomes by comparing placental volumes in healthy pregnancies and pregnancies complicated by CHD using in vivo three-dimensional MRI studies. Methods In a prospective observational study, pregnant women greater than 18 weeks gestation with normal pregnancies or pregnancies complicated by CHD were recruited and underwent fetal MR imaging. The placenta was manually outlined and the volume was calculated in cm3. Brain volume was also calculated and clinical data were also collected. Relationships, including interactive effects, between placental and fetal growth, including brain growth, were evaluated using longitudinal multiple linear regression analysis. Results 135 women underwent fetal MRI between 18 and 39 weeks gestation (mean 31.6 ± 4.4). Placental volume increased exponentially with gestational age (p=0.041). Placental volume was positively associated with birth weight (p <0.001) and increased more steeply with birth weight in CHD-affected fetuses (p=0.046). Total brain and cerebral volumes were smaller in the CHD group (p<0.001), but brainstem volume (p<0.001) was larger. Placental volumes were not associated with brain volumes. Discussion Impaired placental growth in CHD is associated with gestational age and birth weight at delivery. Abnormalities in placental development may contribute to the significant morbidity in this high-risk population. Assessment of placental volume by MRI allows for in vivo assessments of placental development. PMID:26190037

  8. High-speed 3D imaging by DMD technology

    NASA Astrophysics Data System (ADS)

    Hoefling, Roland

    2004-05-01

    The paper presents an advanced solution for capturing the height of an object in addition to the 2D image, as is frequently desired in machine vision applications. Based upon the active fringe projection methodology, the system takes advantage of a series of patterns projected onto the object surface and observed by a camera to provide reliable, accurate and highly resolved 3D data from any scattering object surface. The paper shows how the recording of a projected image series can be significantly accelerated and improved in quality to overcome current limitations. The key is ALP, a metrology-dedicated hardware design using the Discovery 1100 platform for the DMD micromirror device of Texas Instruments Inc. The paper describes how this DMD technology has been combined with the latest LED illumination, high-performance optics, and recent digital camera solutions. The ALP-based DMD projection can be exactly synchronized with one or multiple cameras so that gray value intensities generated by pulse-width modulation (PWM) are recorded with high linearity. Based upon these components, a novel 3D measuring system with outstanding properties is described. The "z-Snapper" represents a new class of 3D imaging devices: it is fast enough for time-demanding in-line testing, and it can be built completely mobile (laptop based, hand-held, and battery powered). The turnkey system provides a "3D image" as simply as a usual b/w picture is grabbed. It can be instantly implemented into future machine vision applications that will benefit from the step into the third dimension.
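
As a rough illustration of the fringe-projection principle the system relies on, the standard four-step phase-shifting formula recovers the wrapped phase from four patterns offset by 90°; the synthetic signal below is an assumption, not the z-Snapper's actual processing chain:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 90 degrees:
    i_k = A + B*cos(phi + (k-1)*pi/2)  ->  phi = atan2(i4 - i2, i1 - i3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes over one row of pixels
phi = np.linspace(-3.0, 3.0, 50)          # true phase, within (-pi, pi)
a, b = 100.0, 50.0                        # background level and modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*frames)
```

The recovered phase maps directly to object height after calibration; the paper's contribution is recording such frame series faster and more linearly, not the phase formula itself.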

  9. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  10. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

    3-D optical fluorescence microscopy has become an efficient tool for the volumetric investigation of living biological samples. Using the optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system's optical transfer function and non-optimal experimental conditions, acquired raw data usually suffer from some distortions. In order to carry out biological analysis, the raw data have to be restored by deconvolution. System identification by the point-spread function is useful for characterizing the actual system and experimental parameters, which is necessary to restore the raw data; it is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method concerning cytogenetics is presented.
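
A minimal sketch of the kind of PSF-based restoration the abstract refers to, using the classical Richardson-Lucy iteration in 1-D; the VIEW3D package itself is not reproduced here, and the PSF and data are synthetic:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    """Richardson-Lucy deconvolution (1-D sketch): iteratively refine
    an estimate so that, blurred by the PSF, it matches the observation."""
    psf = psf / psf.sum()
    mirror = psf[::-1]                      # adjoint of the blur operator
    est = np.full_like(image, image.mean()) # flat initial estimate
    for _ in range(iterations):
        blurred = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        est = est * np.convolve(ratio, mirror, mode="same")
    return est

# Blur a point source with a Gaussian PSF, then restore it
truth = np.zeros(64)
truth[30] = 10.0
x = np.arange(-5, 6)
psf = np.exp(-x**2 / 4.0)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf)
```

In practice the measured PSF from system identification replaces the synthetic Gaussian, and regularisation (as VIEW3D automates) controls noise amplification.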

  11. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with the slope not significantly different from 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other showed no significant difference (p = 0.44) between 3D US and MRI on a paired t-test.
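
The agreement checks reported above (correlation between removed CSF and measured volume change, with a fitted slope near 1 and small paired bias) can be reproduced on paired data as follows; the numbers below are hypothetical, not the study's measurements:

```python
import numpy as np

def agreement_stats(a, b):
    """Pearson correlation, least-squares slope, and mean paired
    difference between two sets of paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]
    slope, _ = np.polyfit(a, b, 1)          # b ~ slope * a + intercept
    return r, slope, float(np.mean(b - a))  # bias: systematic offset

# Hypothetical paired data: removed CSF (mL) vs. 3D US volume change (mL)
removed = [4.0, 6.5, 9.0, 12.0]
vol_change = [4.2, 6.3, 9.4, 11.8]
r, slope, bias = agreement_stats(removed, vol_change)
```

A slope indistinguishable from 1 with negligible bias is what supports using the 3D US volumes interchangeably with the reference measurement.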

  12. Feasibility of Using Volumetric Contrast-Enhanced Ultrasound with a 3-D Transducer to Evaluate Therapeutic Response after Targeted Therapy in Rabbit Hepatic VX2 Carcinoma.

    PubMed

    Kim, Jeehyun; Kim, Jung Hoon; Yoon, Soon Ho; Choi, Won Seok; Kim, Young Jae; Han, Joon Koo; Choi, Byung-Ihn

    2015-12-01

    The aim of this study was to assess the feasibility of using dynamic contrast-enhanced ultrasound (DCE-US) with a 3-D transducer to evaluate therapeutic responses to targeted therapy. Rabbits with hepatic VX2 carcinomas, divided into a treatment group (n = 22, 30 mg/kg/d sorafenib) and a control group (n = 13), were evaluated with DCE-US using 2-D and 3-D transducers and computed tomography (CT) perfusion imaging at baseline and 1 d after the first treatment. Perfusion parameters were collected, and correlations between parameters were analyzed. In the treatment group, both volumetric and 2-D DCE-US perfusion parameters, including peak intensity (33.2 ± 19.9 vs. 16.6 ± 10.7, 63.7 ± 20.0 vs. 30.1 ± 19.8), slope (15.3 ± 12.4 vs. 5.7 ± 4.5, 37.3 ± 20.4 vs. 15.7 ± 13.0) and area under the curve (AUC; 1004.1 ± 560.3 vs. 611.4 ± 421.1, 1332.2 ± 708.3 vs. 670.4 ± 388.3), had significantly decreased 1 d after the first treatment (p = 0.00). In the control group, 2-D DCE-US revealed that peak intensity, time to peak and slope had significantly changed (p < 0.05); however, volumetric DCE-US revealed that peak intensity, time-intensity AUC, AUC during wash-in and AUC during wash-out had significantly changed (p = 0.00). CT perfusion imaging parameters, including blood flow, blood volume and permeability of the capillary vessel surface, had significantly decreased in the treatment group (p = 0.00); however, in the control group, peak intensity and blood volume had significantly increased (p = 0.00). It is feasible to use DCE-US with a 3-D transducer to predict early therapeutic response after targeted therapy because perfusion parameters, including peak intensity, slope and AUC, significantly decreased, which is similar to the trend observed for 2-D DCE-US and CT perfusion imaging parameters. PMID:26365926
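
The DCE-US perfusion parameters compared above (peak intensity, time to peak, wash-in slope, and area under the curve) are all derived from a time-intensity curve; a minimal sketch, with a synthetic triangular curve standing in for real contrast data:

```python
import numpy as np

def tic_parameters(t, intensity):
    """Peak intensity, time to peak, wash-in slope and AUC from a
    DCE-US time-intensity curve (trapezoidal AUC)."""
    t = np.asarray(t, float)
    y = np.asarray(intensity, float)
    i = int(np.argmax(y))
    peak, ttp = y[i], t[i]                                   # peak, time to peak
    slope = (peak - y[0]) / (ttp - t[0]) if ttp > t[0] else 0.0  # wash-in slope
    auc = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))     # trapezoid rule
    return peak, ttp, slope, auc

# Synthetic triangular curve: rise to peak 4 at t = 2, then wash-out
peak, ttp, slope, auc = tic_parameters([0, 1, 2, 3, 4], [0, 2, 4, 2, 0])
```

Splitting the trapezoid sum at the peak index would give the wash-in and wash-out AUC components the study also tracked.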

  13. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A.

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated, with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component, with an FRE_z of 0.76 mm and a TRE_z of 0.85 mm. A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety and dual modality imaging for target localization.
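
The reported FRE and TRE are RMS residuals after a point-based rigid registration between the two coordinate systems; a minimal sketch using the standard Kabsch (SVD) solution, with synthetic fiducial coordinates rather than the system's calibration data:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping
    fiducial points `src` onto their counterparts `dst`."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

def rms_error(R, t, pts, targets):
    """RMS residual after applying (R, t): the FRE when `pts` are the
    registration fiducials, the TRE when they are independent targets."""
    resid = pts @ R.T + t - targets
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

# Synthetic check: known rotation about z plus a translation
theta = np.deg2rad(30)
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
fids = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
moved = fids @ R0.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register(fids, moved)
fre = rms_error(R, t, fids, moved)
```

With noisy fiducials the FRE becomes nonzero, and evaluating `rms_error` on held-out targets gives the TRE, mirroring the 0.82 mm / 0.92 mm figures reported above.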

  14. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation

    PubMed Central

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-01-01

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  15. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at a 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum of 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

  16. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910® scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the two breast surfaces. Three observers analyzed the precision of the evaluation protocol using 2 dummy models (n = 60) and 10 test subjects (n = 300), tested it clinically on 30 patients (n = 900), and compared it to established 2-D measurements on 23 breast reconstruction patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects, without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88), and may play a part in objective surgical outcome analysis after incorporation into clinical practice.
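
The precision metric used here, the coefficient of variation, is simply the sample standard deviation expressed as a percentage of the mean; a sketch with hypothetical repeated measurements:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: sample SD over mean, the precision measure used
    to compare repeated evaluations across observers and methods."""
    v = np.asarray(values, float)
    return float(100.0 * v.std(ddof=1) / v.mean())

# Hypothetical repeated contour-difference measurements (mm) of one subject
cv = coefficient_of_variation([3.4, 3.6, 3.5, 3.7, 3.3])
```

A lower CV across repeated measurements of the same subject is what makes the 3-D protocol (VC ≈ 3.5) more precise than the 2-D software (VC ≈ 6.9).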

  17. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    PubMed

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different imaging modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. The integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. In summary, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which temperature changes are clinically significant.

  18. Volumetric LiDAR scanning of a wind turbine wake and comparison with a 3D analytical wake model

    NASA Astrophysics Data System (ADS)

    Carbajo Fuertes, Fernando; Porté-Agel, Fernando

    2016-04-01

    A correct estimation of the future power production is of capital importance whenever the feasibility of a future wind farm is being studied. This power estimation relies mostly on three aspects: (1) a reliable measurement of the wind resource in the area, (2) a well-established power curve of the future wind turbines and, (3) an accurate characterization of the wake effects; the latter being arguably the most challenging one due to the complexity of the phenomenon and the lack of extensive full-scale data sets that could be used to validate analytical or numerical models. The current project addresses the problem of obtaining a volumetric description of a full-scale wake of a 2MW wind turbine in terms of velocity deficit and turbulence intensity using three scanning wind LiDARs and two sonic anemometers. The characterization of the upstream flow conditions is done by one scanning LiDAR and two sonic anemometers, which have been used to calculate incoming vertical profiles of horizontal wind speed, wind direction and an approximation to turbulence intensity, as well as the thermal stability of the atmospheric boundary layer. The characterization of the wake is done by two scanning LiDARs working simultaneously and pointing downstream from the base of the wind turbine. The direct LiDAR measurements in terms of radial wind speed can be corrected using the upstream conditions in order to provide good estimations of the horizontal wind speed at any point downstream of the wind turbine. All this data combined allow for the volumetric reconstruction of the wake in terms of velocity deficit as well as turbulence intensity. Finally, the predictions of a 3D analytical model [1] are compared to the 3D LiDAR measurements of the wind turbine. The model is derived by applying the laws of conservation of mass and momentum and assuming a Gaussian distribution for the velocity deficit in the wake. 
This model has already been validated using high-resolution wind-tunnel measurements.
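
A Gaussian wake model of the type cited computes the velocity deficit from mass and momentum conservation with a self-similar Gaussian profile. The sketch below uses illustrative turbine parameters (rotor diameter, thrust coefficient, wake growth rate) and is valid only in the far wake, where the square-root argument stays positive:

```python
import numpy as np

def velocity_deficit(x, r, d=80.0, ct=0.8, k_star=0.03):
    """Normalized velocity deficit Delta U / U_inf at downstream distance x
    and radial offset r (meters) behind a turbine of rotor diameter d,
    thrust coefficient ct and linear wake growth rate k_star."""
    beta = 0.5 * (1 + np.sqrt(1 - ct)) / np.sqrt(1 - ct)
    sigma = k_star * x + 0.2 * np.sqrt(beta) * d          # wake width
    c = 1 - np.sqrt(1 - ct * d**2 / (8 * sigma**2))       # centerline deficit
    return c * np.exp(-r**2 / (2 * sigma**2))             # Gaussian profile

# Deficit decays downstream and away from the wake centerline
near = velocity_deficit(4 * 80.0, 0.0)   # 4 rotor diameters downstream
far = velocity_deficit(8 * 80.0, 0.0)    # 8 rotor diameters downstream
off = velocity_deficit(4 * 80.0, 40.0)   # half a diameter off-axis
```

Evaluating such a model on the LiDAR scan grid is what allows the point-by-point comparison between predicted and measured velocity deficit described above.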

  19. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

    Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years, photos of flattened surfaces of undisturbed soil samples impregnated with fluorescent resin, and of soil thin sections under the microscope, were the only means available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade, very good results have been obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray micro-tomography has proved the most widely applied technique, offering the best compromise between cost, resolution and image size. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected because of technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work, a comparison between the two methods above has been carried out in order to define their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, from an Ap horizon of an alluvial soil showing vertic characteristics, has been reconstructed using both a desktop X-ray micro-tomograph (Skyscan 1172) and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy) with the aim of overcoming most of the typical limitations of this technique. The best image resolution was 7.5 µm per voxel using X-ray micro-CT, while 20 µm was the best value using serial sectioning.

  20. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  1. RV functional imaging: 3-D echo-derived dynamic geometry and flow field simulations.

    PubMed

    Pasipoularides, Ares D; Shu, Ming; Womack, Michael S; Shah, Ashish; Von Ramm, Olaf; Glower, Donald D

    2003-01-01

    We describe a novel functional imaging approach for quantitative analysis of right ventricular (RV) blood flow patterns in specific experimental animals (or humans) using real-time, three-dimensional (3-D) echocardiography (RT3D). The method is independent of the digital imaging modality used. It comprises three parts. First, a semiautomated segmentation aided by intraluminal contrast medium locates the RV endocardial surface. Second, a geometric scheme for dynamic RV chamber reconstruction applies a time interpolation procedure to the RT3D data to quantify wall geometry and motion at 400 Hz. A volumetric prism method validated the dynamic geometric reconstruction against simultaneous sonomicrometric canine measurements. Finally, the RV endocardial border motion information is used for mesh generation on a computational fluid dynamics solver to simulate development of the early RV diastolic inflow field. Boundary conditions (tessellated endocardial surface nodal velocities) for the solver are directly derived from the endocardial geometry and motion information. The new functional imaging approach may yield important kinematic information on the distribution of instantaneous velocities in the RV diastolic flow field of specific normal or diseased hearts. PMID:12388220

  2. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of the results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  3. Sparse aperture 3D passive image sensing and recognition

    NASA Astrophysics Data System (ADS)

    Daneshpanah, Mehdi

    The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued both in academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light will dominate and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing under quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible spectrum detectors. Thermal objects as cold as 250K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible band, image forming optics and detector arrays.
The results
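
    The likelihood-based recognition idea at quantum-limited photon counts can be sketched as a maximum-likelihood classifier under a Poisson photon-count model with an additive dark-noise rate. The templates, dark rate, and observation below are hypothetical values chosen only to illustrate the decision rule:

```python
import numpy as np

def poisson_loglik(counts, expected):
    """Poisson log-likelihood of observed photon counts; the log(k!)
    term is dropped since it is identical across hypotheses."""
    k, lam = np.asarray(counts, float), np.asarray(expected, float)
    return float(np.sum(k * np.log(lam) - lam))

def classify(counts, templates, dark=0.1):
    """Pick the template (expected signal photons per pixel) maximizing
    the Poisson likelihood; `dark` models the sensor dark-noise rate."""
    return int(np.argmax([poisson_loglik(counts, t + dark) for t in templates]))

t0 = np.array([5.0, 0.5, 0.5, 5.0])   # hypothetical photon signatures
t1 = np.array([0.5, 5.0, 5.0, 0.5])
obs = np.array([6, 0, 1, 4])          # only a handful of photons in total
print(classify(obs, [t0, t1]))        # → 0
```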

  4. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
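
    The multi-scale minima-tracking stage can be illustrated with a simplified sketch that keeps only the minima persisting across four Gaussian scales. The scales and the strict same-pixel persistence criterion are illustrative simplifications of the tracking procedure described above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def local_minima(img):
    """Boolean mask of local minima within a 3x3 neighbourhood."""
    return img == minimum_filter(img, size=3)

def multiscale_minima(depth, sigmas=(1, 2, 4, 8)):
    """Keep only points that remain local minima of the Gaussian-smoothed
    depth map at every scale -- a simplified stand-in for the four-scale
    tracking stage of the watershed-style detector."""
    mask = np.ones(depth.shape, bool)
    for s in sigmas:
        mask &= local_minima(gaussian_filter(depth, s))
    return np.argwhere(mask)

y, x = np.mgrid[0:21, 0:21]
depth = (x - 10.0) ** 2 + (y - 10.0) ** 2   # one pit at (10, 10)
pits = multiscale_minima(depth)             # the pit survives every scale
```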

  5. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    Performance of denoising based on the discrete cosine transform applied to multichannel remote sensing images corrupted by additive white Gaussian noise is analyzed. Images with high input SNR obtained by the Earth Observing-1 (EO-1) satellite mission using the Hyperion hyperspectral imager are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities to be less than a certain threshold) are used to predict denoising efficiency using curves fitted into scatterplots. It is shown that the obtained curves (approximations) provide prediction of denoising efficiency with high accuracy. Analysis is carried out for different numbers of channels processed jointly. Universality of prediction for different numbers of channels is proven.
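
    Hard-thresholding DCT denoising of one block can be sketched as follows; it is shown in 2D with a matrix-form orthonormal DCT for brevity, whereas the filter described above operates on 3D blocks spanning several channels:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the cosine basis vectors."""
    k, m = np.arange(n)[:, None], np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    d[0] /= np.sqrt(2.0)
    return d

def denoise_block(block, threshold):
    """Hard-thresholding denoising: forward DCT, zero all coefficients
    whose magnitude falls below the threshold, inverse DCT."""
    n, m = block.shape
    Dr, Dc = dct_matrix(n), dct_matrix(m)
    coef = Dr @ block @ Dc.T          # forward 2D DCT
    coef[np.abs(coef) < threshold] = 0.0
    return Dr.T @ coef @ Dc           # inverse 2D DCT
```

    In practice the threshold is set proportional to the noise standard deviation (a value near 2.7σ is a common choice in the DCT-denoising literature).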

  6. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
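
    The FFT evaluation of a match surface over all translations at once can be illustrated with generic phase correlation. This is a related but simpler measure than the phase similarity of directional-derivative vectors described above; the scene and template below are synthetic:

```python
import numpy as np

def phase_correlate(image, template):
    """Evaluate a match surface over all translations at once via the
    FFT cross-power spectrum (phase correlation) and return the
    (row, col) shift of its strongest peak."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(template, s=image.shape)   # zero-pad to image size
    r = F * np.conj(G)
    r /= np.abs(r) + 1e-12                     # keep phase, drop magnitude
    surface = np.fft.ifft2(r).real             # similarity vs. shift
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return tuple(int(i) for i in peak)

img = np.zeros((64, 64)); img[20:24, 30:34] = 1.0
tpl = np.ones((4, 4))
print(phase_correlate(img, tpl))               # → (20, 30)
```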

  7. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  8. 3D super-resolution imaging with blinking quantum dots.

    PubMed

    Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R

    2013-11-13

    Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three-dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (fwhm) of 8-17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3-7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells.

  9. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944

  10. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  11. 3D Imaging of the OH mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Kouahla, M. N.; Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Vidal, E.; Veliz, O.

    2010-01-01

    A new and original stereo imaging method is introduced to measure the altitude of the OH nightglow layer and provide a 3D perspective map of the altitude of the layer centroid. Near-IR photographs of the OH layer are taken at two sites separated by a 645 km distance. Each photograph is processed in order to provide a satellite view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient (NCC). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12°09′08.2″ S, 75°33′49.3″ W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16°33′17.6″ S, 71°39′59.4″ W, altitude 2272 m) close to Arequipa. 3D maps of the layer surface were retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 86.3 km on July 26. Comparable relief wavy features appear in the 3D and intensity maps. It is shown that the vertical amplitude of the wave system varies as exp (Δz/2H) within the altitude range Δz = 83.5-88.0 km, H being the scale height. The oscillatory kinetic energy at the altitude of the OH layer lies between 3 × 10⁻⁴ and 5.4 × 10⁻⁴ J/m³, which is 2-3 times smaller than the values derived from partial reflection radio wave measurements at 52°N latitude.
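
    Matched-point identification rests on a normalized cross-correlation coefficient; a minimal patch-level version, invariant to brightness and contrast changes between the two views, looks like:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size
    patches: +1 for identical shape (up to brightness/contrast),
    -1 for inverted contrast, near 0 for unrelated patches."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```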

  12. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated using image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. 
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  13. 3D subcellular SIMS imaging in cryogenically prepared single cells

    NASA Astrophysics Data System (ADS)

    Chandra, Subhash

    2004-06-01

    The analysis of a cell with dynamic SIMS ion microscopy depends on the gradual erosion (sputtering) of the cell surface for obtaining spatially resolved chemical information in the X-, Y-, and Z-dimensions. This ideal feature of ion microscopy is rarely explored in probing microfeatures hidden beneath the cell surface. In this study, this capability is explored for the analysis of cells undergoing cell division. The mitotic cells required 3D SIMS imaging in order to study the chemical composition of specialized subcellular regions, like the mitotic spindle, hidden beneath the cell surface. Human glioblastoma T98G cells were grown on silicon chips and cryogenically prepared with a sandwich freeze-fracture method. The fractured freeze-dried cells were used for SIMS analysis with the microscope mode of the CAMECA IMS-3f, which is capable of producing 500 nm lateral image resolution. SIMS analysis of calcium in the spindle region of metaphase cells required sequential recording of as many as 10 images. The T98G human glioblastoma tumor cells revealed an unusual depletion/lack of calcium store in the metaphase spindle, which is in contrast to the accumulation of calcium stores generally observed in normal cells. This study shows the feasibility of the microscope mode imaging in resolving subcellular microfeatures in 3D and opens new avenues of research in spatially resolved chemical analysis of dividing cells.

  14. Multiple 2D video/3D medical image registration algorithm

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    2000-06-01

    In this paper we propose a novel method to register at least two video images to a 3D surface model. The potential applications of such a registration method could be in image guided surgery, high precision radiotherapy, robotics or computer vision. Registration is performed by optimizing a similarity measure with respect to the pose parameters. The similarity measure is based on 'photo-consistency' and computes, for each surface point, how consistent the corresponding video image information in each view is with a lighting model. We took four video views of a volunteer's face, and used an independent method to reconstruct a surface that was intrinsically registered to the four views. In addition, we extracted a skin surface from the volunteer's MR scan. The surfaces were misregistered from a gold standard pose and our algorithm was used to register both types of surfaces to the video images. For the reconstructed surface, the mean 3D error was 1.53 mm. For the MR surface, the standard deviation of the pose parameters after registration ranged from 0.12 to 0.70 mm and degrees. The algorithm is accurate, precise and robust.
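
    A photo-consistency score can be sketched under a deliberately simplified Lambertian assumption (constant appearance across views, rather than the full lighting model described above): sample the intensity each view observes at a surface point's projection, and penalize the per-point variance across views:

```python
import numpy as np

def photo_consistency(samples):
    """Photo-consistency of surface points observed in several views.

    `samples` has shape (n_points, n_views): the image intensity each
    view observes at a surface point's projection. Lower per-point
    variance across views means better consistency; the score is the
    negative mean variance, so that higher is better (0 is perfect)."""
    s = np.asarray(samples, float)
    return float(-s.var(axis=1).mean())
```

    During registration, such a score would be evaluated repeatedly while an optimizer adjusts the six pose parameters of the surface model.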

  15. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D, finite-difference, prestack, depth migration remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D, prestack, depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.

  16. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
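
    The Dice score used above to quantify segmentation agreement is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for identical non-empty masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))

print(dice([1, 1, 0, 0], [0, 1, 1, 0]))   # → 0.5
```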

  17. Real Time Gabor-Domain Optical Coherence Microscopy for 3D Imaging.

    PubMed

    Rolland, Jannick P; Canavesi, Cristina; Tankam, Patrice; Cogliati, Andrea; Lanis, Mara; Santhanam, Anand P

    2016-01-01

    Fast, robust, nondestructive 3D imaging is needed for the characterization of microscopic tissue structures across various clinical applications. A custom microelectromechanical system (MEMS)-based 2D scanner was developed to achieve, together with a multi-level GPU architecture, 55 kHz fast-axis A-scan acquisition in a Gabor-domain optical coherence microscopy (GD-OCM) custom instrument. GD-OCM yields high-definition micrometer-class volumetric images. A dynamic depth-of-focusing capability through a bio-inspired liquid lens-based microscope design, as in whales' eyes, was developed to enable the high definition instrument throughout a large imaging volume of 1 mm³. Developing this technology is key to enabling integration within the workflow of clinical environments. Imaging at an invariant resolution of 2 μm has been achieved throughout a volume of 1 × 1 × 0.6 mm³, acquired in less than 2 minutes. Volumetric scans of human skin in vivo and an excised human cornea are presented. PMID:27046601

  18. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
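
    The mean-subtraction step can be sketched as follows; the axis convention (axis 0 indexing the spatial planes of a spatially-low-pass subband) is an assumption for illustration:

```python
import numpy as np

def mean_subtract(subband):
    """Remove the mean of each spatial plane of a spatially-low-pass
    subband; the per-plane means are returned so a decoder can add
    them back after decompression."""
    sb = np.asarray(subband, float)
    means = sb.mean(axis=(1, 2), keepdims=True)
    return sb - means, means

sb = np.arange(24, dtype=float).reshape(2, 3, 4)
zero_mean, means = mean_subtract(sb)   # zero-mean planes, ready to encode
```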

  19. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three dimensional nature of these infiltrations given a stack of two dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue level intermixing for both wildtype and Rb - specimens. The registration process faces many challenges arising from the large image sizes, damages during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between both wildtype and Rb - specimens which are not obvious prior to registration.

  20. Development of 3D microwave imaging reflectometry in LHD (invited).

    PubMed

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO.

  1. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions, and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains in a 5 dB range for a wide selection of steering angles. The simulation results may represent a reference guide to the design of spiral sparse array probes for different application fields. PMID:26285181
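
    A Fermat-spiral element layout can be generated deterministically. The sketch below uses the golden-angle form with a simple power-law radial density control, which is only a placeholder for the Blackman tapering window used in the paper:

```python
import numpy as np

def fermat_spiral_layout(n, radius, p=0.5):
    """n element positions on a golden-angle Fermat spiral of a given
    radius. p = 0.5 (r ∝ sqrt(k)) gives uniform areal density; p > 0.5
    pushes elements outward, p < 0.5 concentrates them at the centre."""
    golden = np.pi * (3.0 - np.sqrt(5.0))       # golden angle, ~137.5°
    k = np.arange(n)
    r = radius * (k / (n - 1.0)) ** p
    th = k * golden
    return np.column_stack([r * np.cos(th), r * np.sin(th)])

xy = fermat_spiral_layout(256, 30.0)   # e.g. 256 elements, 30λ radius
```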

  2. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  3. Anatomical delineation of congenital heart disease using 3D magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Adams Bornemeier, Renee; Fellows, Kenneth E.; Fogel, Mark A.; Weinberg, Paul M.

    1994-05-01

    Anatomic delineation of the heart and great vessels is a necessity when managing children with congenital heart disease. Spatial orientation of the vessels and chambers in the heart and the heart itself may be quite abnormal. Though magnetic resonance imaging provides a noninvasive means for determining the anatomy, the intricate interrelationships between many structures are difficult to conceptualize from a 2-D format. Taking the 2-D images and using a volumetric analysis package allows for a 3-D replica of the heart to be created. This model can then be used to view the anatomy and spatial arrangement of the cardiac structures. This information may be utilized by the physicians to assist in the clinical management of these children.

  4. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, together with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue (mainly Denisyuk) colour holograms and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymers, such as the DuPont and Bayer materials, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering is highly dependent on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials, and new display light sources.

  5. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented under the IDL environment. Breast and glandular tissue rendering, slicing and animation were displayed.
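
    Sliding-window adaptive thresholding can be sketched in a few lines: each pixel is compared against the mean of its local window rather than a single global value. The window size and offset below are illustrative, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(img, window=15, offset=0.0):
    """Spatially adaptive segmentation: flag pixels brighter than the
    local mean (computed in a sliding window) plus an offset."""
    img = np.asarray(img, float)
    return img > uniform_filter(img, size=window) + offset

img = np.zeros((32, 32))
img[16, 16] = 10.0                       # one bright structure
mask = adaptive_threshold(img, window=15, offset=1.0)
print(mask.sum(), mask[16, 16])          # → 1 True
```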

  6. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (μ-) axial tomography is a challenging technique in microscopy which improves quantitative imaging, especially in cytogenetic applications, by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement in the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition based on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, the real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least-squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy: if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
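The core of such a feature-based alignment, matching object centroids between views and then solving for the transform in a least-squares sense, can be sketched as below. This sketch substitutes simple mutual nearest-neighbour matching for the paper's weighted bipartite graph, and uses the standard Kabsch/Procrustes SVD solution for the rigid transform; all function names are illustrative:

```python
import numpy as np

def match_and_align(ref_pts, tilt_pts):
    """Align a tilted centroid set to a reference set.

    Correspondences come from mutual nearest neighbours; the optimal
    rotation R and translation t mapping tilt_pts onto ref_pts are
    then computed in a least-squares sense via the Kabsch SVD.
    """
    # Pairwise distances and mutual nearest neighbours.
    d = np.linalg.norm(ref_pts[:, None, :] - tilt_pts[None, :, :], axis=2)
    r2t = d.argmin(axis=1)            # best tilt point for each ref point
    t2r = d.argmin(axis=0)            # best ref point for each tilt point
    pairs = [(i, j) for i, j in enumerate(r2t) if t2r[j] == i]
    A = ref_pts[[i for i, _ in pairs]]
    B = tilt_pts[[j for _, j in pairs]]
    # Least-squares rigid transform mapping B onto A (Kabsch).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((B - cb).T @ (A - ca))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # avoid an improper reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = ca - R @ cb
    return R, t, pairs
```

With many well-separated objects, residual misalignment averages out over the matched set, which is why the paper's alignment can beat the localization precision of any single object.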

  7. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well-known MCML (Monte Carlo Multi-Layer) code and present a tissue model that improves upon the current state of the art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
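The essential Monte Carlo machinery behind MCML-style codes (exponential free paths, partial weight absorption, scattering) can be illustrated with a heavily simplified sketch. This assumes a homogeneous semi-infinite medium and isotropic scattering instead of MCML's layered geometry and Henyey-Greenstein phase function; the function name and bin layout are inventions of this sketch:

```python
import numpy as np

def monte_carlo_depth_dose(mu_a, mu_s, n_photons=2000, seed=1):
    """Crude Monte Carlo estimate of absorbed energy vs depth (cm).

    Each photon takes exponential free paths with mu_t = mu_a + mu_s,
    deposits the fraction mu_a/mu_t of its weight at every interaction,
    and scatters isotropically until it escapes or its weight is spent.
    """
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    bins = np.zeros(50)                          # 0.1 cm depth bins
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])    # launched into tissue
        weight = 1.0
        while weight > 1e-4:
            step = -np.log(rng.random()) / mu_t  # exponential free path
            pos = pos + step * direction
            if pos[2] < 0:                       # escaped back out
                break
            deposited = weight * mu_a / mu_t     # absorbed fraction
            bins[min(int(pos[2] / 0.1), 49)] += deposited
            weight -= deposited
            # Isotropic new direction (MCML would sample Henyey-Greenstein).
            cos_t = 2 * rng.random() - 1
            phi = 2 * np.pi * rng.random()
            sin_t = np.sqrt(1 - cos_t**2)
            direction = np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t])
    return bins / n_photons
```

The paper's contribution sits on top of this skeleton: layered optical properties, curved surfaces, and explicit vessel-wall interfaces all change where steps end and which coefficients apply.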

  8. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  9. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects of monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. After etching, the plastics were treated with a 10-minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  10. Volumetric display using a rotating prism sheet as an optical image scanner.

    PubMed

    Maeda, Yuki; Miyazaki, Daisuke; Mukai, Takaaki

    2013-01-01

    We developed a volumetric display that uses a rotating prism sheet as an optical scanner. A cross-sectional image of a three-dimensional (3D) object is moved laterally by the rotating prism sheet, and a stack of these cross-sectional images constructs a 3D volume image that satisfies all requirements of stereoscopic vision. Because the mechanical load of the proposed scanning method is small, it is easy to enlarge the effective area of the scanner and its scanning area. We used a concave mirror to collimate the rays emitted from each point, reducing the aberration caused at the prism sheet. A displayed 3D image had a size of 7 cm × 5 cm × 7 cm and a resolution of 1024 × 768 × 200 voxels.

  11. Semiautomatic segmentation of liver metastases on volumetric CT images

    SciTech Connect

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng

    2015-11-15

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, the lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate for the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm yielded a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation
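The marker-controlled watershed at the heart of this method can be sketched as ordered flooding from the two markers. This is a generic textbook formulation, not the authors' implementation; marker labels and the flooding-on-intensity choice are assumptions of the sketch:

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Marker-controlled watershed by ordered flooding.

    `markers` is an int array: 0 = unlabelled, 1 = internal (lesion)
    marker, 2 = external (background) marker. Pixels are flooded in
    order of increasing intensity, so the two labels grow outward
    from their markers and meet along high-intensity ridges.
    """
    labels = markers.copy()
    heap = []
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            if markers[i, j] != 0:
                heapq.heappush(heap, (image[i, j], i, j))
    while heap:
        _, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0:
                labels[ni, nj] = labels[i, j]
                heapq.heappush(heap, (image[ni, nj], ni, nj))
    return labels
```

In practice the flooding is usually run on a gradient image so that the labels meet exactly at the lesion boundary; the paper's contribution is the automatic derivation of the two markers from a single seed ROI.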

  12. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission have made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field-of-view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower technologies. Upon construction, optimization, and implementation of several components, including a diffuser, band-pass filter, registration mount, and fluid filtration system, the dosimetry system provides high-quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold-standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood-field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%/3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of
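The gamma passing rate quoted above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D global gamma analysis, written from the standard definition rather than from this work's evaluation code, looks like this:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dd=0.03, dta=3.0, threshold=0.05):
    """1-D global gamma analysis (dose difference / distance to agreement).

    For every reference point above `threshold` of the maximum dose,
    gamma is the minimum over all evaluated points of
    sqrt((dose diff / (dd*Dmax))^2 + (distance / dta)^2);
    a point passes if gamma <= 1. Returns the passing fraction.
    """
    ref = np.asarray(ref, float)
    ev = np.asarray(ev, float)
    dmax = ref.max()
    x = np.arange(ref.size) * spacing            # positions in mm
    mask = ref >= threshold * dmax               # ignore low-dose region
    dist = (x[mask, None] - x[None, :]) / dta
    dose = (ref[mask, None] - ev[None, :]) / (dd * dmax)
    gamma = np.sqrt(dist**2 + dose**2).min(axis=1)
    return (gamma <= 1.0).mean()
```

A small spatial shift of an otherwise correct dose profile still passes (the distance term absorbs it), while a systematic dose error fails, which is exactly the behaviour the 3%/3 mm criterion is designed to capture.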

  13. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number from a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. We have utilized 25 volumetric images of rocks to propose two mathematical formulas that approximate the average and standard deviation of the coordination number in 3-D pore networks. The formulas are then applied to five independent test samples to evaluate their reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the pore space, with a coefficient of determination of about 0.85, which seems acceptable considering the variety of the studied samples.
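The correlations in this study are linear fits whose quality is judged by the coefficient of determination. A generic least-squares fit with R², of the form the paper's two formulas take (the actual coefficients are fitted to the 25 training images and are not reproduced here), can be written as:

```python
import numpy as np

def fit_and_r2(x, y):
    """Least-squares linear fit y ~ slope*x + intercept with R².

    R² = 1 - SS_res/SS_tot measures how much of the variance in the
    measured values is explained by the fitted correlation.
    """
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

An R² of about 0.85, as reported, means roughly 85% of the variance in the measured 3-D coordination numbers is explained by the 2-D network statistics.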

  14. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Barnowski, Ross; Haefner, Andrew; Mihailescu, Lucian; Vetter, Kai

    2015-11-01

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform composed of two 3D position-sensitive high-purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real time.
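The geometric core of Compton imaging is that each event constrains the source to a cone; overlapping many cones in a 3-D grid localizes it. A minimal list-mode back-projection, standing in for the paper's iterative Compton-kinematics algorithm (event formation and energy handling are omitted, and all names are illustrative), can be sketched as:

```python
import numpy as np

def backproject_cones(events, grid, tol=np.deg2rad(3)):
    """List-mode Compton back-projection onto 3-D voxel centres.

    Each event is (apex, axis, theta): the cone apex (first interaction
    position), the unit scatter axis, and the Compton scatter angle.
    Every voxel whose direction from the apex lies within `tol` of the
    cone half-angle gets one count; true source positions accumulate
    counts where many cones intersect.
    """
    image = np.zeros(len(grid))
    for apex, axis, theta in events:
        v = grid - apex
        v = v / np.linalg.norm(v, axis=1, keepdims=True)
        ang = np.arccos(np.clip(v @ axis, -1.0, 1.0))
        image += np.abs(ang - theta) < tol
    return image
```

In SDF, the SLAM-derived scene model restricts `grid` to occupied surfaces of the environment, which is one reason the reconstruction becomes tractable in real time.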

  15. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction, yielding a 34-54-fold speed-up compared with the C++ implementation. PMID:22392604
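The iterative CS reconstruction being accelerated here solves a sparsity-regularized least-squares problem. A 1-D stand-in for the paper's 3D radial problem, using plain iterative soft-thresholding (ISTA) with an undersampled Fourier operator (the regularization weight `lam` and the signal-domain sparsity assumption are choices of this sketch, not the paper's), looks like:

```python
import numpy as np

def ista_recover(y, mask, lam=0.05, n_iter=300):
    """Compressed-sensing recovery by iterative soft-thresholding.

    Solves min_x 0.5*||M F x - y||^2 + lam*||x||_1 where F is the
    unitary FFT and M the binary sampling mask: a gradient step on
    the data term followed by soft-thresholding (the prox of the
    l1 penalty) at each iteration.
    """
    x = np.zeros(mask.size, dtype=complex)
    for _ in range(n_iter):
        resid = mask * (np.fft.fft(x, norm="ortho") - y)
        x = x - np.fft.ifft(resid, norm="ortho")   # step size 1: ||MF|| <= 1
        mag = np.abs(x)
        x = np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0)
    return x
```

Every step is a pair of FFTs plus pointwise operations, which is precisely the structure that maps so well onto a GPU in the paper's implementation.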

  16. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald; Schön, Tobias; Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption, and risks for security personnel of a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of the 500 to 1000 rotational steps needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power steadily becomes cheaper, practical applications of these complex algorithms can be expected in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
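Why iterative methods tolerate so few projection angles can be seen in a toy algebraic reconstruction (ART/Kaczmarz) example. This is a generic textbook sketch on a tiny image with four ray directions, not the authors' specific algorithm; all names are illustrative:

```python
import numpy as np

def projection_matrix(n):
    """Ray-sum system matrix for an n x n image along four directions:
    rows, columns, diagonals, and anti-diagonals (a toy few-view CT)."""
    rows = []
    for i in range(n):                       # horizontal rays
        m = np.zeros((n, n)); m[i, :] = 1; rows.append(m.ravel())
    for j in range(n):                       # vertical rays
        m = np.zeros((n, n)); m[:, j] = 1; rows.append(m.ravel())
    for k in range(-n + 1, n):               # diagonal rays
        m = np.zeros((n, n))
        for i in range(n):
            if 0 <= i + k < n:
                m[i, i + k] = 1
        rows.append(m.ravel())
    for k in range(2 * n - 1):               # anti-diagonal rays
        m = np.zeros((n, n))
        for i in range(n):
            if 0 <= k - i < n:
                m[i, k - i] = 1
        rows.append(m.ravel())
    return np.array(rows)

def art_reconstruct(A, b, n_iter=2000, relax=1.0):
    """ART/Kaczmarz iterations: sequentially project the estimate onto
    each ray-sum hyperplane A[i] @ x = b[i]. With consistent data this
    converges to the solution even from very few projection angles."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

Four directions already determine this small image exactly; for container-scale CT the same principle trades projection count for iteration count, which is the computational cost the abstract mentions.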

  17. Tactile-optical 3D sensor applying image processing

    NASA Astrophysics Data System (ADS)

    Neuschaefer-Rube, Ulrich; Wissmann, Mark

    2009-01-01

    The tactile-optical probe (so-called fiber probe) is a well-known probe in micro-coordinate metrology. It consists of an optical fiber with a probing element at its end. This probing element is adjusted in the imaging plane of the optical system of an optical coordinate measuring machine (CMM). It can be illuminated through the fiber by a LED. The position of the probe is directly detected by image processing algorithms available in every modern optical CMM and not by deflections at the fixation of the probing shaft. Therefore, the probing shaft can be very thin and flexible. This facilitates the measurement with very small probing forces and the realization of very small probing elements (diameter: down to 10 μm). A limitation of this method is that at present the probe does not have full 3D measurement capability. At the Physikalisch-Technische Bundesanstalt (PTB), several arrangements and measurement principles for a full 3D tactile-optical probe have been implemented and tested successfully in cooperation with Werth-Messtechnik, Giessen, Germany. This contribution provides an overview of the results of these activities.

  18. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption, and risks for security personnel of a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of the 500 to 1000 rotational steps needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power steadily becomes cheaper, practical applications of these complex algorithms can be expected in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  19. Quantitative validation of 3D image registration techniques

    NASA Astrophysics Data System (ADS)

    Holton Tainter, Kerrie S.; Taneja, Udita; Robb, Richard A.

    1995-05-01

    Multimodality images obtained from different medical imaging systems such as magnetic resonance (MR), computed tomography (CT), ultrasound (US), positron emission tomography (PET), and single photon emission computed tomography (SPECT) provide largely complementary characteristic or diagnostic information. Therefore, it is an important research objective to `fuse' or combine this complementary data into a composite form which would provide synergistic information about the objects under examination. An important first step in the use of complementary fused images is 3D image registration, where multimodality images are brought into spatial alignment so that the point-to-point correspondence between image data sets is known. Current research in the field of multimodality image registration has resulted in the development and implementation of several different registration algorithms, each with its own set of requirements and parameters. Our research has focused on the development of a general paradigm for measuring, evaluating, and comparing the performance of different registration algorithms. Rather than evaluating the results of one algorithm under a specific set of conditions, we suggest a general approach to validation using simulation experiments, where the exact spatial relationship between data sets is known, along with phantom data, to characterize the behavior of an algorithm via a set of quantitative image measurements. This behavior may then be related to the algorithm's performance with real patient data, where the exact spatial relationship between multimodality images is unknown. Current results indicate that our approach is general enough to apply to several different registration algorithms. Our methods are useful for understanding the different sources of registration error and for comparing the results between different algorithms.

  20. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
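The model-based estimation described here, a PCA motion model fitted pre-beam and driven by fast partial observations, reduces to plain linear algebra. A minimal illustration on flattened DVFs (the function names, the component count, and the flattening convention are assumptions of this sketch):

```python
import numpy as np

def fit_motion_model(dvfs, n_comp=2):
    """PCA motion model from respiratory-correlated DVF snapshots.

    `dvfs` is (n_phases, n_dof); returns the mean DVF and the top
    principal components, as fitted in the pre-beam phase.
    """
    mean = dvfs.mean(axis=0)
    _, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, Vt[:n_comp]

def estimate_dvf(mean, comps, observed_idx, observed_vals):
    """Estimate a full 3-D DVF from a fast partial (2-D) observation.

    Solves least-squares for the component weights using only the
    observed degrees of freedom (standing in for the 2-D cine slice),
    then applies them to the full model: dvf = mean + w @ comps.
    """
    A = comps[:, observed_idx].T                 # (n_obs, n_comp)
    w, *_ = np.linalg.lstsq(A, observed_vals - mean[observed_idx],
                            rcond=None)
    return mean + w @ comps
```

Because only a handful of weights must be estimated per time point, a single fast 2-D slice suffices to drive the full-volume DVF, which is what gives the 476 ms temporal resolution.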

  2. Quantification of cerebral ventricle volume change of preterm neonates using 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chen, Yimin; Kishimoto, Jessica; Qiu, Wu; de Ribaupierre, Sandrine; Fenster, Aaron; Chiu, Bernard

    2015-03-01

    Intraventricular hemorrhage (IVH) is a major cause of brain injury in preterm neonates. Quantitative measurement of ventricular dilation or shrinkage is important for monitoring patients and in evaluation of treatment options. 3D ultrasound (US) has been used to monitor the ventricle volume as a biomarker for ventricular dilation. However, volumetric quantification does not provide information as to where dilation occurs. The location where dilation occurs may be related to specific neurological problems later in life. For example, posterior horn enlargement, with thinning of the corpus callosum and parietal white matter fibres, could be linked to the poor visuo-spatial abilities seen in hydrocephalic children. In this work, we report on the development and application of a method used to analyze local surface change of the ventricles of preterm neonates with IVH from 3D US images. The technique is evaluated using manual segmentations from 3D US images acquired in two imaging sessions. The surfaces from baseline and follow-up were registered and then matched on a point-by-point basis. The distance between each pair of corresponding points served as an estimate of local surface change of the brain ventricle at each vertex. The measurements of local surface change were then superimposed on the ventricle surface to produce a 3D local surface change map that provides information on the spatio-temporal dilation pattern of the brain ventricles following IVH. This tool can be used to monitor responses to different treatment options, and may provide important information for elucidating the deficiencies a patient will have later in life.
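The point-by-point surface comparison can be sketched with a simple nearest-point distance between the two registered surfaces. This is a simplification of the paper's correspondence matching (which matches vertices, not merely nearest points), with illustrative names:

```python
import numpy as np

def local_surface_change(base_pts, follow_pts):
    """Per-vertex local surface change between two registered surfaces.

    For each baseline vertex, the distance to the nearest follow-up
    vertex approximates the local dilation or shrinkage at that point.
    Both inputs are (N, 3) arrays of surface points in a common frame.
    """
    d = np.linalg.norm(base_pts[:, None, :] - follow_pts[None, :, :],
                       axis=2)
    return d.min(axis=1)
```

Painting these per-vertex distances back onto the baseline surface gives exactly the kind of local change map the abstract describes, as opposed to a single volume number.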

  3. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  4. Size-based emphysema cluster analysis on low attenuation area in 3D volumetric CT: comparison with pulmonary functional test

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Kim, Namkug; Lee, Sang Min; Seo, Joon Beom; Oh, Sang Young

    2015-03-01

    The purpose of this study was to quantify the low attenuation area (LAA) of emphysematous regions according to cluster size in 3D volumetric CT data of chronic obstructive pulmonary disease (COPD) patients, and to compare these indices with pulmonary function tests (PFT). Sixty patients with COPD were scanned on multidetector-row CT scanners with at least 16 detector rows (Siemens Sensation 16 and 64) at 0.75 mm collimation. Based on the LAA masks, a length-scale analysis was performed to estimate the size of each emphysematous LAA as follows. First, Gaussian low-pass filters with kernel sizes from 30 mm down to 1 mm, at 1 mm intervals, were applied to the mask iteratively from large to small. Centroid voxels resistant to each filter were selected and dilated by the size of the kernel, which was regarded as the emphysema mask for that specific size. The slopes (D) of the area and number of size-based LAA on a semi-log plot were analyzed and compared with PFT. PFT parameters including DLco, FEV1, and FEV1/FVC were significantly (all p < 0.002) correlated with the slopes (r = -0.73, 0.54, 0.69, respectively) and with EI (r = -0.84, -0.60, -0.68, respectively). In addition, D contributed independently to the regression for FEV1 and FEV1/FVC (adjusted R² of the regression: EI only, 0.70 and 0.45; EI and D, 0.71 and 0.51, respectively). Through this size-based LAA segmentation and analysis, we evaluated the slopes D of the area, number, and distribution of size-based LAA, which may be independent predictors of PFT parameters.
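The idea of decomposing an LAA mask by structure size can be illustrated with a granulometry-style sketch. This uses repeated binary erosion as a simplified stand-in for the paper's iterative Gaussian-kernel filtering; the functions and bin convention are inventions of this sketch:

```python
import numpy as np

def binary_erode(mask):
    """4-neighbour binary erosion implemented with array shifts.

    A pixel survives only if it and its four neighbours are set;
    pixels outside the image are treated as background.
    """
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]
    out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    out[0, :] = out[-1, :] = out[:, 0] = out[:, -1] = False
    return out

def laa_size_spectrum(mask, max_size=10):
    """Size spectrum of a low-attenuation-area mask by erosion.

    Each erosion removes structures of increasing width, so the pixel
    count surviving k erosions indicates how much LAA belongs to
    clusters larger than that size class. The log-slope of such a
    spectrum is the kind of index (D) correlated with PFT here.
    """
    counts = []
    cur = mask.astype(bool)
    for _ in range(max_size):
        counts.append(int(cur.sum()))
        cur = binary_erode(cur)
    return np.array(counts)
```

A spectrum dominated by the first bins means many small clusters; a slowly decaying spectrum means large confluent emphysema, which is the distinction the size-based slope captures.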

  5. Active and interactive floating image display using holographic 3D images

    NASA Astrophysics Data System (ADS)

    Morii, Tsutomu; Sakamoto, Kunio

    2006-08-01

    We developed a prototype tabletop holographic display system consisting of an object recognition system and a spatial imaging system. In this paper, we describe the recognition system, which uses an RFID tag, and the 3D display system, which uses holographic technology. A 3D display is a useful technology for virtual reality, mixed reality, and augmented reality, and we have been researching spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images.1,2,3 The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The authors describe an interactive tabletop 3D display system in which the observer can view virtual images when the user puts a special object on the display table. The key technologies of this system are the object recognition system and the spatial imaging display.

  6. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  7. Compensation of log-compressed images for 3-D ultrasound.

    PubMed

    Sanches, João M; Marques, Jorge S

    2003-02-01

    In this study, a Bayesian approach was used for 3-D reconstruction in the presence of multiplicative noise and nonlinear compression of the ultrasound (US) data. Ultrasound images are often considered to be corrupted by multiplicative noise (speckle), and several statistical models have been developed to represent the US data. However, commercial US equipment performs a nonlinear image compression that reduces the dynamic range of the US signal for visualization purposes. This operation changes the distribution of the image pixels, preventing a straightforward application of the models. In this paper, the nonlinear compression is explicitly modeled and considered in the reconstruction process, where the speckle noise present in the radio-frequency (RF) US data is modeled with a Rayleigh distribution. The results obtained by considering the compression of the US data are then compared with those obtained assuming no compression, and it is shown that estimation using the nonlinear log-compression model outperforms the plain Rayleigh reconstruction method. The proposed algorithm was tested with synthetic and real data; including the compression operation in the image formation model improves the reconstruction, leading to sharper images with enhanced anatomical detail.
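    As a toy illustration of why modelling the compression matters, the sketch below (a simplified stand-in for the paper's Bayesian reconstruction; the compression law y = a·log(x) + b and its parameters are assumed known) inverts the compression before applying the Rayleigh maximum-likelihood estimate:

    ```python
    import numpy as np

    # Simulate Rayleigh-distributed envelope data and a known log compression.
    rng = np.random.default_rng(2)
    sigma_true, a, b = 4.0, 20.0, 10.0
    envelope = rng.rayleigh(scale=sigma_true, size=200_000)
    compressed = a * np.log(envelope) + b

    # Modelling the compression explicitly: invert the mapping, then take the
    # Rayleigh maximum-likelihood estimate sigma^2 = mean(x^2) / 2.
    decompressed = np.exp((compressed - b) / a)
    sigma_hat = np.sqrt(np.mean(decompressed**2) / 2.0)
    print(sigma_true, sigma_hat)

    # Ignoring the compression (treating the compressed data as Rayleigh)
    # gives a badly biased estimate, which is the point of the paper's model.
    sigma_naive = np.sqrt(np.mean(compressed**2) / 2.0)
    ```
    
    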

  8. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  9. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  10. Evaluation of 3D pre-treatment verification for volumetric modulated arc therapy plan in head region

    NASA Astrophysics Data System (ADS)

    Ruangchan, S.; Oonsiri, S.; Suriyapee, S.

    2016-03-01

    The development of pre-treatment QA tools has enabled three-dimensional (3D) dose verification using calculation software together with measured planar dose distributions. This research aims to evaluate the Sun Nuclear 3DVH software against thermoluminescent dosimeter (TLD) measurements. Two VMAT patient plans (2.5 arcs) using 6 MV photons, with different PTV locations, were transferred to Rando phantom images. The PTV of the first plan was located in a homogeneous area; that of the second plan was not. For the treatment planning process, the Rando phantom images were employed in optimization and calculation, with the PTV, brain stem, lens, and TLD positions contoured. Verification plans were created and transferred to the ArcCHECK for measurement, and the 3D dose was calculated using the 3DVH software. The percent dose differences in both the PTV and organs at risk (OAR) between TLD and the 3DVH software ranged from -2.09 to 3.87% for the first plan and from -1.39 to 6.88% for the second. The mean percent dose differences for the PTV were 1.62% and 3.93% for the first and second plans, respectively. In conclusion, the 3DVH software results show good agreement with TLD when the tumor is located in a homogeneous area.

  11. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy

    PubMed Central

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-01-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows for the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination. PMID:27231617
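    The reported rates are mutually consistent under the assumption that each volume is assembled from an integer number of EMCCD frames acquired during one axial sweep of the piezoelectric stage:

    ```python
    # Consistency check of the reported rates; the frames-per-volume figure is
    # an inference from the abstract, not a number it states.
    frame_rate = 400            # frames per second (EMCCD), as reported
    volume_rate = 30            # volumes per second, as reported
    frames_per_volume = frame_rate // volume_rate
    print(frames_per_volume)    # 13 axial planes per volume, under this assumption
    ```
    
    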

  12. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy.

    PubMed

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-05-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows for the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination. PMID:27231617

  14. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
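    The sparsity-regularized linearized inversion can be illustrated with ISTA (iterative soft thresholding). The sketch below uses a random Gaussian matrix as a stand-in for the paper's NUFFT-based forward operator; the problem sizes and regularization weight are assumptions:

    ```python
    import numpy as np

    # Solve min ||A x - y||^2 + lam * ||x||_1 with ISTA, for a sparse scene x.
    rng = np.random.default_rng(3)
    m, n = 80, 200
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in forward operator
    x_true = np.zeros(n)
    x_true[[10, 50, 120]] = [2.0, -1.5, 3.0]       # a few point scatterers
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = sigma_max(A)^2
    x = np.zeros(n)
    for _ in range(500):
        grad = A.T @ (A @ x - y)                   # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    support = np.flatnonzero(np.abs(x) > 0.5)
    print(support)
    ```

    The l1 penalty suppresses the small sidelobe-like coefficients while preserving the strong scatterers, which is the clutter-rejection effect the abstract describes.
    
    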

  15. 3D lung image retrieval using localized features

    NASA Astrophysics Data System (ADS)

    Depeursinge, Adrien; Zrimec, Tatjana; Busayarat, Sata; Müller, Henning

    2011-03-01

    The interpretation of high-resolution computed tomography (HRCT) images of the chest showing disorders of the lung tissue associated with interstitial lung diseases (ILDs) is time-consuming and requires experience. Whereas automatic detection and quantification of lung tissue patterns have shown promising results in several studies, their aid to clinicians is limited to the challenge of image interpretation, leaving radiologists with the problem of the final histological diagnosis. Complementary to lung tissue categorization, providing visually similar cases using content-based image retrieval (CBIR) is in line with the clinical workflow of radiologists. In a preliminary study, a Euclidean distance based on the volume percentages of five lung tissue types was used as the inter-case distance for CBIR. That study showed the feasibility of retrieving similar histological diagnoses of ILD based on visual content, although no localization information was used. However, it was not possible to retrieve and show similar images with pathology appearing at a particular lung position. In this work, a 3D localization system based on lung anatomy is used to localize the low-level features used for CBIR. Compared to our previous study, the introduction of localization features improves early precision for some histological diagnoses, especially when the region in which the lung tissue disorders appear is important.
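    The difference between the global and the localized inter-case distance can be sketched as follows; the five-tissue-type summary comes from the abstract, while the four-region subdivision and the example cases are made up:

    ```python
    import numpy as np

    # Each case is summarized by volume percentages of five lung tissue types.
    # The localized variant keeps one five-bin vector per (hypothetical)
    # anatomical lung region instead of a single global vector.
    tissue_types, regions = 5, 4
    rng = np.random.default_rng(4)

    case_a = rng.dirichlet(np.ones(tissue_types), size=regions)  # rows sum to 1
    case_b = case_a[::-1].copy()   # same tissue mix overall, in different regions

    global_dist = np.linalg.norm(case_a.mean(axis=0) - case_b.mean(axis=0))
    local_dist = np.linalg.norm(case_a.ravel() - case_b.ravel())
    print(global_dist, local_dist)
    ```

    The global distance is essentially zero because the overall percentages match, while the localized distance is not: only the latter can distinguish where in the lung the pathology appears, which is the paper's point.
    
    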

  16. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been devoted to 3D imaging methods and systems in order to meet the requirements of speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field-programmable gate array (FPGA) by combining a time-of-flight (TOF) camera with a binocular camera. Images captured by the two cameras share the same spatial resolution, so the depth maps taken by the TOF camera can be used to compute an initial disparity. With the depth map constraining the stereo matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, concurrent computing on the FPGA (Altera Cyclone IV series) allows a multi-core image matching system to be configured, so that stereo matching runs on an embedded system. The simulation results demonstrate that this approach speeds up stereo matching, increases matching reliability and stability, enables embedded computation, and expands the range of applications.
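    The TOF-constrained matching step can be sketched in one dimension: the TOF depth map supplies an initial disparity, and the sum-of-absolute-differences (SAD) search is confined to a narrow window around it instead of the full disparity range. All sizes below are illustrative assumptions:

    ```python
    import numpy as np

    # Toy 1-D stereo pair: the right scanline is the left one shifted by the
    # true disparity.
    rng = np.random.default_rng(5)
    w, true_disp, radius = 64, 7, 2
    left = rng.random(w)
    right = np.roll(left, -true_disp)

    tof_disp = 8                 # coarse initial disparity from the TOF camera
    patch = slice(20, 30)        # block being matched
    best, best_cost = None, np.inf
    for d in range(tof_disp - radius, tof_disp + radius + 1):  # narrow search
        cost = np.abs(left[patch] - np.roll(right, d)[patch]).sum()  # SAD
        if cost < best_cost:
            best, best_cost = d, cost
    print(best)                  # refined disparity
    ```

    Restricting the search to 2·radius + 1 candidates instead of the full range is what makes the per-pixel matching cheap enough for concurrent FPGA evaluation.
    
    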

  17. Brain surface maps from 3-D medical images

    NASA Astrophysics Data System (ADS)

    Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

    1991-06-01

    The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebral-spinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results by flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
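    The flattening step described above amounts to projecting each inner-contour point radially onto a cylinder whose axis is normal to the slices, then unrolling the cylinder into a plane. A minimal sketch (the radius, axis position, and contour points are illustrative assumptions):

    ```python
    import numpy as np

    R = 100.0                        # cylinder radius, large enough to cover the brain
    axis_xy = np.array([0.0, 0.0])   # cylinder axis position within each slice

    def flatten(points):
        """Map (x, y, slice-z) contour points to planar (R*theta, z) coordinates."""
        rel = points[:, :2] - axis_xy
        theta = np.arctan2(rel[:, 1], rel[:, 0])  # radial projection keeps only the angle
        return np.column_stack([R * theta, points[:, 2]])

    contour = np.array([[10.0, 0.0, 1.0], [0.0, 20.0, 1.0], [-15.0, 0.0, 2.0]])
    flat = flatten(contour)
    print(flat)
    ```

    The projection is unique in the sense the abstract describes: contour points at distinct angles from the axis map to distinct positions on the unrolled plane.
    
    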

  18. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as a part of the comprehensive approach of the European FP7 project TeraSCREEN, using multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution up to 5 mm is obtained. With the 16×16 MIMO system 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles where the angular resolution is obtained by a focusing elliptical mirror. With this system a high resolution 3D image can be generated with 4 frames per second, each containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system underlining the feasibility of the approach.
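    Two back-of-the-envelope checks against the reported figures: the 5 mm range resolution follows from c/(2B) for the 30 GHz bandwidth, and the 256 azimuth bins follow from the MIMO virtual-array principle, in which each Tx/Rx pair acts as one virtual element at the pair midpoint. The antenna spacings below are hypothetical; only the element counts come from the text:

    ```python
    import numpy as np

    # Range resolution c / (2B) for B = 30 GHz.
    range_res_m = 3e8 / (2 * 30e9)
    print(range_res_m)                       # 0.005 m, i.e. the reported 5 mm

    # Virtual array: with spacings chosen so no two midpoints coincide,
    # 16 Tx x 16 Rx antennas yield 256 distinct virtual positions.
    tx = np.arange(16) * 16.0                # hypothetical Tx positions
    rx = np.arange(16) * 1.0                 # hypothetical Rx positions
    virtual = np.unique((tx[:, None] + rx[None, :]).ravel() / 2.0)
    print(virtual.size)                      # 256 virtual elements -> 256 azimuth bins
    ```
    
    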

  19. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    PubMed Central

    Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2013-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  20. Analysis of direct clinical consequences of MLC positional errors in volumetric-modulated arc therapy using 3D dosimetry system.

    PubMed

    Nithiyanantham, Karthikeyan; Mani, Ganesh K; Subramani, Vikraman; Mueller, Lutz; Palaniappan, Karrthick K; Kataria, Tejinder

    2015-09-08

    In an advanced intensity-modulated external radiotherapy facility, the multileaf collimator (MLC) has a decisive role in beam modulation, creating multiple segments or dynamically varying field shapes to deliver a uniform dose distribution to the target with maximum sparing of normal tissue. The position of each MLC leaf is more critical for intensity-modulated delivery (step-and-shoot IMRT, dynamic IMRT, and VMAT) than for 3D CRT, where it defines only field boundaries. We analyzed the impact of MLC positional errors on the dose distribution for volumetric-modulated arc therapy using a 3D dosimetry system. A total of 15 VMAT cases, five each for brain, head and neck, and prostate, were retrospectively selected for the study. All plans were generated in the Monaco 3.0.0v TPS (Elekta Corporation, Atlanta, GA) and delivered using an Elekta Synergy linear accelerator. Systematic errors of +1, +0.5, +0.3, 0, -1, -0.5, and -0.3 mm were introduced in the MLC bank of the linear accelerator, and the impact on the dose distribution of the VMAT delivery was measured using the COMPASS 3D dosimetry system. All plans were created using single modulated arcs, and the dose calculation was performed using a Monte Carlo algorithm with a grid size of 3 mm. The clinical endpoints D95%, D50%, and D2% were taken for the evaluation of target doses, and Dmax, D20%, and D50% for critical organ doses. A significant dosimetric effect was found in many cases even with 0.5 mm MLC positional errors. The average change of dose D95% to the PTV for ±1 mm, ±0.5 mm, and ±0.3 mm was 5.15%, 2.58%, and 0.96% for brain cases; 7.19%, 3.67%, and 1.56% for head and neck cases; and 8.39%, 4.5%, and 1.86% for prostate cases, respectively. The average deviation of dose Dmax was 5.4%, 2.8%, and 0.83% for the brainstem in brain cases; 8.2%, 4.4%, and 1.9% for the spinal cord in head and neck cases; and 10.8%, 6.2%, and 2.1% for the rectum in prostate cases, respectively. The average changes in dose followed a linear
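    The roughly linear trend in the reported dose changes can be quantified with a least-squares fit. The numbers below are the brain-case D95% averages quoted in the abstract; the fitted slope is only an illustrative summary, not a figure from the paper:

    ```python
    import numpy as np

    # Average D95% change (percent) versus MLC error magnitude (mm), brain cases.
    err_mm = np.array([0.3, 0.5, 1.0])
    d95_change = np.array([0.96, 2.58, 5.15])
    slope, intercept = np.polyfit(err_mm, d95_change, 1)
    print(slope)   # roughly the % change in D95 per mm of MLC error
    ```
    
    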

  1. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  2. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  3. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured-light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition, and other tasks. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition, and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in several of its dimensions. First, we extract the geometric 3D coordinates of an object that is illuminated by a set of concentric circular patterns and reflected onto a 2D image plane. The relationship between the original and the deformed shape of the light patterns, due to the surface shape, provides sufficient 3D coordinate information. Second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns to be projected onto the object of interest. Akin to the Shannon-Nyquist sampling theorem, we derive the minimum number of circular patterns that sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g., the highest curvature) of an object is key to deriving the minimum sampling density. Third, the object, represented using the minimum number of patterns, has incomplete color information (i.e., color information is given a priori only along the curves), and an interpolation is carried out to complete the photometric reconstruction. The reconstruction is approximate because the minimum number of patterns may not reproduce the original object exactly, but the result shows no considerable information loss, and the quality of the approximate reconstruction is evaluated by performing recognition or classification. In object recognition, we use facial curves, which are the deformed circular curves (patterns) on a target object. We simply carry out comparison between the

  4. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  5. Near field 3D scene simulation for passive microwave imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wu, Ji

    2006-10-01

    Scene simulation is necessary work in near-field passive microwave remote sensing. A 3-D scene simulation model of microwave radiometric imaging based on the ray-tracing method is presented in this paper. The essential influencing factors and general requirements are considered in this model, such as rough-surface radiation; sky radiation, which acts as the dominant illuminator in outdoor settings; the polarization rotation of the radiation caused by multiple reflections; and the antenna point spread function, which determines the resolution of the model's final outputs. Using this model, we simulated a virtual scene and analyzed the resulting microwave radiometric phenomenology; finally, two real scenes, a building and an airstrip, were simulated to validate the model. The comparison between the simulations and field measurements indicates that this model is feasible in practice. Furthermore, we analyzed the signatures of the model outputs and uncovered some underlying phenomenology of microwave radiation that differs from that in the optical and infrared bands.

  6. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treating piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and the treatment has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections tripled that percentage. This novel technique exhibited an accurate needle-guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. The technique allows for electromagnetic instrument-tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429
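    The 3-point registration step can be sketched as a rigid (rotation plus translation) landmark fit; the snippet below uses the standard Kabsch algorithm, and all landmark coordinates are made up for illustration:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Rigid transform (Kabsch) mapping src landmarks onto dst; both (n, 3), n >= 3."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)     # cross-covariance SVD
        d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
        R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t

    # Hypothetical landmark pairs: points captured in the tracked-ultrasound
    # frame and their correspondences in the CT/MR frame.
    us_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 30.0, 0.0]])
    theta = np.deg2rad(25)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    ct_pts = us_pts @ R_true.T + np.array([5.0, -3.0, 12.0])

    R, t = rigid_register(us_pts, ct_pts)
    print(np.allclose(R @ us_pts.T + t[:, None], ct_pts.T))
    ```

    Once R and t are known, any tracked needle-tip position can be mapped into the CT frame for the real-time multi-planar display.
    
    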

  7. Multi Length Scale Imaging of Flocculated Estuarine Sediments; Insights into their Complex 3D Structure

    NASA Astrophysics Data System (ADS)

    Wheatland, Jonathan; Bushby, Andy; Droppo, Ian; Carr, Simon; Spencer, Kate

    2015-04-01

Suspended estuarine sediments form flocs that are compositionally complex, fragile and irregularly shaped. The fate and transport of suspended particulate matter (SPM) is determined by the size, shape, density, porosity and stability of these flocs, and prediction of SPM transport requires accurate measurements of these three-dimensional (3D) physical properties. However, the multi-scaled nature of flocs, in addition to their fragility, makes their characterisation in 3D problematic. Correlative microscopy is a strategy involving the spatial registration of information collected at different scales using several imaging modalities. Previously, conventional optical microscopy (COM) and transmission electron microscopy (TEM) have enabled 2-dimensional (2D) floc characterisation at the gross (> 1 µm) and sub-micron scales, respectively. Whilst this has proven insightful, there remains a critical spatial and dimensional gap preventing the accurate measurement of geometric properties and an understanding of how structures at different scales are related. Within the life sciences, volumetric imaging techniques such as 3D micro-computed tomography (3D µCT) and focused ion beam scanning electron microscopy [FIB-SEM (or FIB-tomography)] have been combined to characterise materials at the centimetre to micron scale. Combining these techniques with TEM enables an advanced correlative study, allowing material properties across multiple spatial and dimensional scales to be visualised. The aims of this study are: 1) to formulate an advanced correlative imaging strategy combining 3D µCT, FIB-tomography and TEM; 2) to acquire 3D datasets; 3) to produce a model allowing their co-visualisation; 4) to interpret 3D floc structure. To reduce the chance of structural alterations during analysis, samples were first 'fixed' in 2.5% glutaraldehyde/2% formaldehyde before being embedded in Durcupan resin. Intermediate steps were implemented to improve contrast and remove pore water, achieved by the

  8. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
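The two quality metrics named above can be sketched in a few lines. The following is a minimal, illustrative Python version: MSE as usually defined, plus a single-window (global) SSIM rather than the sliding-window MSSIM used in the study; images are taken as flat lists of gray levels, and the SSIM stabilizing constants are the standard defaults, not values from the paper.

```python
def mse(a, b):
    """Mean squared error between two equal-length images (flat gray-level lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def global_ssim(a, b, data_range=255.0):
    """Single-window SSIM over the whole image (MSSIM averages a windowed
    version of this; c1/c2 follow the standard SSIM defaults)."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / (n - 1)
    var_b = sum((y - mu_b) ** 2 for y in b) / (n - 1)
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / (n - 1)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

An identical pair scores MSE 0 and SSIM 1; a denoised volume is scored by how far it moves toward the reference image under each metric.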

  9. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
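The range-resolved, multiple-return detection performed per spectral band can be illustrated with a toy waveform thresholder. This is a sketch only: the sample spacing, threshold, and local-maximum criterion are assumptions for illustration, not the demonstrator's actual receiver logic; the range follows from the two-way travel time, R = c·t/2.

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def ranges_from_waveform(samples, dt, threshold):
    """Convert one digitized return waveform into a list of ranges (m).
    A return is any local maximum at or above the threshold; dt is the
    sample spacing in seconds."""
    ranges = []
    for i in range(1, len(samples) - 1):
        s = samples[i]
        if s >= threshold and s >= samples[i - 1] and s > samples[i + 1]:
            ranges.append(i * dt * C_LIGHT / 2.0)  # two-way time -> range
    return ranges
```

Run once per spectral band, this yields the independently range-resolved multiple returns the abstract describes (e.g. foliage and a partially obscured object behind it appearing as two returns in the same band).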

  10. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes are automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of the original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with the corresponding 2D projections. DVFs are optimized to minimize an objective function comprising the differences between DRRs and projections plus a regularity term. To further accelerate this 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. The complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grids or uniform tetrahedral meshes. PMID:27019849
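The optimization loop described above can be reduced to a toy 1D version: a displacement field warps a moving signal toward a target, the objective is the data mismatch plus a smoothness regularizer, and finite-difference gradient descent updates the per-node displacements. Everything here is a simplification for illustration (the paper optimizes 3D DVFs at tetrahedral mesh vertices against DRR/projection differences).

```python
def warp(signal, dvf):
    """Linearly interpolate signal at x + d(x); out-of-range samples clamp."""
    n = len(signal)
    out = []
    for i, d in enumerate(dvf):
        x = min(max(i + d, 0.0), n - 1.0)
        j = int(x)
        t = x - j
        out.append(signal[j] * (1 - t) + signal[min(j + 1, n - 1)] * t)
    return out

def objective(dvf, moving, target, lam):
    """Data term (SSD of warped vs. target) plus smoothness regularizer."""
    w = warp(moving, dvf)
    data = sum((a - b) ** 2 for a, b in zip(w, target))
    reg = sum((dvf[i + 1] - dvf[i]) ** 2 for i in range(len(dvf) - 1))
    return data + lam * reg

def optimize_dvf(moving, target, lam=0.1, step=0.01, iters=100):
    """Finite-difference gradient descent on the per-node displacements."""
    dvf = [0.0] * len(moving)
    eps = 1e-4
    for _ in range(iters):
        grad = []
        for i in range(len(dvf)):
            dvf[i] += eps
            hi = objective(dvf, moving, target, lam)
            dvf[i] -= 2 * eps
            lo = objective(dvf, moving, target, lam)
            dvf[i] += eps
            grad.append((hi - lo) / (2 * eps))
        dvf = [d - step * g for d, g in zip(dvf, grad)]
    return dvf
```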

  11. Biomechanically driven registration of pre- to intra-operative 3D images for laparoscopic surgery.

    PubMed

    Oktay, Ozan; Zhang, Li; Mansi, Tommaso; Mountney, Peter; Mewes, Philip; Nicolau, Stéphane; Soler, Luc; Chefd'hotel, Christophe

    2013-01-01

    Minimally invasive laparoscopic surgery is widely used for the treatment of cancer and other diseases. During the procedure, gas insufflation is used to create space for laparoscopic tools and operation. Insufflation causes the organs and abdominal wall to deform significantly. Due to this large deformation, the benefit of surgical plans, which are typically based on pre-operative images, is limited for real time navigation. In some recent work, intra-operative images, such as cone-beam CT or interventional CT, are introduced to provide updated volumetric information after insufflation. Other works in this area have focused on simulation of gas insufflation and exploited only the pre-operative images to estimate deformation. This paper proposes a novel registration method for pre- and intra-operative 3D image fusion for laparoscopic surgery. In this approach, the deformation of pre-operative images is driven by a biomechanical model of the insufflation process. The proposed method was validated by five synthetic data sets generated from clinical images and three pairs of in vivo CT scans acquired from two pigs, before and after insufflation. The results show the proposed method achieved high accuracy for both the synthetic and real insufflation data. PMID:24579117

  12. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes are automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of the original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with the corresponding 2D projections. DVFs are optimized to minimize an objective function comprising the differences between DRRs and projections plus a regularity term. To further accelerate this 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. The complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grids or uniform tetrahedral meshes. PMID:27019849

  13. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. To date, however, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  14. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT, and the aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate high resolution 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water-equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved, with the full width at half maximum of microbeams measured on images with resolutions as fine as 0.09 μm/pixel. The profiles obtained demonstrated the variation of the peak-to-valley dose ratio for interspersed MRT microbeam arrays, and subtle variations in sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT-irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  15. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  16. 3D optical coherence tomography image registration for guiding cochlear implant insertion

    NASA Astrophysics Data System (ADS)

    Cheon, Gyeong-Woo; Jeong, Hyun-Woo; Chalasani, Preetham; Chien, Wade W.; Iordachita, Iulian; Taylor, Russell; Niparko, John; Kang, Jin U.

    2014-03-01

In cochlear implant surgery, an electrode array is inserted into the cochlear canal to restore hearing to a person who is profoundly deaf or significantly hearing impaired. One critical part of the procedure is the insertion of the electrode array, which looks like a thin wire, into the cochlear canal. Although X-ray or computed tomography (CT) could be used as a reference to evaluate the pathway of the whole electrode array, there is no way to depict the intra-cochlear canal and basal turn intra-operatively to help guide insertion of the electrode array. Optical coherence tomography (OCT) is a highly effective way of visualizing internal structures of the cochlea. Swept-source OCT (SSOCT) with a center wavelength of 1.3 µm and 2D galvanometer mirrors was used to achieve 7-mm-depth 3-D imaging. A graphics processing unit (GPU), OpenGL, C++ and C# were integrated for simultaneous real-time volumetric rendering. The 3D volume images taken by the OCT system were assembled and registered, which could be used to guide a cochlear implant. We performed a feasibility study using both dry and wet temporal bones, and the results are presented.

  17. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

Time-of-flight laser range finding, deep space communications and scanning video imaging are three applications requiring very low noise optical receivers to achieve detection of fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and includes the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. The preamplifier uses a capacitive transimpedance amplifier (CTIA) structure with two feedback capacitors offering switchable capacitance for passive/active dual-mode imaging. The main element of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to ROIC signal processing. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer amplifier is rail-to-rail. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 µA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.
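The figures above follow from the ideal CTIA relation V = I·t/C, with nonlinearity commonly quoted as the worst-case deviation from an endpoint-fit line as a fraction of full scale. A sketch (the 1 pF feedback capacitance below is an assumed value for illustration, not a figure from the record):

```python
def ctia_output(current_a, t_int_s, c_fb_f):
    """Ideal CTIA output swing: V = I * t / C."""
    return current_a * t_int_s / c_fb_f

def nonlinearity(xs, ys):
    """Worst deviation from the endpoint-fit line, as a fraction of full scale."""
    slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
    worst = max(abs(ys[0] + slope * (x - xs[0]) - y) for x, y in zip(xs, ys))
    return worst / (max(ys) - min(ys))
```

With an ideal (perfectly linear) integrator the metric is zero; a measured transfer curve with less than 1% deviation meets the spec reported above.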

  18. Deformable M-Reps for 3D Medical Image Segmentation.

    PubMed

    Pizer, Stephen M; Fletcher, P Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z; Fridman, Yonatan; Fritsch, Daniel S; Gash, Graham; Glotzer, John M; Jiroutek, Michael R; Lu, Conglin; Muller, Keith E; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L

    2003-11-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures - each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
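In 2D, the boundary positions implied by a single medial atom (position, width r, figural direction θ, object angle φ) can be computed directly as two spokes of length r at angles θ ± φ. This is an illustrative planar reduction of the 3D construction described above, not the paper's full m-rep machinery.

```python
import math

def implied_boundary(x, y, r, theta, phi):
    """Boundary points implied by a 2D medial atom: two spokes of length r
    (the atom's width) at angles theta +/- phi about the figural direction."""
    return [(x + r * math.cos(theta + s * phi),
             y + r * math.sin(theta + s * phi)) for s in (1, -1)]
```

Deforming an atom (moving x, scaling r, rotating θ, opening or closing φ) moves both implied boundary points coherently, which is what gives the representation its built-in spatial correspondence between deformation states.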

  19. Continuous table acquisition MRI for radiotherapy treatment planning: Distortion assessment with a new extended 3D volumetric phantom

    SciTech Connect

Walker, Amy; Metcalfe, Peter; Liney, Gary; Holloway, Lois; Dowling, Jason; Rivest-Henault, David

    2015-04-15

Purpose: Accurate geometry is required for radiotherapy treatment planning (RTP). When considering the use of magnetic resonance imaging (MRI) for RTP, geometric distortions observed in the acquired images should be considered. While scanner technology and vendor-supplied correction algorithms provide some correction, large distortions are still present in images, even for considerably smaller scan lengths than those typically acquired with CT in conventional RTP. This study investigates MRI acquisition with a moving table compared with static scans for potential geometric benefits for RTP. Methods: A full field of view (FOV) phantom (diameter 500 mm; length 513 mm) was developed for measuring geometric distortions in MR images over volumes pertinent to RTP. The phantom consisted of layers of refined plastic within which vitamin E capsules were inserted. The phantom was scanned on CT to provide the geometric gold standard and on MRI, with differences in capsule location determining the distortion. MRI images were acquired with two techniques. For the first method, standard static table acquisitions were considered; both 2D and 3D acquisition techniques were investigated. With the second technique, images were acquired with a moving table: the same sequence was acquired with a static table and then with table speeds of 1.1 mm/s and 2 mm/s. All of the MR images acquired were registered to the CT dataset using a deformable B-spline registration, with the resulting deformation fields providing the distortion information for each acquisition. Results: MR images acquired with the moving table enabled imaging of the whole phantom length, while images acquired with a static table covered only 50%–70% of the 513 mm phantom length. Maximum distortion values were reduced across a larger volume when imaging with a moving table. Increased table speed resulted in a larger contribution of distortion from gradient nonlinearities in the through
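The distortion measurement itself reduces to comparing matched capsule positions between the CT gold standard and the MR images; a minimal sketch (capsule coordinates are assumed already matched and in mm):

```python
import math

def distortion_stats(ct_points, mr_points):
    """Mean and maximum distortion (same units as the inputs, e.g. mm):
    Euclidean distance between matched CT and MR capsule centroids."""
    dists = [math.dist(p, q) for p, q in zip(ct_points, mr_points)]
    return sum(dists) / len(dists), max(dists)
```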

  20. 3D imaging of enzymes working in situ.

    PubMed

    Jamme, F; Bourquin, D; Tawil, G; Viksø-Nielsen, A; Buléon, A; Réfrégiers, M

    2014-06-01

    Today, development of slowly digestible food with positive health impact and production of biofuels is a matter of intense research. The latter is achieved via enzymatic hydrolysis of starch or biomass such as lignocellulose. Free label imaging, using UV autofluorescence, provides a great tool to follow one single enzyme when acting on a non-UV-fluorescent substrate. In this article, we report synchrotron DUV fluorescence in 3-dimensional imaging to visualize in situ the diffusion of enzymes on solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup has been specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. Thus, this tool is particularly effective for improving knowledge and understanding of enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open up the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

  1. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
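The idea of combining global and local adaptation in tone compression can be sketched on a 1D row of pixels. The gamma value, blend weight, and the Naka-Rushton-style local term below are illustrative assumptions, not the authors' model.

```python
def tone_map(pixels, gamma=2.2, k=0.7):
    """Blend a global gamma curve with a local-adaptation term on a 1D row
    of [0, 1] intensities; k weights global vs. local."""
    n = len(pixels)
    out = []
    for i, p in enumerate(pixels):
        lo, hi = max(0, i - 1), min(n, i + 2)
        local_mean = sum(pixels[lo:hi]) / (hi - lo)   # 3-sample neighbourhood
        g = p ** (1.0 / gamma)                        # global compression
        l = p / (p + local_mean + 1e-6)               # local adaptation
        out.append(k * g + (1 - k) * l)
    return out
```

Because the local term normalizes each pixel by its neighbourhood, dark regions are lifted without saturating bright ones, which is the clipping problem the abstract attributes to plain gain/offset or histogram methods.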

  2. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Steininger, P.; Neuner, M.; Weichenberger, H.; Sharp, G. C.; Winey, B.; Kametriser, G.; Sedlmayer, F.; Deutschmann, H.

    2012-07-01

    Image-guided alignment procedures in radiotherapy aim at minimizing discrepancies between the planned and the real patient setup. For that purpose, we developed a 2D/3D approach which rigidly registers a computed tomography (CT) with two x-rays by maximizing the agreement in pixel intensity between the x-rays and the corresponding reconstructed radiographs from the CT. Moreover, the algorithm selects regions of interest (masks) in the x-rays based on 3D segmentations from the pre-planning stage. For validation, orthogonal x-ray pairs from different viewing directions of 80 pelvic cone-beam CT (CBCT) raw data sets were used. The 2D/3D results were compared to corresponding standard 3D/3D CBCT-to-CT alignments. Outcome over 8400 2D/3D experiments showed that parametric errors in root mean square were <0.18° (rotations) and <0.73 mm (translations), respectively, using rank correlation as intensity metric. This corresponds to a mean target registration error, related to the voxels of the lesser pelvis, of <2 mm in 94.1% of the cases. From the results we conclude that 2D/3D registration based on sequentially acquired orthogonal x-rays of the pelvis is a viable alternative to CBCT-based approaches if rigid alignment on bony anatomy is sufficient, no volumetric intra-interventional data set is required and the expected error range fits the individual treatment prescription.
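Rank correlation as an intensity metric compares the rank orderings of pixel intensities between the x-ray and the reconstructed radiograph, which makes it robust to the monotonic intensity differences between the two modalities. A minimal (tie-naive) sketch:

```python
import math

def ranks(values):
    """Rank position of each value (ties broken by sort order)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = math.sqrt(sum((x - ma) ** 2 for x in ra)) * \
          math.sqrt(sum((y - mb) ** 2 for y in rb))
    return num / den
```

The registration then searches the six rigid parameters for the pose whose reconstructed radiographs maximize this correlation against the masked x-rays.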

  3. Computed optical interferometric tomography for high-speed volumetric cellular imaging

    PubMed Central

    Liu, Yuan-Zhi; Shemonski, Nathan D.; Adie, Steven G.; Ahmad, Adeel; Bower, Andrew J.; Carney, P. Scott; Boppart, Stephen A.

    2014-01-01

    Three-dimensional high-resolution imaging methods are important for cellular-level research. Optical coherence microscopy (OCM) is a low-coherence-based interferometry technology for cellular imaging with both high axial and lateral resolution. Using a high-numerical-aperture objective, OCM normally has a shallow depth of field and requires scanning the focus through the entire region of interest to perform volumetric imaging. With a higher-numerical-aperture objective, the image quality of OCM is affected by and more sensitive to aberrations. Interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO) are computed imaging techniques that overcome the depth-of-field limitation and the effect of optical aberrations in optical coherence tomography (OCT), respectively. In this work we combine OCM with ISAM and CAO to achieve high-speed volumetric cellular imaging. Experimental imaging results of ex vivo human breast tissue, ex vivo mouse brain tissue, in vitro fibroblast cells in 3D scaffolds, and in vivo human skin demonstrate the significant potential of this technique for high-speed volumetric cellular imaging. PMID:25401012

  4. Development and Calibration of New 3-D Vector VSP Imaging Technology: Vinton Salt Dome, LA

    SciTech Connect

    Kurt J. Marfurt; Hua-Wei Zhou; E. Charlotte Sullivan

    2004-09-01

    dense well control available at Vinton Dome. To more accurately estimate velocities, we developed a 3-D turning wave tomography algorithm adapted to the VSP geometry employed at Vinton Dome. We were able to determine that there is about 10% anisotropy at Vinton Dome, with the axis of transverse isotropy perpendicular to the geologic formations deformed by the diapirism. At the time of this final report, we have not yet integrated traveltimes from the surface data into the tomographic inversion to better constrain the velocity model, nor developed an anisotropic migration algorithm to image the 3-D 3-C VSP (objectives well beyond the original scope of the project). As a secondary objective, we developed a suite of new 3-D volumetric attribute algorithms and image enhancement algorithms. We estimate volumetric dip and azimuth using a multiwindow Kuwahara approach that avoids smoothing amplitude and dip across faults.
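A Kuwahara-style step — output the mean of the least-variance subwindow around each sample — is what makes such attribute estimates edge-preserving, since averaging never crosses a discontinuity. This is a minimal 2x2-quadrant sketch; the multiwindow variant used for dip/azimuth volumes in the report is more elaborate.

```python
def kuwahara_pixel(img, i, j):
    """Edge-preserving value at (i, j): mean of the 2x2 quadrant
    (NW/NE/SW/SE, each containing (i, j)) with the smallest variance.
    Requires 1 <= i, j <= len-2."""
    best = None
    for si in (-1, 1):
        for sj in (-1, 1):
            vals = [img[i + a * si][j + b * sj] for a in (0, 1) for b in (0, 1)]
            m = sum(vals) / 4.0
            v = sum((x - m) ** 2 for x in vals) / 4.0
            if best is None or v < best[0]:
                best = (v, m)
    return best[1]
```

At a step edge the filter averages only within the uniform side, so the discontinuity (here standing in for a fault) is not smeared the way a plain mean filter would smear it.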

  5. Hybrid Multiphoton Volumetric Functional Imaging of Large Scale Bioengineered Neuronal Networks

    PubMed Central

    Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-01-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bio-engineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes/sec of structures with mm-scale dimensions containing a network of over 1000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances. PMID:24898000

  6. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper, we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is combining the information from the 3D and 4D MRA image sequences. Initially, in the 3D MRA dataset the vessel system is segmented and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized, color coded, dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and supports a better understanding during the visual evaluation of cerebral vascular diseases.
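
    The curve-fitting step can be illustrated with a gamma-variate bolus model, a common choice for contrast-agent first passage; this is a hedged sketch on synthetic data, not the authors' patient-individual reference-curve method, and the model parameters are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, t0, a, alpha, beta):
        # Gamma-variate bolus model: zero before the arrival time t0.
        dt = np.clip(t - t0, 0.0, None)
        return a * dt**alpha * np.exp(-dt / beta)

    t = np.linspace(0.0, 20.0, 200)
    rng = np.random.default_rng(0)
    signal = gamma_variate(t, 4.0, 1.0, 2.0, 1.5)         # "measured" intensity curve
    signal = signal + 0.02 * rng.standard_normal(t.size)  # acquisition noise

    # Fit the model; the bolus arrival time is the parameter t0.
    popt, _ = curve_fit(gamma_variate, t, signal,
                        p0=(2.0, 1.0, 1.5, 1.0),
                        bounds=([0, 0, 0.5, 0.1], [10, 10, 5, 5]))
    bolus_arrival_time = popt[0]
    ```

    Repeating such a fit voxelwise yields the arrival-time map that is then transferred to the vessel surface model.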

  7. Accurate and automated image segmentation of 3D optical coherence tomography data suffering from low signal-to-noise levels.

    PubMed

    Su, Rong; Ekberg, Peter; Leitner, Michael; Mattsson, Lars

    2014-12-01

    Optical coherence tomography (OCT) has proven to be a useful tool for investigating internal structures in ceramic tapes, and the technique is expected to be important for roll-to-roll manufacturing. However, because of high scattering in ceramic materials, noise and speckles deteriorate the image quality, which makes automated quantitative measurements of internal interfaces difficult. To overcome this difficulty we present in this paper an innovative image analysis approach based on volumetric OCT data. The engine in the analysis is a 3D image processing and analysis algorithm. It is dedicated to boundary segmentation and dimensional measurement in volumetric OCT images, and offers high accuracy, efficiency, robustness, subpixel resolution, and a fully automated operation. The method relies on the correlation property of a physical interface and effectively eliminates pixels caused by noise and speckles. The remaining pixels being stored are the ones confirmed to be related to the target interfaces. Segmentation of tilted and curved internal interfaces separated by ∼10  μm in the Z direction is demonstrated. The algorithm also extracts full-field top-view intensity maps of the target interfaces for high-accuracy measurements in the X and Y directions. The methodology developed here may also be adopted in other similar 3D imaging and measurement technologies, e.g., ultrasound imaging, and for various materials. PMID:25606743

  8. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensates for the distortion in the elemental images in an EHR projection-type integral 3-D system. With this processor, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.

  9. 3D image analysis of a volcanic deposit

    NASA Astrophysics Data System (ADS)

    de Witte, Y.; Vlassenbroeck, J.; Vandeputte, K.; Dewanckele, J.; Cnudde, V.; van Hoorebeke, L.; Ernst, G.; Jacobs, P.

    2009-04-01

    During the last decades, X-ray micro CT has become a well established technique for non-destructive testing in a wide variety of research fields. Using a series of X-ray transmission images of the sample at different projection angles, a stack of 2D cross-sections is reconstructed, resulting in a 3D volume representing the X-ray attenuation coefficients of the sample. Since the attenuation coefficient of a material depends on its density and atomic number, this volume provides valuable information about the internal structure and composition of the sample. Although much qualitative information can be derived directly from this 3D volume, researchers usually require more quantitative results to be able to provide a full characterization of the sample under investigation. This type of information needs to be retrieved using specialized image processing software. For most samples, it is imperative that this processing is performed on the 3D volume as a whole, since a sequence of 2D cross sections usually forms an inadequate approximation of the actual structure. The complete processing of a volume consists of three sequential steps. First, the volume is segmented into a set of objects. What these objects represent depends on what property of the sample needs to be analysed. The objects can be for instance concavities, dense inclusions or the matrix of the sample. When dealing with noisy data, it might be necessary to filter the data before applying the segmentation. The second step is the separation of connected objects into a set of smaller objects. This is necessary when objects appear to be connected because of the limited resolution and contrast of the scan. Separation can also be useful when the sample contains a network structure and one wants to study the individual cells of the network. The third and last step consists of the actual analysis of the various objects to derive the different parameters of interest. While some parameters require extensive
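
    The three-step pipeline described above (segmentation, separation of connected objects, per-object analysis) might be sketched as follows on a synthetic volume; the thresholds and the scipy.ndimage toolchain are illustrative assumptions, not the software used in the study:

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic 3D volume: two dense spherical inclusions in a uniform matrix.
    z, y, x = np.mgrid[0:40, 0:40, 0:40]
    vol = np.zeros((40, 40, 40))
    vol[(z - 12)**2 + (y - 20)**2 + (x - 20)**2 < 36] = 1.0   # inclusion 1
    vol[(z - 28)**2 + (y - 20)**2 + (x - 20)**2 < 36] = 1.0   # inclusion 2

    # Optional: smooth noisy data before segmenting.
    vol = ndimage.gaussian_filter(vol, sigma=0.5)

    # Step 1: segmentation by global threshold.
    objects = vol > 0.5

    # Step 2: separation into individual objects by connected-component
    # labelling (a watershed on the distance transform would additionally
    # split objects merged by limited resolution or contrast).
    labels, n = ndimage.label(objects)

    # Step 3: analysis -- per-object volume (voxel count) and centroid.
    volumes = ndimage.sum(objects, labels, index=range(1, n + 1))
    centroids = ndimage.center_of_mass(objects, labels, range(1, n + 1))
    ```

    The whole pipeline runs on the 3D volume directly, not slice by slice, in line with the point made in the abstract.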

  10. Optical Microangiography: A Label Free 3D Imaging Technology to Visualize and Quantify Blood Circulations within Tissue Beds in vivo

    PubMed Central

    Wang, Ruikang K

    2009-01-01

    Optical microangiography (OMAG) is a recently developed volumetric imaging technique that is capable of producing 3D images of dynamic blood perfusion within microcirculatory tissue beds in vivo. The imaging contrast of OMAG image is based on the intrinsic optical scattering signals backscattered by the moving blood cells in patent blood vessels, thus it is a label free imaging technique. In this paper, I will first discuss its recent developments that use a constant modulation frequency introduced in the spectral interferograms to achieve the blood perfusion imaging. I will then introduce its latest development that utilizes the inherent blood flow to modulate the spectral interferograms to realize the blood perfusion imaging. Finally, examples of using OMAG to delineate the dynamic blood perfusion, down to capillary level resolution, within living tissues are given, including cortical blood perfusion in the brain of small animals and blood flow within human retina and choroids. PMID:20657761

  11. Optical Microangiography: A Label Free 3D Imaging Technology to Visualize and Quantify Blood Circulations within Tissue Beds in vivo.

    PubMed

    Wang, Ruikang K

    2010-05-01

    Optical microangiography (OMAG) is a recently developed volumetric imaging technique that is capable of producing 3D images of dynamic blood perfusion within microcirculatory tissue beds in vivo. The imaging contrast of OMAG image is based on the intrinsic optical scattering signals backscattered by the moving blood cells in patent blood vessels, thus it is a label free imaging technique. In this paper, I will first discuss its recent developments that use a constant modulation frequency introduced in the spectral interferograms to achieve the blood perfusion imaging. I will then introduce its latest development that utilizes the inherent blood flow to modulate the spectral interferograms to realize the blood perfusion imaging. Finally, examples of using OMAG to delineate the dynamic blood perfusion, down to capillary level resolution, within living tissues are given, including cortical blood perfusion in the brain of small animals and blood flow within human retina and choroids.

  12. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi(2) nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively. PMID:19269094

  13. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
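
    As a rough illustration of a relative-entropy index (a sketch of the general idea only, not the exact formulation of Bird et al., 2006): partition the binary volume into blocks and compare the distribution of pore voxels across blocks with a uniform reference distribution:

    ```python
    import numpy as np

    def relative_entropy(p, q):
        # Kullback-Leibler divergence D(p || q), skipping empty bins of p.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    # Binary 3D image: 1 = pore, 0 = solid (synthetic, spatially uniform).
    rng = np.random.default_rng(1)
    img = (rng.random((32, 32, 32)) < 0.3).astype(int)

    # Partition into 4x4x4 blocks of 8^3 voxels; p_i = share of all pore
    # voxels falling in block i.
    blocks = img.reshape(4, 8, 4, 8, 4, 8).sum(axis=(1, 3, 5)).ravel()
    p = blocks / blocks.sum()
    q = np.full(p.size, 1.0 / p.size)   # uniform reference distribution

    H = relative_entropy(p, q)   # near 0 for a spatially uniform pore space
    ```

    Recomputing H over a range of block sizes gives the scale dependence of the index mentioned in the abstract; strongly clustered pore space yields larger values.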

  14. In vivo real-time volumetric synthetic aperture ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Bouzari, Hamed; Rasmussen, Morten F.; Brandt, Andreas H.; Stuart, Matthias B.; Nikolov, Svetoslav; Jensen, Jørgen A.

    2015-03-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological. This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° × 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 × 32 element 2-D phased array transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak temporal-average intensity are 0.83 and 377.5 mW/cm2 for parallel beam-forming (PB), and 0.48 and 329.5 mW/cm2 for SA. A human kidney was volumetrically imaged with the SA and PB techniques simultaneously. Two radiologists were consulted for the evaluation of volumetric SA by means of a questionnaire on the level of detail perceivable in the beam-formed images, comparing SA against PB on the in vivo data. The feedback from the domain experts indicates that volumetric SA images internal body structures with a better contrast resolution than PB at all positions in the entire imaged volume. Furthermore, the autocovariance of a homogeneous area in the in vivo SA data had a 23.5% smaller width at half of its maximum value compared to PB.
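
    For context, the Mechanical Index is defined as the derated peak negative pressure in MPa divided by the square root of the centre frequency in MHz (FDA limit 1.9); the reported MI values thus imply the following peak pressures (illustrative arithmetic only):

    ```python
    import math

    def mechanical_index(pnp_mpa, f_mhz):
        # Mechanical Index: derated peak negative pressure (MPa) divided by
        # the square root of the centre frequency (MHz).
        return pnp_mpa / math.sqrt(f_mhz)

    # Back out the peak negative pressures implied by the reported MI values
    # at the 3.5 MHz centre frequency.
    pnp_pb = 0.83 * math.sqrt(3.5)   # parallel beam-forming, about 1.55 MPa
    pnp_sa = 0.48 * math.sqrt(3.5)   # synthetic aperture, about 0.90 MPa
    ```

    Both values sit comfortably below the FDA ceiling, consistent with the compliance statement in the abstract.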

  15. 3D Imaging of Transition Metals in the Zebrafish Embryo by X-ray Fluorescence Microtomography

    PubMed Central

    Bourassa, Daisy; Gleber, Sophie-Charlotte; Vogt, Stefan; Yi, Hong; Will, Fabian; Richter, Heiko; Shin, Chong Hyun; Fahrni, Christoph J.

    2014-01-01

    Synchrotron X-ray fluorescence (SXRF) microtomography has emerged as a powerful technique for the 3D visualization of the elemental distribution in biological samples. The mechanical stability, both of the instrument and the specimen, is paramount when acquiring tomographic projection series. By combining the progressive lowering of temperature method (PLT) with femtosecond laser sectioning, we were able to embed, excise, and preserve a zebrafish embryo at 24 hours post fertilization in an X-ray compatible, transparent resin for tomographic elemental imaging. Based on a data set comprised of 60 projections, acquired with a step size of 2 μm during 100 hours of beam time, we reconstructed the 3D distribution of zinc, iron, and copper using the iterative maximum likelihood expectation maximization (MLEM) reconstruction algorithm. The volumetric elemental maps, which entail over 124 million individual voxels for each transition metal, revealed distinct elemental distributions that could be correlated with characteristic anatomical features at this stage of embryonic development. PMID:24992831
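
    The MLEM reconstruction mentioned above iterates the standard multiplicative update x ← x · Aᵀ(p / Ax) / Aᵀ1; a toy sketch on a random, noise-free system (the matrix and its dimensions are illustrative, not the actual XRF projection geometry):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((40, 10))           # hypothetical system (projection) matrix
    x_true = rng.random(10) + 0.1      # nonnegative "elemental" image
    p = A @ x_true                     # noise-free projection data

    x = np.ones(10)                    # MLEM needs a strictly positive start
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
    for _ in range(500):
        x *= (A.T @ (p / (A @ x))) / sens   # multiplicative MLEM update

    residual = np.linalg.norm(A @ x - p) / np.linalg.norm(p)
    ```

    The multiplicative form keeps every voxel nonnegative automatically, one reason MLEM is popular for emission-type data such as fluorescence counts.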

  16. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at vast user-time expense. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The results show segmentations which identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients, and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
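
    The volume-overlap measures cited above are straightforward to compute; a minimal sketch on synthetic 3D masks:

    ```python
    import numpy as np

    def dice(a, b):
        # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def jaccard(a, b):
        # Jaccard index: |A ∩ B| / |A ∪ B|
        a, b = a.astype(bool), b.astype(bool)
        return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

    # Automated segmentation vs. hand-segmented gold standard (synthetic):
    # two 10x10x10 cubes shifted by one voxel along z.
    auto = np.zeros((20, 20, 20), bool); auto[5:15, 5:15, 5:15] = True
    gold = np.zeros((20, 20, 20), bool); gold[6:16, 5:15, 5:15] = True
    d = dice(auto, gold)   # 0.9: 900 shared voxels, 1000 voxels in each mask
    ```

    A Dice value of 0.9, as in this toy example, corresponds to the 90% average reported in the abstract.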

  17. Deformable M-Reps for 3D Medical Image Segmentation

    PubMed Central

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

    2013-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures – each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported. 

  18. [Initial research of one-beam pumping up-conversion 3D volumetric display based on Er:ZBLAN glass].

    PubMed

    Chen, Xiao-bo; Li, Mei-xian; Wen, Ou; Zhang, Fu-chu; Song, Zeng-fu

    2003-06-01

    This paper investigates one-beam pumping up-conversion three-dimensional volumetric display based on an Er:ZBLAN fluoride glass. The length of the luminescent spot produced by one-beam up-conversion pumping was studied with a 966 nm semiconductor laser, and the up-conversion luminescence spectrum was also obtained. It was found that the performance of one-beam pumping three-dimensional volumetric display can be improved significantly by 1.52 μm LD laser multi-photon up-conversion; this finding has not been reported previously.

  19. Clinical applications of 2D and 3D CT imaging of the airways--a review.

    PubMed

    Salvolini, L; Bichi Secchi, E; Costarelli, L; De Nicola, M

    2000-04-01

    Hardware and software evolution has broadened the possibilities of 2D and 3D reformatting of spiral CT and MR data sets. In the study of the thorax, the intrinsic benefits of volumetric CT scanning and the better quality of reconstructed images offer us the possibility to apply additional rendering techniques in everyday clinical practice. Considering the large number and redundancy of possible post-processing imaging techniques that can be applied to raw CT section data, it is necessary to define precisely a limited set of clinical applications for each of them, by careful evaluation of their benefits and possible pitfalls in each clinical setting. In the diagnostic evaluation of pathological processes affecting the airways, a huge number of thin sections is necessary for detailed appraisal and has to be evaluated, and the information must then be transferred to referring clinicians. Additional rendering can make image evaluation and data transfer easier, faster, and more effective. In the study of the central airways, additional rendering can be of interest for precise evaluation of the length, morphology, and degree of stenoses. It may help in depicting exactly the locoregional extent of central tumours by better display of their relations with bronchovascular interfaces and can increase CT/bronchoscopy synergy. It may allow closer radiotherapy planning and better depiction of air collections and, finally, it could ease panoramic evaluation of the results of dynamic or functional studies, which are made possible by the increased speed of spiral scanning.
When applied to the evaluation of peripheral airways, as a completion to conventional HRCT scans, High-Resolution Volumetric CT, by projection slabs applied to target areas of interest, can better depict the profusion and extension of affected bronchial segments in bronchiectasis, influence the choice of different approaches for tissue sampling by better evaluation of the relations of lung nodules with the airways, or help

  20. Statistical Inverse Ray Tracing for Image-Based 3D Modeling.

    PubMed

    Liu, Shubao; Cooper, David B

    2014-10-01

    This paper proposes a new formulation and solution to image-based 3D modeling (aka "multi-view stereo") based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately by optimizing a physically sound image generation model based on volumetric ray tracing. Combined with geometric priors, this model is cast into a Bayesian formulation known as a Markov random field (MRF) model. This MRF model differs from typical MRFs used in image analysis in that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with this large clique size, an algorithm with linear computational complexity is developed by exploiting the recursive chain structure of the ray clique through dynamic programming. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets.
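
    The linear-time treatment of the ray clique can be illustrated by the classic first-hit recursion along a single ray, assuming independent per-voxel occupancies (a deliberate simplification of the paper's MRF inference): the probability that voxel i is the first occupied voxel is its occupancy times the probability that all voxels in front of it are empty, which a running product computes in O(n) instead of enumerating 2^n joint states.

    ```python
    import numpy as np

    def first_hit_probabilities(occ):
        # occ[i]: occupancy probability of the i-th voxel along a ray,
        # ordered from the camera outward. Returns P(voxel i is the first
        # occupied voxel) via a running transmittance product.
        occ = np.asarray(occ, dtype=float)
        transmittance = np.concatenate(([1.0], np.cumprod(1.0 - occ)[:-1]))
        return occ * transmittance

    hit = first_hit_probabilities([0.5, 0.5, 1.0])
    # hit = [0.5, 0.25, 0.25]; any remaining mass is the ray escaping unoccluded
    ```

    The same chain structure underlies the dynamic-programming pass over the much larger ray cliques in the paper.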

  1. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  2. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  3. Volumetric depth peeling for medical image display

    NASA Astrophysics Data System (ADS)

    Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.

    2006-01-01

    Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.
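
    One way to picture the core idea, occlusion decoupled from the rendering transfer function, is a per-ray march that skips everything up to and through the first occluding region before rendering begins; a simplified 1D sketch with a hypothetical threshold criterion, not the published implementation:

    ```python
    import numpy as np

    def depth_peel_start(intensities, occlusion_thresh):
        # Along one viewing ray, find the first sample past the occluding
        # material: march to the occluder, then through it. The occlusion
        # threshold is independent of the rendering transfer function.
        occluding = intensities >= occlusion_thresh
        i, n = 0, len(intensities)
        while i < n and not occluding[i]:   # march to the occluder
            i += 1
        while i < n and occluding[i]:       # march through the occluder
            i += 1
        return i                            # first visible sample index

    ray = np.array([10, 80, 90, 20, 70, 75])
    start = depth_peel_start(ray, 60)   # 3: rendering resumes behind the first surface
    ```

    Everything before index `start` is rendered invisible, which is how a viewpoint outside the kidney collecting system can still see inside it.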

  4. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  5. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  6. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  7. Dense 3D Point Cloud Generation from UAV Images by Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we aim to apply image matching for the generation of local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based matching technique and an image-space-based matching technique, and compared the performance of the two. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the case of image-space-based matching, we observed some blanks in the 3D point clouds. In the case of object-space-based matching, we observed more blunders than with image-space-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
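
    The object-space matching the abstract describes (candidate heights for a fixed horizontal position, scored by grey-level correlation) can be sketched in a few lines of numpy. The height-to-disparity mapping, image sizes, and function names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two same-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def object_space_match(left, right, col, heights, disparity_of, win=5):
    """Object-space matching in miniature: for a fixed horizontal position,
    each candidate height implies a disparity; score each candidate by
    grey-level correlation and keep the best-scoring height."""
    patch_l = left[:, col - win:col + win + 1]
    scores = [ncc(patch_l,
                  right[:, col - disparity_of(z) - win:
                           col - disparity_of(z) + win + 1])
              for z in heights]
    return heights[int(np.argmax(scores))]

rng = np.random.default_rng(0)
tex = rng.random((40, 120))                        # random texture as the scene
disparity_of = lambda z: int(round(z / 10.0))      # toy geometry: d = z / 10
left = tex
right = np.roll(tex, -disparity_of(30.0), axis=1)  # right view of a z = 30 plane
est = object_space_match(left, right, col=60,
                         heights=np.arange(0.0, 60.0, 10.0),
                         disparity_of=disparity_of)
print(est)   # 30.0: the correct candidate height maximizes the correlation
```

    The wrong candidate heights decorrelate the patches, so the true height wins the vote; real systems do this densely over the whole object space.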

  8. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was composed. This educational tool, which includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

  9. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which the 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost and small-sized 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized at a much lower cost than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  10. Analytical study of the effect of collimation on the performance of PET cameras in 3-D imaging

    SciTech Connect

    Maze, A.; Lecomte, R.

    1990-04-01

    This paper presents an analytical model developed to evaluate the 3-D imaging performance of cylindrical PET systems with the multi-slice and the volumetric configurations. Point source event rates for singles and for true and scattered coincidences were obtained by numerical integration of the equations for probability of incidence on the detectors. Data were subsequently combined to extract the sensitivity and background (scatters + accidentals) count rates for various source dimensions and activity densities. Event rates before detection were evaluated and an ideal tomograph geometry was assumed in order to investigate the fundamental properties inherent to the two tomograph designs. For small sources placed in a scattering cylinder, results indicate higher scatter and accidental fractions in the volumetric system.

  11. Sub-Nyquist Sampling and Fourier Domain Beamforming in Volumetric Ultrasound Imaging.

    PubMed

    Burshtein, Amir; Birk, Michael; Chernyakova, Tanya; Eilam, Alon; Kempinski, Arcady; Eldar, Yonina C

    2016-05-01

    A key step in ultrasound image formation is digital beamforming of signals sampled by several transducer elements placed upon an array. High-resolution digital beamforming introduces the demand for sampling rates significantly higher than the signals' Nyquist rate, which greatly increases the volume of data that must be transmitted from the system's front end. In 3-D ultrasound imaging, 2-D transducer arrays rather than 1-D arrays are used, and more scan lines are needed. This implies that the amount of sampled data is vastly increased with respect to 2-D imaging. In this work, we show that a considerable reduction in data rate can be achieved by applying the ideas of Xampling and frequency domain beamforming (FDBF), leading to a sub-Nyquist sampling rate, which uses only a portion of the bandwidth of the ultrasound signals to reconstruct the image. We extend previous work on FDBF for 2-D ultrasound imaging to accommodate the geometry imposed by volumetric scanning and a 2-D grid of transducer elements. High image quality from low-rate samples is demonstrated by simulation of a phantom image composed of several small reflectors. Our technique is then applied to raw data of a heart ventricle phantom obtained by a commercial 3-D ultrasound system. We show that by performing 3-D beamforming in the frequency domain, sub-Nyquist sampling and low processing rate are achievable, while maintaining adequate image quality.
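
    The core identity behind frequency-domain beamforming is that a time-domain delay is exactly a linear phase ramp in frequency, so delay-and-sum can be carried out on spectra instead of resampled traces. The numpy sketch below demonstrates only that identity with made-up signal parameters; it omits the sub-Nyquist (Xampling) stage, which would retain just a band of the frequency bins:

```python
import numpy as np

fs, n = 50e6, 1024                              # sampling rate, record length
t = np.arange(n) / fs
env = np.exp(-((t - 2e-6) ** 2) / (2 * (0.1e-6) ** 2))
pulse = env * np.cos(2 * np.pi * 5e6 * t)       # 5 MHz pulse, Gaussian envelope

delays = np.array([0.0, 0.2e-6, 0.4e-6, 0.6e-6])  # per-element arrival delays
freqs = np.fft.rfftfreq(n, 1 / fs)
P = np.fft.rfft(pulse)

# Each element records the pulse with its own delay, synthesized here by a
# linear phase ramp -- the frequency-domain picture of a time shift.
channels = [np.fft.irfft(P * np.exp(-2j * np.pi * freqs * d), n) for d in delays]

# Frequency-domain beamforming: undo every delay with the conjugate phase
# ramp and sum, without ever interpolating or resampling in time.
aligned = sum(np.fft.rfft(c) * np.exp(2j * np.pi * freqs * d)
              for c, d in zip(channels, delays))
beamformed = np.fft.irfft(aligned, n) / len(delays)
print(np.allclose(beamformed, pulse))   # True: the channels sum coherently
```

    In the paper's setting the same phase-ramp algebra is applied per frequency bin on a 2-D transducer grid, which is what makes beamforming from a reduced set of bins possible.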

  12. Imaging the Juan de Fuca subduction plate using 3D Kirchhoff Prestack Depth Migration

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Bodin, T.; Allen, R. M.; Tauzin, B.

    2014-12-01

    We propose a new Receiver Function migration method to image the subducting plate in the western US that utilizes the USArray and regional network data. While the well-developed CCP (common conversion point) poststack migration is commonly used for such imaging, our method applies a 3D prestack depth migration approach. The traditional CCP and post-stack depth mapping approaches implement the ray tracing and moveout correction for the incoming teleseismic plane wave based on a 1D earth reference model and the assumption of horizontal discontinuities. Although this works well in mapping the reflection position of relatively flat discontinuities (such as the Moho or the LAB), CCP is known to give poor results in the presence of lateral volumetric velocity variations and dipping layers. Instead of making the flat layer assumption and 1D moveout correction, seismic rays are traced in a 3D tomographic model with the Fast Marching Method. With travel time information stored, our Kirchhoff migration is done where the amplitude of the receiver function at a given time is distributed over all possible conversion points (i.e. along a semi-ellipse) on the output migrated depth section. The migrated reflectors will appear where the semicircles constructively interfere, whereas destructive interference will cancel out noise. Synthetic tests show that in the case of a horizontal discontinuity, the prestack Kirchhoff migration gives similar results to CCP, but without spurious multiples as this energy is stacked destructively and cancels out. For 45-degree and 60-degree dipping discontinuities, it also performs better in terms of imaging at the right boundary and dip angle. This is especially useful in the Western US case, beneath which the Juan de Fuca plate subducted to ~450 km with a dipping angle that may exceed 50 degrees. While the traditional CCP method will underestimate the dipping angle, our proposed imaging method will provide an accurate 3D subducting plate image without
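
    A minimal, zero-offset, constant-velocity caricature of the Kirchhoff idea described above: each recorded sample's amplitude is spread over every image point whose modeled travel time matches it, and the true scatterer emerges where the contributions stack constructively. The geometry, velocity, and spike data are toy assumptions, far simpler than the paper's 3D teleseismic setting:

```python
import numpy as np

v = 2.0                                   # constant velocity [km/s] (toy)
xs = np.linspace(0.0, 10.0, 21)           # surface receiver positions [km]
x0, z0 = 5.0, 3.0                         # buried point scatterer
nt, dt = 400, 0.02

# Synthetic zero-offset data: one spike per trace at the two-way time.
traces = np.zeros((len(xs), nt))
for i, xr in enumerate(xs):
    traces[i, int(round(2.0 * np.hypot(xr - x0, z0) / v / dt))] = 1.0

# Kirchhoff migration: smear every sample over all image points sharing its
# travel time (circles here; ellipses in the converted-wave case); the
# scatterer appears where the smeared amplitudes interfere constructively.
xg = np.linspace(0.0, 10.0, 101)
zg = np.linspace(0.1, 6.0, 60)
image = np.zeros((len(zg), len(xg)))
for i, xr in enumerate(xs):
    tt = 2.0 * np.hypot(xr - xg[None, :], zg[:, None]) / v
    idx = np.round(tt / dt).astype(int)
    ok = idx < nt
    image[ok] += traces[i, idx[ok]]

zi, xi = np.unravel_index(np.argmax(image), image.shape)
print(xg[xi], zg[zi])   # brightest pixel lands at the scatterer (5.0, 3.0)
```

    Away from the scatterer, the smeared circles from different receivers miss each other, which is the destructive cancellation the abstract relies on to suppress noise and multiples.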

  13. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  14. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

    Integral Imaging is a technique that has the capability of providing not only the spatial, but also the angular information of three-dimensional (3D) scenes. Some important applications are 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes into account the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure that depends directly on the axial position of the object. Considering different periods of the IRFs, we recover by deconvolution the depth information of the 3D scene. An advantage of our method is that it is possible to obtain nonconventional reconstructions by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
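
    The depth-selective deconvolution idea can be illustrated with a toy periodic impulse response and standard Wiener deconvolution: deconvolving with the IRF whose period matches the object's depth reconstructs that plane far better than a mismatched period. The comb-shaped PSF, image sizes, and periods below are stand-ins; the paper derives the actual IRF from the InI system geometry:

```python
import numpy as np

def comb_psf(shape, period, reps=3):
    """Toy stand-in for the depth-dependent periodic IRF of an InI system:
    a finite lattice of impulses whose spacing encodes the axial position."""
    psf = np.zeros(shape)
    for i in range(reps):
        for j in range(reps):
            psf[i * period, j * period] = 1.0
    return psf / psf.sum()

def wiener_deconv(observed, psf, eps=1e-3):
    """Frequency-domain Wiener deconvolution (psf already image-sized)."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(observed)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))

scene = np.zeros((64, 64))
scene[30:34, 28:36] = 1.0                      # object lying at one depth
true_period = 8                                # IRF period for that depth
observed = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                np.fft.fft2(comb_psf(scene.shape, true_period))))

err_good = np.abs(wiener_deconv(observed, comb_psf(scene.shape, true_period))
                  - scene).mean()
err_bad = np.abs(wiener_deconv(observed, comb_psf(scene.shape, 7))
                 - scene).mean()
print(err_good < err_bad)   # the matching period reconstructs the plane best
```

    Sweeping the assumed period and keeping, at each pixel, the depth that deconvolves most cleanly is the essence of the depth recovery the abstract describes.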

  15. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images with two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm that adjusts and synthesizes the disparity values of the ROI (region of interest) in real time. We comment on the pattern of the aperture for deblurring of the CMOS camera module based on the Kirchhoff diffraction formula, and clarify why we can get a sharper and clearer image by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to prove the validity of the emphasis effect on the ROI.
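
    The ROI-exaggeration step, increasing the disparity of a region of interest so that its depth is emphasized, reduces in its simplest form to a local horizontal shift of one stereo view. A deliberately naive numpy sketch (function and variable names are hypothetical; a real system re-synthesizes the view so the shift does not leave occlusion holes):

```python
import numpy as np

def exaggerate_roi_disparity(view, roi, extra_shift):
    """Shift only the ROI of one stereo view horizontally; the extra shift
    adds binocular disparity for that region, exaggerating its depth."""
    y0, y1, x0, x1 = roi
    out = view.copy()
    out[y0:y1, x0:x1] = np.roll(view[y0:y1, x0:x1], extra_shift, axis=1)
    return out

right = np.arange(64.0).reshape(8, 8)          # toy right-eye view
out = exaggerate_roi_disparity(right, roi=(2, 6, 2, 6), extra_shift=1)
print(out[2])   # row 2: only columns 2-5 have been rotated right by one
```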

  16. Imaging 3D strain field monitoring during hydraulic fracturing processes

    NASA Astrophysics Data System (ADS)

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in the concrete were used to monitor the 3D strain field build-up under external hydraulic pressure. High-spatial-resolution strain fields were interrogated via in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optic sensing scheme presented in this paper provides scientists and engineers a unique laboratory tool for understanding hydraulic fracturing processes in various rock formations and their environmental impact.
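
    The signal processing behind Rayleigh-backscatter strain sensing can be sketched simply: strain shifts the backscatter spectrum of a fiber segment, and cross-correlating the strained spectrum against the unstrained reference recovers that shift. The spectrum and shift below are synthetic, and the final shift-to-strain conversion (a linear calibration constant) is deliberately left out:

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random(512)                  # reference spectrum of one 1-cm segment
true_shift = 7                         # strain-induced shift, in spectral bins
strained = np.roll(ref, true_shift)

# Circular cross-correlation via FFT; the peak lag is the spectral shift.
xcorr = np.fft.ifft(np.fft.fft(strained) * np.conj(np.fft.fft(ref))).real
est_shift = int(np.argmax(xcorr))
print(est_shift)   # 7; strain then follows from a linear calibration
```

    Repeating this correlation for every 1-cm segment along the fiber is what turns a single fiber into the distributed 3D strain probe the abstract describes.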

  17. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms,' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment, is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing, would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  18. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto the finger surface. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
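
    Fringe-projection systems of this kind recover phase (and hence surface height) from a set of sinusoidal patterns deformed by the surface. The paper uses optimum three-fringe-number heterodyning; as a simpler illustration of the same underlying principle, here is the standard four-step phase-shifting calculation on a synthetic 1-D "finger surface" phase:

```python
import numpy as np

# Four phase-shifted fringe frames of a surface with true phase `phi`:
# I_k = A + B * cos(phi + k * pi/2), k = 0..3 (four-step algorithm).
x = np.linspace(0, 4 * np.pi, 200)
phi = 1.2 * np.sin(x / 3.0)            # toy surface phase, |phi| < pi
A, B = 0.5, 0.4                        # background and fringe contrast
I = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]

# Wrapped phase from the four frames; since |phi| < pi here, no unwrapping
# is needed and the recovered phase equals the true one.
wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
print(np.allclose(wrapped, phi))   # True
```

    In a real system the wrapped phase must still be unwrapped (the role of the multiple fringe numbers) and converted to height through the calibrated projector-camera geometry.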

  19. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track real-time organ motion, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision; an automatic surface-motion identifier to classify body or respiratory motion; a mechanical model for synchronizing the external surface movement with the internal target displacement through combined use of real-time stereovision and pre-treatment 4DMI; and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and its dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  20. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023 degrees for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155, 0.243, and 0.074 degrees for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.
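
    The registration loop at the heart of IBRSA (render a DRR for a candidate pose, score it against the 2D RSA image, adjust the pose) can be caricatured in numpy. Here the "DRR generator" is just a 2D translation and the optimizer an exhaustive search over integer shifts; both are stand-ins for the real ray-casting through the CT volume and a continuous 6-DOF optimizer:

```python
import numpy as np

def drr(volume_proj, shift):
    """Stand-in DRR generator: a 2D translation of a reference projection.
    (A real implementation would ray-cast the CT volume at the given pose.)"""
    return np.roll(volume_proj, shift, axis=(0, 1))

def register(fixed, volume_proj, search=4):
    """2D-3D registration in miniature: try candidate poses, render a DRR
    for each, and keep the pose whose DRR best matches the RSA image."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = -np.sum((drr(volume_proj, (dy, dx)) - fixed) ** 2)
            if score > best_score:                 # similarity = negative SSD
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(2)
vol_proj = rng.random((32, 32))                    # toy projected "CT"
rsa_image = drr(vol_proj, (2, -3))                 # observed 2D image
print(register(rsa_image, vol_proj))               # recovers (2, -3)
```

    Swapping negative SSD for normalized cross-correlation or mutual information, and the grid search for a gradient-based optimizer over rotations and translations, turns this toy into the usual 2D-3D registration pipeline.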

  1. 3D high-density localization microscopy using hybrid astigmatic/biplane imaging and sparse image reconstruction.

    PubMed

    Min, Junhong; Holden, Seamus J; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-11-01

    Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging since PSFs are far more similar along the axial dimension than the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum.

  2. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. This method improves the quality of 3D display images and videos.
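
    At its core, the integral-to-plenoptic transformation mentioned above is a pixel rearrangement: sub-aperture (plenoptic) view (u, v) collects pixel (u, v) from every elemental image, i.e. all the light sharing one viewing direction. A minimal numpy sketch with hypothetical lenslet-grid dimensions:

```python
import numpy as np

def subaperture_views(integral_img, n_lens, p):
    """Rearrange an integral image (an n_lens x n_lens grid of elemental
    images, each p x p pixels) into p x p sub-aperture views: view (u, v)
    gathers pixel (u, v) from every elemental image."""
    ei = integral_img.reshape(n_lens, p, n_lens, p)   # axes: (ky, u, kx, v)
    return ei.transpose(1, 3, 0, 2)                   # axes: (u, v, ky, kx)

img = np.arange(36).reshape(6, 6)      # 3x3 lenslets, 2x2 pixels each (toy)
views = subaperture_views(img, n_lens=3, p=2)
print(views[0, 0])   # the (0, 0) pixel of each of the 9 elemental images
```

    Shifting and summing the resulting views refocuses the image on a chosen plane, which is how the focused plane and FOV of the displayed plenoptic image can be selected after capture.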

  3. Three-dimensional volumetric display of CT data: effect of scan parameters upon image quality.

    PubMed

    Ney, D R; Fishman, E K; Magid, D; Robertson, D D; Kawashima, A

    1991-01-01

    Of the many steps involved in producing high quality three-dimensional (3D) images of CT data, the data acquisition step is of greatest consequence. The principle of "garbage in, garbage out" applies to 3D imaging--bad scanning technique produces equally bad 3D images. We present a formal study of the effect of two basic scanning parameters, slice thickness and slice spacing, on image quality. Three standard test objects were studied using variable CT scanning parameters. The objects chosen were a bone phantom, a cadaver femur with a simulated 5 mm fracture gap, and a cadaver femur with a simulated 1 mm fracture gap. Each object was scanned at three collimations: 8, 4, and 2 mm. For each collimation, four sets of scans were performed using four slice intervals: 8, 4, 3, and 2 mm. The bone phantom was scanned in two positions: oriented perpendicular to the scanning plane and oriented 45 degrees from the scanning plane. Three-dimensional images of the resulting 48 sets of data were produced using volumetric rendering. Blind review of the resultant 48 data sets was performed by three reviewers rating five factors for each image. The images resulting from scans with thin collimation and small table increments proved to rate the highest in all areas. The data obtained using 2 mm slice intervals proved to rate the highest in perceived image quality. Three millimeter slice spacing with 4 mm collimation, which clinically provides a good compromise between image quality and acquisition time and dose, also produced good perceived image quality. The studies with 8 mm slice intervals provided the least detail and introduced the worst inaccuracies and artifacts and were not suitable for clinical use. Statistical analysis demonstrated that slice interval (i.e., table incrementation) was of primary importance and slice collimation was of secondary, although significant, importance in determining perceived 3D image quality.

  4. Lensfree diffractive tomography for the imaging of 3D cell cultures

    PubMed Central

    Momey, F.; Berdeu, A.; Bordy, T.; Dinten, J.-M.; Marcel, F. Kermarrec; Picollet-D’hahan, N.; Gidrol, X.; Allier, C.

    2016-01-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600
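
    Holographic reconstruction of this kind ultimately rests on numerically propagating the recorded field back toward the object. Below is a minimal angular-spectrum propagator, the Fourier-optics workhorse behind such refocusing; the parameters are illustrative, and the paper's actual algorithm is a multi-angle Fourier-diffraction reconstruction built on the same transform machinery:

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a sampled complex field a distance z with the angular-
    spectrum method; z < 0 backpropagates, i.e. numerically refocuses a
    recorded hologram toward the object plane."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)                        # spatial frequencies
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength ** 2 - fx2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Round trip: propagate a small aperture forward, then backpropagate;
# within the propagating band this is exact, so we recover the object.
n, dx, wl = 128, 1e-6, 0.5e-6
obj = np.zeros((n, n), dtype=complex)
obj[60:68, 60:68] = 1.0
holo = angular_spectrum(obj, 50e-6, wl, dx)
back = angular_spectrum(holo, -50e-6, wl, dx)
print(np.allclose(back, obj))   # True: backpropagation refocuses the field
```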

  5. Lensfree diffractive tomography for the imaging of 3D cell cultures.

    PubMed

    Momey, F; Berdeu, A; Bordy, T; Dinten, J-M; Marcel, F Kermarrec; Picollet-D'hahan, N; Gidrol, X; Allier, C

    2016-03-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600

  6. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  7. Monopulse radar 3-D imaging and application in terminal guidance radar

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Qin, Guodong; Zhang, Lina

    2007-11-01

    Monopulse radar 3-D imaging integrates ISAR, monopulse angle measurement, and 3-D imaging processing to obtain a 3-D image that reflects the real size of a target; that is, any two of the three measurement parameters, namely the azimuth difference beam, the elevation difference beam, and the radial range, can be used to form a 3-D image of a 3-D object. The basic principles of monopulse radar 3-D imaging are briefly introduced; the effects of changes in target attitude (including yaw, pitch, roll, and movement of the target itself) on 3-D imaging, and 3-D motion compensation based on the chirp rate μ and Doppler frequency fd, are analyzed; and the application of monopulse radar 3-D imaging to terminal guidance radars is forecast. The computer simulation results show that monopulse radar 3-D imaging has apparent advantages in distinguishing a target from overside interference and in precise assault on a vital part of a target, and has great importance for terminal guidance radars.

  8. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study

    PubMed Central

    Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 evaluated only 2D images at first, group 2 evaluated 3D images, and, after an interval of 2 weeks, group 1 next evaluated 3D and group 2 evaluated 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of the improvement showed the following trend: novices > trainees > experts. Conclusions. By conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise. PMID:27597863

  10. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study.

    PubMed

    Nomura, Kosuke; Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 evaluated only 2D images at first, group 2 evaluated 3D images, and, after an interval of 2 weeks, group 1 next evaluated 3D and group 2 evaluated 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of the improvement showed the following trend: novices > trainees > experts. Conclusions. By conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise.

  12. Rapidly-steered single-element ultrasound for real-time volumetric imaging and guidance

    NASA Astrophysics Data System (ADS)

    Stauber, Mark; Western, Craig; Solek, Roman; Salisbury, Kenneth; Hristov, Dmitre; Schlosser, Jeffrey

    2016-03-01

    Volumetric ultrasound (US) imaging has the potential to provide real-time anatomical imaging with high soft-tissue contrast in a variety of diagnostic and therapeutic guidance applications. However, existing volumetric US machines utilize "wobbling" linear phased array or matrix phased array transducers, which are costly to manufacture and necessitate bulky external processing units. To drastically reduce cost, improve portability, and reduce footprint, we propose a rapidly-steered single-element volumetric US imaging system. In this paper we explore the feasibility of this system with a proof-of-concept single-element volumetric US imaging device. The device uses a multi-directional raster-scan technique to generate a series of two-dimensional (2D) slices that were reconstructed into three-dimensional (3D) volumes. At 15 cm depth, 90° lateral field of view (FOV), and 20° elevation FOV, the device produced 20-slice volumes at a rate of 0.8 Hz. Imaging performance was evaluated using a US phantom. Spatial resolution was 2.0 mm, 4.7 mm, and 5.0 mm in the axial, lateral, and elevational directions at 7.5 cm. Relative motion of phantom targets was automatically tracked within US volumes with a mean error of -0.3 ± 0.3 mm, -0.3 ± 0.3 mm, and -0.1 ± 0.5 mm in the axial, lateral, and elevational directions, respectively. The device exhibited a mean spatial distortion error of 0.3 ± 0.9 mm, 0.4 ± 0.7 mm, and -0.3 ± 1.9 mm in the axial, lateral, and elevational directions. With a production cost near $1000, the performance characteristics of the proposed system make it an ideal candidate for diagnostic and image-guided therapy applications where form factor and low cost are paramount.

  13. Optimization of element length for imaging small volumetric reflectors with linear ultrasonic arrays

    NASA Astrophysics Data System (ADS)

    Barber, T. S.; Wilcox, P. D.; Nixon, A. D.

    2016-02-01

    A 3D ultrasonic simulation study is presented, aimed at understanding the effect of element length for imaging small volumetric flaws with linear arrays in ultrasonically noisy materials. The geometry of a linear array can be described by the width, pitch, and total number of the elements, along with the length perpendicular to the imaging plane. This paper is concerned with the latter parameter, which tends to be ignored in array optimization studies and is often chosen arbitrarily for industrial array inspections. A 3D analytical model based on imaging a point target is described, validated, and used to calculate the relative Signal-to-Noise Ratio (SNR) as a function of element length. SNR is found to be highly sensitive to element length, with a 12 dB variation observed over the length range investigated. It is then demonstrated that the optimal length can be predicted directly from the Point Spread Function (PSF) of the imaging system, as well as from the natural focal point of the array element in 2D beam profiles perpendicular to the imaging plane. This result suggests that the optimal length for any imaging position can be predicted without the need for a full 3D model and is independent of element pitch and the number of elements. Array element design guidelines are then described with respect to wavelength, and extensions of these results are discussed for application to realistically sized defects and coarse-grained materials.

  14. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of the sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  15. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement shows a 2.8 ± 1.5% error for 3D US, a 4.4 ± 3.0% error for CT, and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
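    The percent-error figures quoted above (e.g. 2.8 ± 1.5% for 3D US) are, by the usual convention, the mean and standard deviation of the unsigned error relative to the known phantom volume. A minimal sketch of that computation; the function name and the sample numbers are illustrative, not taken from the paper:

```python
import numpy as np

def volume_error_stats(measured, true):
    """Mean and standard deviation of the unsigned percent error
    between measured and ground-truth volumes."""
    measured = np.asarray(measured, dtype=float)
    true = np.asarray(true, dtype=float)
    errors = np.abs(measured - true) / true * 100.0
    return errors.mean(), errors.std()

# Hypothetical phantom volumes in mL: two measurements vs. a 100 mL ground truth.
mean_err, std_err = volume_error_stats([102.0, 98.0], [100.0, 100.0])
print(mean_err, std_err)  # 2.0 0.0
```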

  16. Treatment of left sided breast cancer for a patient with funnel chest: Volumetric-modulated arc therapy vs. 3D-CRT and intensity-modulated radiotherapy

    SciTech Connect

    Haertl, Petra M.; Pohl, Fabian; Weidner, Karin; Groeger, Christian; Koelbl, Oliver; Dobler, Barbara

    2013-04-01

    This case study presents a rare case of left-sided breast cancer in a patient with funnel chest, which poses a technical challenge for radiation therapy planning. To identify the best treatment technique for this case, 3 techniques were compared: conventional tangential fields (3D conformal radiotherapy [3D-CRT]), intensity-modulated radiotherapy (IMRT), and volumetric-modulated arc therapy (VMAT). The plans were created for a SynergyS® (Elekta, Ltd, Crawley, UK) linear accelerator with a BeamModulator™ head and 6-MV photons. The planning system was Oncentra Masterplan® v3.3 SP1 (Nucletron BV, Veenendaal, Netherlands). Calculations were performed with the collapsed cone algorithm. Dose prescription was 50.4 Gy to the average of the planning target volume (PTV). PTV coverage and homogeneity were comparable for all techniques. VMAT allowed reducing dose to the ipsilateral organs at risk (OAR) and the contralateral breast compared with IMRT and 3D-CRT: the volume of the left lung receiving 20 Gy was 19.3% for VMAT, 26.1% for IMRT, and 32.4% for 3D-CRT. In the heart, a D15% of 9.7 Gy could be achieved with VMAT, compared with 14 Gy for IMRT and 46 Gy for 3D-CRT. In the contralateral breast, D15% was 6.4 Gy for VMAT, 8.8 Gy for IMRT, and 10.2 Gy for 3D-CRT. In the contralateral lung, however, the lowest dose was achieved with 3D-CRT, with a D10% of 1.7 Gy for 3D-CRT and 6.7 Gy for both IMRT and VMAT. The lowest number of monitor units (MU) per 1.8-Gy fraction was required by 3D-CRT (192 MU), followed by VMAT (518 MU) and IMRT (727 MU). Treatment time was similar for 3D-CRT (3 min) and VMAT (4 min) but substantially longer for IMRT (13 min). VMAT is considered the best treatment option for the presented case of a patient with funnel chest. It allows reducing dose in most OAR without compromising target coverage, keeping delivery time well below 5 minutes.

  17. Image enhancement and segmentation of fluid-filled structures in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Dudycha, Stephen; McMorrow, Gerald

    2003-05-01

    Segmentation of fluid-filled structures, such as the urinary bladder, from three-dimensional ultrasound images is necessary for measuring their volume. This paper describes a system for image enhancement, segmentation and volume measurement of fluid-filled structures on 3D ultrasound images. The system was applied for the measurement of urinary bladder volume. Results show an average error of less than 10% in the estimation of the total bladder volume.

  18. 3D Image Reconstructions and the Nyquist-Shannon Theorem

    NASA Astrophysics Data System (ADS)

    Ficker, T.; Martišek, D.

    2015-09-01

    Fracture surfaces are occasionally modelled by two-dimensional Fourier series that can be converted into digital 3D reliefs mapping the morphology of solid surfaces. Such digital replicas may suffer from various artefacts when processed improperly. Spatial aliasing is one of the artefacts that may devalue Fourier replicas. According to the Nyquist-Shannon sampling theorem, spatial aliasing occurs when Fourier frequencies exceed the Nyquist critical frequency. In the present paper it is shown that the Nyquist frequency is not the only critical limit determining aliasing artefacts: there are other frequencies that intensify aliasing phenomena and form an infinite set of points at which numerical results abruptly and dramatically change their values. This unusual type of spatial aliasing is explored and some consequences for 3D computer reconstructions are presented.
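    The Nyquist limit the abstract refers to is easy to demonstrate numerically: a sinusoid sampled below half the sampling rate is recovered at its true frequency, while one above it folds back to an alias. A minimal 1-D sketch (the 2-D spatial case treated in the paper behaves analogously along each axis):

```python
import numpy as np

def apparent_frequency(f_signal, f_sample, n=1000):
    """Sample a sinusoid and return the dominant frequency in its spectrum."""
    t = np.arange(n) / f_sample
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / f_sample)
    return freqs[np.argmax(spectrum)]

# Below the Nyquist frequency (fs/2 = 5 Hz) the tone is recovered correctly.
print(apparent_frequency(2.0, 10.0))  # 2.0
# Above it, a 9 Hz tone aliases down to |10 - 9| = 1 Hz.
print(apparent_frequency(9.0, 10.0))  # 1.0
```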

  19. Volumetric imaging with an amplitude-steered array.

    PubMed

    Frazier, Catherine H; Hughes, W Jack; O'Brien, William D

    2002-12-01

    Volumetric acoustic imaging is desirable for the visualization of underwater objects and structures; however, the implementation of a volumetric imaging system is difficult due to the high channel count of a fully populated two-dimensional array. Recently, a linear amplitude-steered array with a reduced electronics requirement was presented, which is capable of collecting a two-dimensional set of data with a single transmit pulse. In this study, we demonstrate the use of the linear amplitude-steered array and associated image formation algorithms for collecting and displaying volumetric data; that is, proof of principle of the amplitude-steering concept and the associated image formation algorithms is demonstrated. Range and vertical position are obtained by taking advantage of the frequency separation of a vertical linear amplitude-steered array. The third dimension of data is obtained by rotating the array such that the mainlobe is mechanically steered in azimuth. Data are collected in a water tank at the Pennsylvania State University Applied Research Laboratory for two targets: a ladder and three pipes. These data are the first experimental data collected with an amplitude-steered array for the purposes of imaging. The array is 10 cm in diameter and is operated in the frequency range of 80 to 304 kHz. Although the array is small for high-resolution imaging at these frequencies, the rungs of the ladder are recognizable in the images. The three pipes are difficult to discern in two of the projection images; however, the pipes separated in range are clear in the image showing vertical position versus range. The imaging concept is demonstrated on measured data, and the simulations agree well with the experimental results. PMID:12508995

  20. High-dose radiotherapy in inoperable nonsmall cell lung cancer: comparison of volumetric modulated arc therapy, dynamic IMRT and 3D conformal radiotherapy.

    PubMed

    Bree, Ingrid de; van Hinsberg, Mariëlle G E; van Veelen, Lieneke R

    2012-01-01

    Conformal 3D radiotherapy (3D-CRT) combined with chemotherapy for inoperable non-small cell lung cancer (NSCLC) often cannot reach the preferred high dose because of dose-limiting organs, which reduces the probability of regional tumor control. Therefore, the added value of intensity-modulated radiation therapy (IMRT) techniques, specifically volumetric modulated arc therapy (RapidArc [RA]) and dynamic IMRT (d-IMRT), has been investigated. RA and d-IMRT plans were compared with 3D-CRT treatment plans for 20 patients eligible for concurrent high-dose chemoradiotherapy in whom a dose of 60 Gy was not achievable. Comparison of dose delivery in the target volume and organs at risk was carried out by evaluating 3D dose distributions and dose-volume histograms. Quality of the dose distribution was assessed using the inhomogeneity and conformity indices. For most patients, a higher dose to the target volume can be delivered using RA or d-IMRT; in 15% of the patients a dose ≥60 Gy was possible. Both IMRT techniques result in a better conformity of the dose (p < 0.001). There are no significant differences in homogeneity of dose in the target volume. IMRT techniques for NSCLC patients allow a higher dose to the target volume, thus improving regional tumor control. PMID:22459649
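    The conformity index mentioned above can be defined in several ways; the simplest (RTOG-style) variant is the ratio of the volume enclosed by the prescription isodose to the target volume, with values near 1 indicating a conformal plan. A minimal sketch under that assumption — the abstract does not state which variant was used, and the voxel arrays here are illustrative:

```python
import numpy as np

def conformity_index(dose, target_mask, prescription):
    """RTOG-style conformity index: prescription-isodose volume / target volume.
    `dose` is a voxel dose array (Gy); `target_mask` flags target voxels."""
    prescription_volume = np.count_nonzero(dose >= prescription)
    target_volume = np.count_nonzero(target_mask)
    return prescription_volume / target_volume

# Toy 2x2 dose grid with a 3-voxel target and a 50.4 Gy prescription:
# 2 voxels receive >= 50.4 Gy, so CI = 2/3.
dose = np.array([[50.0, 52.0], [51.0, 40.0]])
target = np.array([[True, True], [True, False]])
print(conformity_index(dose, target, 50.4))
```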

  1. The effect of CT scanner parameters and 3D volume rendering techniques on the accuracy of linear, angular, and volumetric measurements of the mandible

    PubMed Central

    Whyms, B.J.; Vorperian, H.K.; Gentry, L.R.; Schimek, E.M.; Bersu, E.T.; Chung, M.K.

    2013-01-01

    Objectives This study investigates the effect of scanning parameters on the accuracy of measurements from three-dimensional multi-detector computed tomography (3D-CT) mandible renderings. A broader range of acceptable parameters can increase the availability of CT studies for retrospective analysis. Study Design Three human mandibles and a phantom object were scanned using 18 combinations of slice thickness, field of view, and reconstruction algorithm and three different threshold-based segmentations. Measurements of 3D-CT models and specimens were compared. Results Linear and angular measurements were accurate, irrespective of scanner parameters or rendering technique. Volume measurements were accurate with a slice thickness of 1.25 mm, but not 2.5 mm. Surface area measurements were consistently inflated. Conclusions Linear, angular and volumetric measurements of mandible 3D-CT models can be confidently obtained from a range of parameters and rendering techniques. Slice thickness is the primary factor affecting volume measurements. These findings should also apply to 3D rendering using cone-beam-CT. PMID:23601224

  2. Computation of optimized arrays for 3-D electrical imaging surveys

    NASA Astrophysics Data System (ADS)

    Loke, M. H.; Wilkinson, P. B.; Uhlemann, S. S.; Chambers, J. E.; Oxby, L. S.

    2014-12-01

    3-D electrical resistivity surveys and inversion models are required to accurately resolve structures in areas with very complex geology where 2-D models might suffer from artefacts. Many 3-D surveys use a grid where the number of electrodes along one direction (x) is much greater than in the perpendicular direction (y). Frequently, due to limitations in the number of independent electrodes in the multi-electrode system, the surveys use a roll-along system with a small number of parallel survey lines aligned along the x-direction. The 'Compare R' array optimization method previously used for 2-D surveys is adapted for such 3-D surveys. Offset versions of the inline arrays used in 2-D surveys are included in the number of possible arrays (the comprehensive data set) to improve the sensitivity to structures in between the lines. The array geometric factor and its relative error are used to filter out potentially unstable arrays in the construction of the comprehensive data set. Comparisons of the conventional (consisting of dipole-dipole and Wenner-Schlumberger arrays) and optimized arrays are made using a synthetic model and experimental measurements in a tank. The tests show that structures located between the lines are better resolved with the optimized arrays. The optimized arrays also have significantly better depth resolution compared to the conventional arrays.
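    The geometric factor used above to screen out unstable arrays follows from the standard four-electrode formula for a homogeneous half-space, K = 2π / (1/AM − 1/BM − 1/AN + 1/BN), with A, B the current electrodes and M, N the potential electrodes. A minimal sketch (the electrode coordinates are illustrative, and the paper's relative-error filter is not reproduced):

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor K of a four-electrode array on a half-space surface:
    K = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN), where AM etc. are electrode distances."""
    dist = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return 2 * np.pi / (1 / dist(a, m) - 1 / dist(b, m)
                        - 1 / dist(a, n) + 1 / dist(b, n))

# Sanity check: a Wenner array with unit spacing (A M N B at x = 0, 1, 2, 3)
# has the textbook value K = 2*pi*a = 2*pi.
print(geometric_factor((0, 0), (3, 0), (1, 0), (2, 0)))  # ~6.283
```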

  3. High sensitive volumetric imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography

    NASA Astrophysics Data System (ADS)

    Zhi, Zhongwei; Jung, Yeongri; Jia, Yali; An, Lin; Wang, Ruikang K.

    2011-03-01

    We present a non-invasive, label-free imaging technique called ultrahigh-sensitive optical microangiography (UHS-OMAG) for highly sensitive volumetric imaging of renal microcirculation. The UHS-OMAG imaging system is based on spectral domain optical coherence tomography (SD-OCT) and uses a CCD camera with a 47,000 A-lines per second scan rate to achieve an imaging speed of 150 frames per second, taking only ~7 seconds to acquire a 3D image. The technique, capable of measuring slow blood flow down to 4 µm/s, is sensitive enough to image capillary networks, such as peritubular capillaries and glomeruli within the renal cortex. We show the superior performance of UHS-OMAG in providing depth-resolved volumetric images of the rich renal microcirculation. We monitored the dynamics of the renal microvasculature during renal ischemia and reperfusion. An obvious reduction of renal microvascular density due to renal ischemia was visualized and quantitatively analyzed. This technique can be helpful for the assessment of chronic kidney disease (CKD), which is related to abnormal microvasculature.

  4. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  5. Automated bone segmentation from large field of view 3D MR images of the hip joint.

    PubMed

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-21

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.
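    The Dice similarity coefficient (DSC) reported in the two entries above measures overlap between two binary segmentations: twice the intersection divided by the sum of the individual volumes, with 1.0 meaning perfect agreement. A minimal sketch using toy masks, not the study's data:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree trivially
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4 masks sharing one of their two filled rows: DSC = 2*4 / (8 + 8) = 0.5.
a = np.zeros((4, 4)); a[:2, :] = 1
b = np.zeros((4, 4)); b[1:3, :] = 1
print(dice_coefficient(a, b))  # 0.5
```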

  6. Pulse sequence for dynamic volumetric imaging of hyperpolarized metabolic products

    NASA Astrophysics Data System (ADS)

    Cunningham, Charles H.; Chen, Albert P.; Lustig, Michael; Hargreaves, Brian A.; Lupo, Janine; Xu, Duan; Kurhanewicz, John; Hurd, Ralph E.; Pauly, John M.; Nelson, Sarah J.; Vigneron, Daniel B.

    2008-07-01

    Dynamic nuclear polarization and dissolution of a 13C-labeled substrate enables the dynamic imaging of cellular metabolism. Spectroscopic information is typically acquired, making the acquisition of dynamic volumetric data a challenge. To enable rapid volumetric imaging, a spectral-spatial excitation pulse was designed to excite a single line of the carbon spectrum. With only a single resonance present in the signal, an echo-planar readout trajectory could be used to resolve spatial information, giving full volume coverage of 32 × 32 × 16 voxels every 3.5 s. This high frame rate was used to measure the different lactate dynamics in different tissues in a normal rat model and a mouse model of prostate cancer.

  7. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning

    PubMed Central

    Yang, Xiaofeng; Fei, Baowei

    2012-01-01

    We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images. PMID:24027622
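    The texture-feature stage described above (Gabor filter banks applied to registered images) can be sketched in 2-D as follows. This is a generic illustration, not the authors' implementation: their filter banks are 3-D and orthogonal, the parameter values are illustrative, and the KSVM training step is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a plane wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * x_rot)

def gabor_features(image, frequencies=(0.1, 0.2), n_orientations=4):
    """Per-pixel texture features: one filter response per (frequency, orientation)."""
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            kern = gabor_kernel(f, theta=np.pi * k / n_orientations)
            responses.append(fftconvolve(image, kern, mode="same"))
    return np.stack(responses, axis=-1)  # shape (H, W, n_filters)

# An (H, W) image yields an (H, W, 8) feature array ready for an SVM classifier.
feats = gabor_features(np.random.rand(32, 32))
print(feats.shape)  # (32, 32, 8)
```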

  8. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator which can project up to four different view images to each eye is introduced. Experiments with this simulator, using images having both disparity and perspective, indicate that the depth of field (DOF) extends beyond its default value as the number of different view images projected simultaneously but separately to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments are not as prominent for the image with both disparity and perspective as for the image with disparity only.

  9. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  10. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that one can directly obtain the motion trajectory. In this paper, we have studied the algorithm behind flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give the motion parameters of moving targets.

  11. Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging.

    PubMed

    Winterflood, Christian M; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-07-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color.

  12. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    PubMed

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection-onto-convex-sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low-resolution 3D images with sub-pixel spatial displacements between them and generated a reference image. Then we mapped the low-resolution images onto the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets, reconstructing the 3D high-resolution images iteratively. Finally, we displayed images of different resolutions simultaneously. We evaluated the performance of the proposed method on 5 image sets and compared it with those of 3 interpolation-based reconstruction methods. The experiments showed that the 3D POCS algorithm outperformed the 3 interpolation methods both subjectively and objectively, and that the mixed display mode is suitable for 3D visualization of pulmonary nodules at high resolution.
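As a rough illustration of the projection-onto-convex-sets idea behind such algorithms (not the paper's 3-D CT implementation), the 1-D sketch below repeatedly projects a high-resolution estimate onto the data-consistency set of each shifted, downsampled observation. The mean-downsampling observation model and all names are assumptions made for the example:

```python
import numpy as np

def downsample(x, scale):
    """Mean-downsample a 1-D signal by an integer factor."""
    return x.reshape(-1, scale).mean(axis=1)

def pocs_superres(frames, shifts, scale, n_iter=200):
    """Toy 1-D POCS super-resolution: cycle through the low-resolution
    frames, each observed with a (circular) sub-pixel shift, and project
    the high-resolution estimate onto each frame's consistency set."""
    x = np.repeat(frames[0], scale).astype(float)  # crude initial estimate
    for _ in range(n_iter):
        for frame, s in zip(frames, shifts):
            resid = frame - downsample(np.roll(x, -s), scale)
            # exact orthogonal projection for the mean-downsampling operator
            x += np.roll(np.repeat(resid, scale), s)
    return x
```

After convergence the estimate reproduces every observed frame exactly; frequency content that aliases identically in all frames (e.g. the Nyquist component here) remains undetermined, which is why more frames with diverse shifts help.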

  13. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Large differences were found among the three techniques in the estimated volumes of the liver findings. 3D ultrasound represents a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible.

  14. Advanced 2D-3D registration for endovascular aortic interventions: addressing dissimilarity in images

    NASA Astrophysics Data System (ADS)

    Demirci, Stefanie; Kutter, Oliver; Manstad-Hulaas, Frode; Bauernschmitt, Robert; Navab, Nassir

    2008-03-01

    In the current clinical workflow of minimally invasive aortic procedures, navigation tasks are performed under 2D or 3D angiographic imaging. Many solutions for navigation enhancement suggest integrating the preoperatively acquired computed tomography angiography (CTA) in order to provide the physician with more image information and to reduce contrast injection and radiation exposure. This requires exact registration algorithms that align the CTA volume to the intraoperative 2D or 3D images. In addition to the real-time constraint, the registration accuracy should be independent of image dissimilarities due to the varying presence of medical instruments and contrast agent. In this paper, we propose efficient solutions for image-based 2D-3D and 3D-3D registration that reduce the dissimilarities by image preprocessing, e.g., implicit detection and segmentation, and by adaptive weights introduced into the registration procedure. Experiments and evaluations are conducted on real patient data.

  15. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, to accelerate rendering, a thin shell defined from the detected contours separates the observed organ from unrelated structures. In this way we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  16. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image-formation process in cryo-electron microscopy of viruses is presented. Using this model, maximum-likelihood reconstructions of the 3D structure of viruses are computed using the expectation-maximization algorithm, and an example based on Cowpea mosaic virus is provided.

  17. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass-transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate-layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  18. 3D breast image registration--a review.

    PubMed

    Sivaramakrishna, Radhika

    2005-02-01

    Image registration is an important problem in breast imaging. It is used in a wide variety of applications that include better visualization of lesions on pre- and post-contrast breast MRI images, speckle tracking and image compounding in breast ultrasound images, and alignment of positron emission and standard mammography images on hybrid machines. It is a prerequisite for aligning images taken at different times to isolate small interval lesions. Image registration also has useful applications in monitoring cancer therapy. The field of breast image registration has gained considerable interest in recent years. While the primary focus of interest continues to be the registration of pre- and post-contrast breast MRI images, other areas like breast ultrasound registration have gained more attention in recent years. The focus of registration algorithms has also shifted from control-point-based semi-automated techniques to more sophisticated voxel-based automated techniques that use mutual information as a similarity measure. This paper reviews the problem of breast image registration and provides an overview of the current state of the art in this area. PMID:15649086

  19. 3D fluorescence anisotropy imaging using selective plane illumination microscopy

    PubMed Central

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-01-01

    Fluorescence anisotropy imaging is a popular method to visualize changes in the organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser-scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in axial resolution or image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time-lapse imaging, combining axial sectioning capability with fast, camera-based image acquisition and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time-lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202

  20. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets.

    PubMed

    Peng, Hanchuan; Ruan, Zongcai; Long, Fuhui; Simpson, Julie H; Myers, Eugene W

    2010-04-01

    The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain. PMID:20231818

  1. Two-dimensional and three-dimensional imaging of the in vivo lung: combining spiral computed tomography with multiplanar and volumetric rendering techniques.

    PubMed

    Kuhlman, J E; Ney, D R; Fishman, E K

    1994-02-01

    We applied multiplanar techniques and a modified version of our volumetric rendering program for three-dimensional imaging to single-breath hold spiral computed tomography (CT) datasets to generate two- and three-dimensional (2-D and 3-D) images of the in vivo lung. We report details of the combined 2-D/3-D spiral CT technique along with three representative cases from our initial experience.

  2. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    NASA Astrophysics Data System (ADS)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative Expectation Maximization technique is used to register the vertebral bodies of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  3. Ellipsoid Segmentation Model for Analyzing Light-Attenuated 3D Confocal Image Stacks of Fluorescent Multi-Cellular Spheroids

    PubMed Central

    Barbier, Michaël; Jaensch, Steffen; Cornelissen, Frans; Vidic, Suzana; Gjerde, Kjersti; de Hoogt, Ronald; Graeser, Ralph; Gustin, Emmanuel; Chong, Yolanda T.

    2016-01-01

    In oncology, two-dimensional in-vitro culture models are the standard test beds for the discovery and development of cancer treatments, but in recent decades evidence has emerged that such models have low predictive value for clinical efficacy. They are therefore increasingly complemented by more physiologically relevant 3D models, such as spheroid micro-tumor cultures. If suitable fluorescent labels are applied, confocal 3D image stacks can characterize the structure of such volumetric cultures and, for example, cell proliferation. However, several issues hamper accurate analysis. In particular, signal attenuation within the tissue of the spheroids prevents the acquisition of a complete image for spheroids over 100 micrometers in diameter, and quantitative analysis of large 3D image data sets is challenging, creating a need for methods which can be applied to large-scale experiments and account for impeding factors. We present a robust, computationally inexpensive 2.5D method for the segmentation of spheroid cultures and for counting proliferating cells within them. The spheroids are assumed to be approximately ellipsoidal in shape. They are identified from information present in the Maximum Intensity Projection (MIP) and the corresponding height view, also known as the Z-buffer. The method compensates for signal attenuation and alerts the user when potential bias-introducing factors cannot be compensated for. PMID:27303813

  4. Ellipsoid Segmentation Model for Analyzing Light-Attenuated 3D Confocal Image Stacks of Fluorescent Multi-Cellular Spheroids.

    PubMed

    Barbier, Michaël; Jaensch, Steffen; Cornelissen, Frans; Vidic, Suzana; Gjerde, Kjersti; de Hoogt, Ronald; Graeser, Ralph; Gustin, Emmanuel; Chong, Yolanda T

    2016-01-01

    In oncology, two-dimensional in-vitro culture models are the standard test beds for the discovery and development of cancer treatments, but in recent decades evidence has emerged that such models have low predictive value for clinical efficacy. They are therefore increasingly complemented by more physiologically relevant 3D models, such as spheroid micro-tumor cultures. If suitable fluorescent labels are applied, confocal 3D image stacks can characterize the structure of such volumetric cultures and, for example, cell proliferation. However, several issues hamper accurate analysis. In particular, signal attenuation within the tissue of the spheroids prevents the acquisition of a complete image for spheroids over 100 micrometers in diameter, and quantitative analysis of large 3D image data sets is challenging, creating a need for methods which can be applied to large-scale experiments and account for impeding factors. We present a robust, computationally inexpensive 2.5D method for the segmentation of spheroid cultures and for counting proliferating cells within them. The spheroids are assumed to be approximately ellipsoidal in shape. They are identified from information present in the Maximum Intensity Projection (MIP) and the corresponding height view, also known as the Z-buffer. The method compensates for signal attenuation and alerts the user when potential bias-introducing factors cannot be compensated for. PMID:27303813
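The MIP and height-view (Z-buffer) inputs that such a 2.5D method works from can be computed directly from the confocal stack. The sketch below is illustrative only, with hypothetical names, assuming a numpy array ordered (z, y, x):

```python
import numpy as np

def mip_and_zbuffer(stack):
    """Collapse a 3-D image stack (z, y, x) into the two 2-D views used by
    2.5-D spheroid analysis: the Maximum Intensity Projection and the
    height view (Z-buffer) holding the z-index of each pixel's maximum."""
    mip = stack.max(axis=0)       # brightest value along z per (y, x) pixel
    zbuf = stack.argmax(axis=0)   # z-slice index where that maximum occurs
    return mip, zbuf
```

The ellipsoid fit and attenuation compensation described in the abstract would then operate on these two 2-D arrays instead of the full volume.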

  5. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, had not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. Evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. Evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  6. In-vivo Optical Tomography of Small Scattering Specimens: time-lapse 3D imaging of the head eversion process in Drosophila melanogaster

    PubMed Central

    Arranz, Alicia; Dong, Di; Zhu, Shouping; Savakis, Charalambos; Tian, Jie; Ripoll, Jorge

    2014-01-01

    Even though in vivo imaging approaches have witnessed several new and important developments, specimens that exhibit high light scattering, such as Drosophila melanogaster pupae, are still not easily accessible with current optical imaging techniques, which obtain images only from subsurface features. This means that in order to obtain 3D volumetric information these specimens need to be studied either after fixation and a chemical clearing process, through an imaging window (thus perturbing physiological development), or during early stages of development when the scattering contribution is negligible. In this paper we showcase how Optical Projection Tomography may be used to obtain volumetric images of the head eversion process in vivo in Drosophila melanogaster pupae, in both control and headless mutant specimens. Additionally, we demonstrate the use of Helical Optical Projection Tomography (hOPT) as a tool for high-throughput 4D imaging of several specimens simultaneously. PMID:25471694

  7. In-vivo optical tomography of small scattering specimens: time-lapse 3D imaging of the head eversion process in Drosophila melanogaster.

    PubMed

    Arranz, Alicia; Dong, Di; Zhu, Shouping; Savakis, Charalambos; Tian, Jie; Ripoll, Jorge

    2014-01-01

    Even though in vivo imaging approaches have witnessed several new and important developments, specimens that exhibit high light scattering, such as Drosophila melanogaster pupae, are still not easily accessible with current optical imaging techniques, which obtain images only from subsurface features. This means that in order to obtain 3D volumetric information these specimens need to be studied either after fixation and a chemical clearing process, through an imaging window (thus perturbing physiological development), or during early stages of development when the scattering contribution is negligible. In this paper we showcase how Optical Projection Tomography may be used to obtain volumetric images of the head eversion process in vivo in Drosophila melanogaster pupae, in both control and headless mutant specimens. Additionally, we demonstrate the use of Helical Optical Projection Tomography (hOPT) as a tool for high-throughput 4D imaging of several specimens simultaneously. PMID:25471694

  8. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Which image locations are laid over depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracy, thereby rivaling the best IFSAR capabilities in fine-resolution SAR images. All that is required are two distinct passes with suitably different geometries from any ordinary SAR.
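Under a much-simplified flat-earth, broadside model in which a target at height h is displaced in slant range by roughly -h*sin(depression angle), two passes at different depression angles let the height be solved for from the two measured layover offsets. The sketch below is an assumption-laden toy model, not the report's processing chain; all names are hypothetical:

```python
import math

def height_from_layover(dr1, dr2, dep1_deg, dep2_deg):
    """Estimate target height from layover-induced slant-range offsets
    (dr1, dr2) measured in two SAR images taken at depression angles
    dep1_deg and dep2_deg. Toy model: dr = -h * sin(depression),
    so h = (dr2 - dr1) / (sin(dep1) - sin(dep2))."""
    s1 = math.sin(math.radians(dep1_deg))
    s2 = math.sin(math.radians(dep2_deg))
    return (dr2 - dr1) / (s1 - s2)
```

The denominator shows why "suitably different geometries" matter: as the two depression angles approach each other, the stereo baseline vanishes and the height estimate becomes ill-conditioned.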

  9. Thin client performance for remote 3-D image display.

    PubMed

    Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin

    2003-01-01

    Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance under conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.

  10. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
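The OSEM step at the heart of a FORE+OSEM chain can be sketched for a toy dense system matrix as below. The factorized, sparse projectors and the non-stationary detector-blur model of the actual ASPIRE implementation are beyond this illustration; all names and the dense-matrix formulation are assumptions for the example:

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10):
    """Toy ordered-subsets EM for emission tomography. A is the (dense)
    system matrix mapping the image x to expected counts y. Each subset
    update multiplies x by the back-projected ratio of measured to
    predicted counts, normalized by the subset sensitivity image."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            sens = As.sum(axis=0)                      # subset sensitivity
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Using subsets of the data for each multiplicative update is what gives OSEM its acceleration over plain MLEM; modelling detector blur amounts to folding an extra blurring factor into A, as the FORE+OSEM(DB) algorithm in the abstract does.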

  11. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second based on 3-D plane- and diverging-wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to image blood vessels in humans when using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first established by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions were also applied in the carotid artery and the jugular vein of one healthy volunteer.

  12. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second based on 3-D plane- and diverging-wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to image blood vessels in humans when using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first established by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions were also applied in the carotid artery and the jugular vein of one healthy volunteer. PMID:26276956
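The voxel-wise power Doppler processing mentioned above can be sketched on an ensemble of beamformed IQ volumes. The mean-subtraction wall filter below is the crudest possible clutter-rejection choice, used only to keep the example short (real systems use high-pass or SVD clutter filters); names and array layout are assumptions:

```python
import numpy as np

def power_doppler(iq):
    """Voxel-wise power Doppler from an ensemble of beamformed complex IQ
    volumes shaped (n_frames, z, y, x). Slow-time mean subtraction removes
    (perfectly static) tissue clutter; the remaining power highlights flow."""
    filtered = iq - iq.mean(axis=0, keepdims=True)   # crude wall filter
    return np.mean(np.abs(filtered) ** 2, axis=0)    # Doppler power per voxel
```

With the thousands of volumes per second that ultrafast compounding provides, the slow-time ensemble per voxel is long enough to make this power estimate highly sensitive to slow flow in small vessels.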

  13. Review of three-dimensional (3D) surface imaging for oncoplastic, reconstructive and aesthetic breast surgery.

    PubMed

    O'Connell, Rachel L; Stevens, Roger J G; Harris, Paul A; Rusby, Jennifer E

    2015-08-01

    Three-dimensional surface imaging (3D-SI) is being marketed as a tool in aesthetic breast surgery. It has recently also been studied for the objective evaluation of the cosmetic outcome of oncological procedures. The aim of this review is to summarise the use of 3D-SI in oncoplastic, reconstructive and aesthetic breast surgery. An extensive literature review was undertaken to identify published studies. Two reviewers independently screened all abstracts and selected relevant articles using specific inclusion criteria. Seventy-two articles relating to 3D-SI for breast surgery were identified. These covered topics such as image acquisition, calculations and data obtainable, comparison of 3D and 2D imaging, and clinical research applications of 3D-SI. The literature provides a favourable view of 3D-SI. However, evidence of its superiority over current methods of clinical decision making, surgical planning, communication and evaluation of outcome is required before it can be accepted into mainstream practice.

  14. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    Orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with a single root or with multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The tooth axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, while the estimation of the tooth axes of missing teeth is modeled as an interpolation problem over quaternions along a 3D curve. The proposed methods can either avoid the difficult tooth-segmentation problem or improve on the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different 3D CT images from the clinic.
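Computing the principal direction of a segmented pulp-cavity voxel cloud is essentially a PCA step: the dominant eigenvector of the coordinate covariance gives the axis. A minimal sketch (hypothetical names, assuming the voxel coordinates have already been segmented out of the CT volume):

```python
import numpy as np

def principal_axis(voxel_coords):
    """Principal direction of a segmented voxel cloud of shape (n, 3),
    e.g. a tooth's pulp cavity: the eigenvector of the coordinate
    covariance matrix with the largest eigenvalue, as a unit vector."""
    pts = np.asarray(voxel_coords, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axis = eigvecs[:, -1]                    # eigenvector of largest eigenvalue
    return axis / np.linalg.norm(axis)
```

The returned direction is defined only up to sign, so a consistent orientation convention (e.g. crown-to-root) would have to be imposed afterwards.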

  15. 3D flow visualization and tomographic particle image velocimetry for vortex breakdown over a non-slender delta wing

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2016-06-01

    Volumetric measurement of leading-edge vortex (LEV) breakdown over a delta wing has been conducted by three-dimensional (3D) flow visualization and tomographic particle image velocimetry (TPIV). The 3D flow visualization is employed to show the vortex structures, recorded by four high-resolution cameras. 3D dye streaklines of the visualization are reconstructed in a manner similar to particle reconstruction in TPIV. Tomographic PIV is carried out at the same time using the same cameras as the dye visualization. The Q criterion is employed to identify the LEV. Results of tomographic PIV agree well with the reconstructed 3D dye streaklines, which supports the validity of the measurements. The time-averaged flow field based on TPIV is shown and described by sections of velocity and streamwise vorticity. Combining the two measurement methods sheds light on the complex structures of both the bubble and the spiral types of breakdown. The breakdown position is identified by investigating both the streaklines and the TPIV velocity fields. Proper orthogonal decomposition (POD) is applied to extract a pair of conjugate helical instability modes from the TPIV data, and the dominant frequency of the instability modes is obtained from the corresponding POD coefficients using wavelet-transform analysis.
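The Q criterion used to identify the vortex compares rotation to strain, Q = ½(‖Ω‖² − ‖S‖²), where S and Ω are the symmetric and antisymmetric parts of the velocity gradient tensor; vortex cores are marked where Q > 0. A minimal sketch on a uniform grid (hypothetical names and grid conventions, not the authors' code):

```python
import numpy as np

def q_criterion(u, v, w, spacing=1.0):
    """Q criterion on a uniform grid. u, v, w are velocity components on
    arrays indexed (x, y, z). Returns Q = 0.5*(||Omega||^2 - ||S||^2),
    positive where rotation dominates strain (vortex cores)."""
    # grads[i][j] = d(u_i)/d(x_j), shape (3, 3, nx, ny, nz)
    grads = np.array([np.gradient(c, spacing) for c in (u, v, w)])
    S = 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))  # strain-rate tensor
    O = 0.5 * (grads - grads.transpose(1, 0, 2, 3, 4))  # rotation tensor
    return 0.5 * (np.sum(O**2, axis=(0, 1)) - np.sum(S**2, axis=(0, 1)))
```

For a solid-body rotation (pure vortex, zero strain) Q is uniformly positive, which is the behaviour the vortex-identification step relies on.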

  16. Light sheet adaptive optics microscope for 3D live imaging

    NASA Astrophysics Data System (ADS)

    Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

    2013-02-01

    We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within in vivo selected samples. SPIM provides a very interesting system for the incorporation of AO as the illumination and imaging paths are decoupled and AO may be useful in both paths. In this paper, we will report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample in addition to the aberration from the biological sample itself.

  17. Combining terrestrial stereophotogrammetry, DGPS and GIS-based 3D voxel modelling in the volumetric recording of archaeological features

    NASA Astrophysics Data System (ADS)

    Orengo, Hector A.

    2013-02-01

    Archaeological recording of structures and excavations in high mountain areas is greatly hindered by the scarce availability of both space, to transport material, and time. The Madriu-Perafita-Claror, InterAmbAr and PCR Mont Lozère high mountain projects have documented hundreds of archaeological structures and carried out many archaeological excavations. These projects required the development of a technique which could record both structures and the process of an archaeological excavation in a fast and reliable manner. The combination of DGPS, close-range terrestrial stereophotogrammetry and voxel based GIS modelling offered a perfect solution since it helped in developing a strategy which would obtain all the required data on-site fast and with a high degree of precision. These data are treated off-site to obtain georeferenced orthoimages covering both the structures and the excavation process from which site and excavation plans can be created. The proposed workflow outputs also include digital surface models and volumetric models of the excavated areas from which topography and archaeological profiles were obtained by voxel-based GIS procedures. In this way, all the graphic recording required by standard archaeological practices was met.

  18. Retrospective evaluation of dosimetric quality for prostate carcinomas treated with 3D conformal, intensity modulated and volumetric modulated arc radiotherapy

    SciTech Connect

    Crowe, Scott B; Kairn, Tanya; Middlebrook, Nigel; Hill, Brendan; Christie, David R H; Knight, Richard T; Kenny, John; Langton, Christian M; Trapp, Jamie V

    2013-12-15

    This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated across five centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 with intensity modulated radiotherapy (IMRT) and 47 with volumetric modulated arc therapy (VMAT). Treatment plan quality was evaluated in terms of target dose homogeneity and dose to organs at risk (OAR), through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each OAR. Statistical significance was evaluated using two-tailed Welch's t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the OAR, with increased compliance with recommended OAR dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. This study indicates that IMRT and VMAT provide similar dosimetric quality, which is superior to that achieved with 3DCRT.
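As a rough illustration of the kind of dose metrics involved, the sketch below computes an ICRU-83-style homogeneity index and a simple conformity index from a toy dose grid; the exact metric definitions, masks and thresholds used in the study may differ:

```python
import numpy as np

def homogeneity_index(ptv_doses):
    """ICRU 83-style HI = (D2% - D98%) / D50%, from voxel doses inside the PTV.
    D2% is the dose received by the hottest 2% of the volume, i.e. the 98th percentile."""
    d2, d50, d98 = np.percentile(ptv_doses, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(dose, ptv_mask, prescription):
    """Simple CI = (volume receiving >= prescription) / (PTV volume)."""
    return np.count_nonzero(dose >= prescription) / np.count_nonzero(ptv_mask)

# Toy dose grid: PTV voxels near the prescription dose, background well below it.
rng = np.random.default_rng(2)
dose = np.full((20, 20, 20), 10.0)
ptv = np.zeros(dose.shape, dtype=bool)
ptv[5:15, 5:15, 5:15] = True
dose[ptv] = rng.normal(60.0, 1.0, ptv.sum())
hi = homogeneity_index(dose[ptv])
ci = conformity_index(dose, ptv, prescription=58.0)
```

Lower HI means a more homogeneous target dose; a CI near 1 means the prescription isodose closely matches the PTV volume.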

  19. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    PubMed

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    Objective approaches to 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to these newly introduced challenges. This is verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. To meet these challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely binocular combination and binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency can be reached between the measured MOS and the proposed metrics, with a correlation coefficient of up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of synthesized color-plus-depth 3D images well. We therefore believe that binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  20. Estimation of the degree of polarization in low-light 3D integral imaging

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2016-06-01

    The calculation of the Stokes parameters and the degree of polarization in 3D integral images requires careful manipulation of the polarimetric elemental images. This is particularly important if the scenes are taken in low-light conditions. In this paper, we show that the degree of polarization can be effectively estimated even when elemental images are recorded with few photons. The original idea was communicated in [A. Carnicer and B. Javidi, "Polarimetric 3D integral imaging in photon-starved conditions," Opt. Express 23, 6408-6417 (2015)]. First, we use the maximum likelihood estimation approach for generating the 3D integral image. Nevertheless, this method produces very noisy images, and thus the degree of polarization cannot be calculated directly. We suggest using a total variation denoising filter to improve the quality of the generated 3D images. As a result, noise is suppressed but high-frequency information is preserved. Finally, the degree of polarization is obtained successfully.
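For linear polarimetry, the Stokes parameters and the degree of (linear) polarization follow directly from four polarizer-channel intensities. A minimal sketch; the four-channel setup and the neglect of the circular component S3 are common simplifications assumed here for illustration, not necessarily the paper's exact configuration:

```python
import numpy as np

def degree_of_polarization(I0, I45, I90, I135):
    """Degree of linear polarization from four linear-polarizer intensity images.
    S3 (circular component) is assumed negligible, a common simplification."""
    S0 = I0 + I90            # total intensity
    S1 = I0 - I90            # horizontal vs vertical preference
    S2 = I45 - I135          # +45 vs -45 degree preference
    return np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)

# Fully polarized toy pixel (all light passes the 0-degree channel).
dop_pol = degree_of_polarization(np.array([1.0]), np.array([0.5]),
                                 np.array([0.0]), np.array([0.5]))
# Unpolarized toy pixel (equal intensity in every channel).
dop_unpol = degree_of_polarization(np.array([0.5]), np.array([0.5]),
                                   np.array([0.5]), np.array([0.5]))
```

In the photon-starved setting the abstract describes, the same formula would be applied after the ML estimation and denoising steps.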

  1. Advances in Image Pre-Processing to Improve Automated 3d Reconstruction

    NASA Astrophysics Data System (ADS)

    Ballabeni, A.; Apollonio, F. I.; Gaiani, M.; Remondino, F.

    2015-02-01

    Tools and algorithms for automated image processing and 3D reconstruction have become more and more available, giving the possibility to process any dataset of unoriented and markerless images. Typically, dense 3D point clouds (or textured 3D polygonal models) are produced in reasonable processing time. In this paper, we evaluate how the radiometric pre-processing of image datasets (particularly in RAW format) can help in improving the performance of state-of-the-art automated image processing tools. Besides a review of common pre-processing methods, an efficient pipeline based on color enhancement, image denoising, RGB-to-gray conversion and image content enrichment is presented. The performed tests, partly reported for the sake of space, demonstrate how effective image pre-processing, which considers the entire dataset under analysis, can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poorly textured scenarios.
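Two of the pipeline stages, RGB-to-gray conversion and a simple form of content enhancement, can be sketched as follows; the BT.601 luma weights and the percentile contrast stretch are common defaults, not necessarily the authors' choices:

```python
import numpy as np

def rgb_to_gray(img, weights=(0.299, 0.587, 0.114)):
    """Weighted RGB-to-gray conversion (ITU-R BT.601 luma weights by default)."""
    w = np.asarray(weights, dtype=float)
    return img[..., :3] @ (w / w.sum())

def stretch_contrast(gray, low_pct=1, high_pct=99):
    """Percentile contrast stretch, a simple stand-in for 'content enrichment'."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    return np.clip((gray - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

rng = np.random.default_rng(3)
rgb = rng.random((32, 32, 3))                 # toy RGB image in [0, 1]
gray = stretch_contrast(rgb_to_gray(rgb))
```

A real pipeline would tune the gray conversion per dataset (e.g. to preserve texture contrast for feature matching) rather than use fixed luma weights.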

  2. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, computer-generated holograms (CGH) calculated from real, existing objects have been actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing within a suitable navigation range. After a unified 3-D point-source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed through numerical reconstruction that 3-D scenes are faithfully reconstructed.

  3. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
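The idea of a three-dimensional wavelet decomposition can be illustrated with a single-level separable Haar transform applied along each axis in turn; ICER-3D itself uses a different filter bank and decomposition structure, so this is only a sketch of the general technique:

```python
import numpy as np

def haar_step(a, axis):
    """One Haar analysis step along an axis: even/odd pairs -> (average, difference)."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi], axis=0), 0, axis)

def haar3d(volume):
    """Single-level 3-D separable Haar transform: apply the 1-D step to all axes.
    Smooth data concentrates into the low-low-low corner (energy compaction)."""
    out = volume.astype(float)
    for axis in range(3):
        out = haar_step(out, axis)
    return out

# A constant volume compacts entirely into the LLL subband; every detail
# coefficient is exactly zero.
coeffs = haar3d(np.ones((4, 4, 4)))
```

For a hyperspectral cube, the same separable idea is applied with longer filters and recursively on the low-pass subband, which is what exposes the cross-band correlations the compressor exploits.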

  4. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement multiview capability. A number of static or animated contemporary views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models, and didactic animations and movies have been realized as well.

  5. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
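The block-averaging idea, trading temporal resolution for signal quality by averaging incoming raw-data frames before reconstruction, can be sketched as follows; the class and block size are illustrative, not the framework's actual API:

```python
import numpy as np

class BlockAverager:
    """Average a fixed number of incoming raw-data frames before reconstruction,
    trading temporal resolution for SNR (a sketch of the block-averaging idea)."""

    def __init__(self, block_size):
        self.block_size = block_size
        self._buffer = []

    def push(self, frame):
        """Add one frame; return the block average when the block is full, else None."""
        self._buffer.append(np.asarray(frame, dtype=float))
        if len(self._buffer) == self.block_size:
            avg = np.mean(self._buffer, axis=0)
            self._buffer.clear()
            return avg
        return None

# 16 noisy frames at block size 8 yield 2 averaged blocks with reduced noise.
rng = np.random.default_rng(4)
truth = np.ones(16)
averager = BlockAverager(block_size=8)
avg_frames = []
for _ in range(16):
    out = averager.push(truth + 0.5 * rng.standard_normal(16))
    if out is not None:
        avg_frames.append(out)
```

An adaptive variant would grow or shrink `block_size` at runtime so that averaging keeps pace with the available reconstruction time.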

  7. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  8. Contactless operating table control based on 3D image processing.

    PubMed

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance of, and affinity for, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments like the operating room. Here, manifold medical disciplines cause a great variety of procedures, and thus staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk to the process. The proposed operating table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system with a System Usability Scale (SUS) score, after Brooke, of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, as such interfaces become safer and more direct.

  10. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  11. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step for the better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell-type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving large-scale drug testing as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  12. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.

  14. Effect of anatomical backgrounds on detectability in volumetric cone beam CT images

    NASA Astrophysics Data System (ADS)

    Han, Minah; Park, Subok; Baek, Jongduk

    2016-03-01

    As anatomical noise is often a dominating factor affecting signal detection in medical imaging, we investigate the effects of anatomical backgrounds on signal detection in volumetric cone beam CT images. Signal detection performances are compared between transverse and longitudinal planes with either uniform or anatomical backgrounds. Sphere objects with diameters of 1 mm, 5 mm, 8 mm, and 11 mm are used as the signals. Three-dimensional (3D) anatomical backgrounds are generated using an anatomical noise power spectrum, 1/f^β with β = 3, equivalent to mammographic background [1]. The mean voxel value of the 3D anatomical backgrounds is used as the attenuation coefficient of the uniform background. Noisy projection data are acquired by forward projection of the uniform and anatomical 3D backgrounds with and without sphere lesions, and by the addition of quantum noise. Images are then reconstructed by an FDK algorithm [2]. For each signal size, signal detection performances in transverse and longitudinal planes are measured by calculating the task SNR of a channelized Hotelling observer with Laguerre-Gauss channels. In the uniform background case, transverse planes yield higher task SNR values for all sphere diameters except 1 mm. In the anatomical background case, longitudinal planes yield higher task SNR values for all signal diameters. The results indicate that it is beneficial to use longitudinal planes to detect spherical signals in anatomical backgrounds.
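Backgrounds with a 1/f^β power spectrum can be generated by shaping white Gaussian noise in the Fourier domain, a standard technique for mimicking mammographic-like anatomical clutter. A minimal numpy sketch; the grid size and normalization are illustrative:

```python
import numpy as np

def power_law_background(shape, beta=3.0, seed=0):
    """3-D random field with power spectrum ~ 1/f^beta, made by filtering
    white Gaussian noise in the Fourier domain."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    freqs = [np.fft.fftfreq(n) for n in shape]
    fx, fy, fz = np.meshgrid(*freqs, indexing="ij")
    f = np.sqrt(fx**2 + fy**2 + fz**2)
    f[0, 0, 0] = 1.0                       # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)         # power ~ amplitude^2 ~ 1/f^beta
    amplitude[0, 0, 0] = 0.0               # zero out DC for a zero-mean field
    spectrum = np.fft.fftn(white) * amplitude
    return np.fft.ifftn(spectrum).real

bg = power_law_background((32, 32, 32), beta=3.0)
```

Scaling the field and adding the desired mean voxel value then gives an attenuation map ready for forward projection.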

  15. Space Radar Image Isla Isabela in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing.

  16. Radar Imaging of Spheres in 3D using MUSIC

    SciTech Connect

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for one and two spheres. The performance for the three-sphere configurations is complicated by shadowing effects and the greater range of the third sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
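The SVD-plus-MUSIC-functional recipe can be illustrated on a toy one-dimensional array problem: take the SVD of the measurement covariance, keep the noise-subspace singular vectors, and evaluate 1/||E_n^H a||^2 over a scan grid. The array geometry and source below are made up for illustration and are much simpler than the sphere-scattering setup in the abstract:

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC functional 1 / ||E_n^H a||^2 from the noise subspace of R."""
    U, s, Vh = np.linalg.svd(R)
    En = U[:, n_sources:]                 # singular vectors spanning the noise subspace
    proj = En.conj().T @ steering         # project each steering vector on it
    return 1.0 / np.sum(np.abs(proj)**2, axis=0)

# Toy uniform linear array: 8 half-wavelength-spaced elements, one source at 20 deg.
n_elem, true_deg = 8, 20.0
def steer(deg):
    k = np.pi * np.sin(np.deg2rad(deg))
    return np.exp(1j * k * np.arange(n_elem))

rng = np.random.default_rng(5)
snaps = np.outer(steer(true_deg),
                 rng.standard_normal(100) + 1j * rng.standard_normal(100))
snaps += 0.05 * (rng.standard_normal(snaps.shape) + 1j * rng.standard_normal(snaps.shape))
R = snaps @ snaps.conj().T / snaps.shape[1]   # sample covariance ("response") matrix

grid = np.arange(-90.0, 90.5, 0.5)
steering = np.stack([steer(d) for d in grid], axis=1)
spec = music_spectrum(R, steering, n_sources=1)
est = grid[np.argmax(spec)]
```

The peak of `spec` localizes the source; the radar problem replaces steering vectors with Green's-function vectors to candidate scatterer positions.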

  17. Optimized Volumetric Modulated Arc Therapy Versus 3D-CRT for Early Stage Mediastinal Hodgkin Lymphoma Without Axillary Involvement: A Comparison of Second Cancers and Heart Disease Risk

    SciTech Connect

    Filippi, Andrea Riccardo; Ragona, Riccardo; Piva, Cristina; Scafa, Davide; Fiandra, Christian; Fusella, Marco; Giglioli, Francesca Romana; Lohr, Frank; Ricardi, Umberto

    2015-05-01

    Purpose: The purpose of this study was to evaluate the risks of second cancers and cardiovascular diseases associated with an optimized volumetric modulated arc therapy (VMAT) planning solution in a selected cohort of stage I/II Hodgkin lymphoma (HL) patients treated with either involved-node or involved-site radiation therapy, in comparison with 3-dimensional conformal radiation therapy (3D-CRT). Methods and Materials: Thirty-eight patients (13 males and 25 females) were included. Disease extent was mediastinum alone (n=8, 21.1%); mediastinum plus unilateral neck (n=19, 50%); and mediastinum plus bilateral neck (n=11, 28.9%). Prescription dose was 30 Gy in 2-Gy fractions. Only 5 patients had mediastinal bulky disease at diagnosis (13.1%). Anteroposterior 3D-CRT was compared with a multiarc optimized VMAT solution. Lung, breast, and thyroid cancer risks were estimated by calculating a lifetime attributable risk (LAR), with a LAR ratio (LAR_VMAT to LAR_3D-CRT) as a comparative measure. Cardiac toxicity risks were estimated by calculating absolute excess risk (AER). Results: The LAR ratio favored 3D-CRT for lung cancer induction risk in mediastinum-alone (P=.004) and mediastinum plus unilateral neck (P=.02) presentations. The LAR ratio for breast cancer was lower for VMAT in mediastinum plus bilateral neck presentations (P=.02), without differences for other sites. For thyroid cancer, no significant differences were observed, regardless of anatomical presentation. A significantly lower AER of cardiac (P=.038) and valvular diseases (P<.0001) was observed for VMAT regardless of disease extent. Conclusions: In a cohort of patients with favorable characteristics in terms of disease extent at diagnosis (large prevalence of nonbulky presentations without axillary involvement), optimized VMAT reduced heart disease risk with comparable risks of thyroid and breast cancer, with an increase in lung cancer induction probability. The results are however strongly influenced by

  18. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread, which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also contains a CPU display thread, which uses OpenGL rendering (quad buffers) and gathers user input for digital zoom and pan, sending it to the processing thread.
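The block-matching disparity step can be sketched with a brute-force sum-of-absolute-differences search; the real player restricts matching to vertical-edge keypoints and runs on the GPU, whereas this toy CPU version scans every interior pixel:

```python
import numpy as np

def disparity_map(left, right, block=7, max_disp=16):
    """Brute-force SAD block matching: for each block in the left image, find the
    horizontal shift into the right image with the minimum absolute difference."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Toy stereo pair: the right image is the left image shifted left by 4 pixels,
# so every interior pixel should match at disparity 4.
rng = np.random.default_rng(6)
left = rng.random((32, 48))
shift = 4
right = np.roll(left, -shift, axis=1)
disp_map = disparity_map(left, right, block=7, max_disp=8)
```

The histogram of `disp_map` over the valid region then gives the disparity range used for convergence-point selection.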

  19. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  20. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate

    PubMed Central

    Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-01-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were detected similarly by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values. PMID:26693303
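
    The agreement analysis described above pairs a correlation coefficient with the systematic difference (bias) between methods. A minimal sketch, assuming paired global strain values and using a standard Bland-Altman-style summary; this is not the study's actual statistics code:

```python
import numpy as np

def agreement(strain_2d, strain_3d):
    """Pearson correlation plus Bland-Altman bias and limits of agreement.

    Inputs are paired global strain values (hypothetical data, in %).
    Returns (correlation, bias, (lower limit, upper limit)).
    """
    strain_2d = np.asarray(strain_2d, dtype=float)
    strain_3d = np.asarray(strain_3d, dtype=float)
    r = np.corrcoef(strain_2d, strain_3d)[0, 1]     # correlation
    diff = strain_3d - strain_2d
    bias = diff.mean()                              # systematic difference
    loa = 1.96 * diff.std(ddof=1)                   # 95% limits of agreement
    return r, bias, (bias - loa, bias + loa)
```

    A high correlation with a nonzero bias is exactly the pattern the abstract reports: the methods track each other well but differ systematically.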

  1. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of tangible cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired with a DSLR camera, which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter; it is therefore affordable for public users and convenient for accessing narrow areas. The acquired images cover various sculptures and architectural elements in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. The 3D modeling workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching and point cloud processing. For this initial work, small heritage objects less than 3 meters in height were considered for the experimental results. A set of multi-view images of an object of interest serves as the input data for 3D modeling. In our experiments, 3D models were obtained with the open-source MICMAC software developed by IGN, France. The output 3D models are represented in standard point cloud and triangulated-surface formats such as .ply, .off and .obj. Post-processing techniques, e.g. noise reduction, surface simplification and reconstruction, are applied to produce the final models. The reconstructed 3D models can be made publicly accessible via websites, DVDs or printed materials. Highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over time, natural disasters, etc.
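
    The standard .ply output format mentioned above is simple enough to write directly. A minimal sketch of an ASCII .ply writer (function name hypothetical; real pipelines would typically use a library, or the binary variant for large clouds):

```python
def write_ascii_ply(path, points, colors=None):
    """Write a point cloud to an ASCII .ply file.

    `points` is an iterable of (x, y, z) tuples; `colors` optionally
    holds matching (r, g, b) byte triples, as produced by a textured
    dense-matching pipeline.
    """
    points = list(points)
    with open(path, "w") as f:
        # Header: format declaration, vertex count, per-vertex properties.
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        if colors is not None:
            f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        # Body: one vertex per line.
        for i, (x, y, z) in enumerate(points):
            line = f"{x} {y} {z}"
            if colors is not None:
                r, g, b = colors[i]
                line += f" {r} {g} {b}"
            f.write(line + "\n")
```

    Files written this way open directly in common point-cloud viewers such as MeshLab or CloudCompare.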

  2. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements from single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of the anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor and a 250 μm pitch lenslet array with 300 sample beams, achieving an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to achieve over 1000 simultaneous A-scans in excess of 75 frames per second.
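
    The curvature estimation mentioned above can be illustrated with a textbook least-squares sphere fit to the surface points sampled by the parallel A-scan grid. This is the standard linearised algebraic fit, not the authors' algorithm, and the names are hypothetical:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to 3-D surface points.

    Solves |p|^2 = 2 p.c + (r^2 - |c|^2) for centre c and radius r,
    which is linear in the unknowns (c, r^2 - |c|^2). A corneal radius
    of curvature could be read off as r.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])    # unknowns: cx, cy, cz, d
    b = np.sum(p ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + centre @ centre)           # r^2 = d + |c|^2
    return centre, radius
```

    Because all A-scans in a frame are captured at the same instant, every point fed to the fit shares the same (unknown) eye pose, which is why such curvature estimates are robust to patient movement.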

  3. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and performance evaluations on real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
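
    The Differential Evolution optimiser named above is straightforward to sketch. A minimal rand/1/bin variant, shown here minimising a toy objective; the paper's actual objective combines an image-based dissimilarity measure and a gradient score over several aerial images:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal rand/1/bin Differential Evolution minimiser.

    `bounds` is a list of (low, high) pairs, one per parameter; F is the
    mutation factor and CR the crossover rate.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    costs = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: a + F * (b - c) from three distinct other members.
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, guaranteeing at least one mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            cost = f(trial)
            if cost <= costs[i]:
                pop[i], costs[i] = trial, cost
    best = np.argmin(costs)
    return pop[best], costs[best]
```

    In the paper's setting, each candidate vector would encode the polyhedral model's parameters and `f` would score the model directly against the calibrated images, which is what makes the approach featureless.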

  5. Boundary estimation method for ultrasonic 3D imaging

    NASA Astrophysics Data System (ADS)

    Ohashi, Gosuke; Ohya, Akihisa; Natori, Michiya; Nakajima, Masato

    1993-09-01

    The authors developed a new method for automatically and efficiently estimating the boundaries between soft tissue and amniotic fluid, and for obtaining a fine three-dimensional image of the fetus, from the information in ultrasonic echo images. The aim of this boundary estimation is to provide clear three-dimensional images by shading the surfaces of the fetus and uterine wall with the Lambert shading method. A random granular pattern called 'speckle' normally appears on an ultrasonic echo image, so it is difficult to estimate the soft-tissue boundary satisfactorily with a simple method such as thresholding. Accordingly, the authors devised a method for classifying voxels into three categories using a neural network: soft tissue, amniotic fluid and boundary. The judgment is based on the shape of the grey-level histogram of the region surrounding each voxel. Application to clinical data has shown a fine estimation of the boundary between the fetus or the uterine wall and the amniotic fluid, enabling the details of the three-dimensional structure to be observed.
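
    The Lambert shading step mentioned above is a one-line computation per surface point: brightness proportional to the clamped cosine between the surface normal and the light direction. A small sketch (the ambient term and names are illustrative assumptions, not from the paper):

```python
import numpy as np

def lambert_shade(normals, light_dir, ambient=0.1):
    """Lambert (diffuse) shading of surface points.

    `normals` is an (N, 3) array of surface normals and `light_dir` a
    single light direction; both are normalised internally. Returns a
    brightness in [0, 1] per point: ambient plus the clamped cosine
    between normal and light direction.
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    diffuse = np.clip(n @ l, 0.0, None)     # cos(angle), back faces -> 0
    return np.clip(ambient + (1.0 - ambient) * diffuse, 0.0, 1.0)
```

    In the paper's pipeline, the normals would come from the estimated fetal and uterine-wall boundary surfaces.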

  6. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quartermile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  8. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
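
    The linear step described above can be sketched directly: each of the (at least) eight point correspondences contributes one row of the epipolar constraint, and the singular vector with the smallest singular value gives the essential matrix up to scale. A minimal NumPy sketch; the subsequent decomposition of E into the motion and structure parameters is omitted:

```python
import numpy as np

def essential_from_eight_points(x1, x2):
    """Linear eight-point estimate of the essential matrix.

    `x1` and `x2` are (N, 2) arrays of corresponding points in
    normalised image coordinates (N >= 8). Each correspondence gives one
    row of the constraint x2^T E x1 = 0; the null vector of the stacked
    system yields E up to scale.
    """
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    ones = np.ones_like(u1)
    # Row-major vectorisation of E: [E11 E12 E13 E21 E22 E23 E31 E32 E33].
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, ones])
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)            # null vector of A, up to scale
    return E / np.linalg.norm(E)
```

    With exactly eight generic correspondences the system has a one-dimensional null space, which is the sense in which the abstract's solution is unique (up to the usual scale ambiguity).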

  9. Synthesis of image sequences for Korean sign language using 3D shape model

    NASA Astrophysics Data System (ADS)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for conveying information to, and enabling communication with, deaf people, who communicate by means of sign language that most hearing people are unfamiliar with. The method converts text data into the corresponding image sequences for Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; this general model must be constructed with the anatomical structure of the human body in mind. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise the facial expressions and the 3D movements of the head, trunk, arms and hands, and are parameterized so that the model can be deformed easily. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.

  10. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540
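
    As a concrete illustration of the canopy-height estimation mentioned above, a lidar point cloud can be rasterised into a canopy height model by grid binning. This is a deliberately simplified sketch with hypothetical names; real pipelines classify ground returns properly rather than taking the lowest return per cell:

```python
import numpy as np

def canopy_height_model(points, cell=1.0):
    """Rasterise a lidar point cloud into a canopy height model (CHM).

    Points (x, y, z) are binned onto a horizontal grid of the given cell
    size; the highest return in each cell, minus the lowest (taken here
    as ground), gives the local canopy height.
    """
    p = np.asarray(points, dtype=float)
    ix = ((p[:, 0] - p[:, 0].min()) // cell).astype(int)
    iy = ((p[:, 1] - p[:, 1].min()) // cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    top = np.full((nx, ny), -np.inf)
    ground = np.full((nx, ny), np.inf)
    for i, j, z in zip(ix, iy, p[:, 2]):
        top[i, j] = max(top[i, j], z)         # highest return per cell
        ground[i, j] = min(ground[i, j], z)   # lowest return per cell
    return np.where(np.isfinite(top), top - ground, 0.0)
```

    The same binning idea extends to the other gridded properties the review discusses, such as per-cell biomass or reflectance indices.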

  11. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    PubMed Central

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-01-01

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements. PMID:27657066
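
    The fast Gaussian gridding idea behind FGG-NUFFT can be illustrated in one dimension (the paper applies it in 3-D): spread the nonuniform samples onto an oversampled uniform grid with a truncated Gaussian, take an ordinary FFT, and deconvolve by the Gaussian's Fourier transform. A sketch following the standard Greengard-Lee parameter choices; this is not the authors' code:

```python
import numpy as np

def nufft1d_type1(x, c, M, eps=1e-12):
    """Type-1 NUFFT via fast Gaussian gridding (1-D sketch).

    Computes f_k = (1/N) * sum_j c_j * exp(i k x_j) for
    k = -M//2 .. M//2 - 1, with x_j nonuniform in [0, 2*pi).
    """
    N = len(x)
    R = 3                                   # oversampling ratio
    Msp = int(-np.log(eps) / (np.pi * (R - 1) / (R - 0.5)) + 0.5)
    Mr = max(R * M, 2 * Msp)                # fine-grid size
    tau = np.pi * (Msp / (R * (R - 0.5))) / M ** 2
    hx = 2 * np.pi / Mr
    # Spread each sample onto the fine grid with a truncated Gaussian.
    ftau = np.zeros(Mr, dtype=complex)
    mm = np.arange(-Msp, Msp)
    for xj, cj in zip(np.mod(x, 2 * np.pi), c):
        m = 1 + int(xj // hx)
        ftau[(m + mm) % Mr] += cj * np.exp(-0.25 * (xj - hx * (m + mm)) ** 2 / tau)
    # Uniform FFT of the fine grid (e^{+ikx} convention -> ifft).
    Ftau = np.fft.ifft(ftau)
    Ftau = np.concatenate([Ftau[-(M // 2):], Ftau[:M // 2 + M % 2]])
    # Deconvolve the Gaussian spreading kernel in the frequency domain.
    k = -(M // 2) + np.arange(M)
    return (1 / N) * np.sqrt(np.pi / tau) * np.exp(tau * k ** 2) * Ftau
```

    The gridding replaces the O(N*M) direct nonuniform sum with an O(N) spreading pass plus an ordinary FFT, which is what makes the down-sampled 3-D reconstructions in the paper practical.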

  12. Space Radar Image of Mammoth, California in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a Spaceborne Imaging Radar-C (SIR-C) radar image on a U.S. Geological Survey digital elevation map. Vertical exaggeration is 1.87 times. The image is centered at 37.6 degrees north, 119.0 degrees west. It was acquired from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on its 67th orbit on April 13, 1994. In this color representation, red is C-band HV-polarization, green is C-band VV-polarization and blue is the ratio of C-band VV to C-band HV. Blue areas are smooth, and yellow areas are rock out-crops with varying amounts of snow and vegetation. Crowley Lake is in the foreground, and Highway 395 crosses in the middle of the image. Mammoth Mountain is shown in the upper right. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  13. Space Radar Image of Missoula, Montana in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA

  14. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory.

  15. Space Radar Image of Long Valley, California in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are

  16. Space Radar Image of Karakax Valley, China 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues it may be able to give them about large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted

  17. How Accurate Are the Fusion of Cone-Beam CT and 3-D Stereophotographic Images?

    PubMed Central

    Jayaratne, Yasas S. N.; McGrath, Colman P. J.; Zwahlen, Roger A.

    2012-01-01

    Background: Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were 1) to evaluate the feasibility of integrating 3-D photos and CBCT images, 2) to assess the degree of error that may occur during the above processes and 3) to identify facial regions that would be most appropriate for 3-D image registration. Methodology: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where the distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. Principal Findings: The signed average and RMS of the distance differences between the registered surfaces were −0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. The largest errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. Conclusions: CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to the RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning. PMID:23185372
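
    The under-representation noted in the conclusions is easy to see numerically: signed distances of opposite sign cancel in the average but not in the RMS. A small sketch with hypothetical distances:

```python
import numpy as np

def registration_errors(distances):
    """Signed average vs RMS of surface distance differences (in mm).

    The signed average lets positive and negative surface distances
    cancel, so it under-represents the registration error that the
    RMS captures.
    """
    d = np.asarray(distances, dtype=float)
    signed_average = d.mean()
    rms = np.sqrt(np.mean(d ** 2))
    return signed_average, rms
```

    For example, distances of +0.7 mm and -0.7 mm give a signed average of 0 mm but an RMS of 0.7 mm, mirroring the gap between the study's -0.018 mm signed average and 0.739 mm RMS.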

  18. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: the data acquisition process, 3D data processing, and the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the large area, and scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is also restricted in many countries, which further motivates such ground-based approaches.

  19. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images which are taken by a same camera but after continually rotating the object by a small angle. Our approach requires a much simpler set up than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing the repeated imaging of a same animal transplanted with gene marked cells. In order to visualize in 3D the structure of the tumor, we also co-register the BLI-reconstructed crude structure with detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.
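The geometry behind this single-camera, small-rotation setup can be illustrated with a simplified model: under an orthographic camera and rotation about the vertical axis, a fixed 3-D point traces u(θ) = x·cosθ + z·sinθ horizontally while its vertical coordinate stays constant, so a least-squares fit over all views recovers the 3-D position. This is an illustrative simplification of the paper's reconstruction, with all names assumed:

```python
import numpy as np

def recover_3d_from_rotations(angles, u, v):
    """Recover a point's 3-D position from its 2-D image coordinates
    observed while the subject rotates in small steps before a fixed camera.

    Assumes an orthographic camera and rotation about the vertical (y)
    axis, so u(theta) = x*cos(theta) + z*sin(theta) and v(theta) = y.
    A least-squares fit over all views gives (x, z); y is the mean of
    the vertical coordinates.
    """
    A = np.column_stack([np.cos(angles), np.sin(angles)])
    xz, *_ = np.linalg.lstsq(A, u, rcond=None)
    return float(xz[0]), float(np.mean(v)), float(xz[1])
```

With at least two distinct rotation angles the system is well determined, which is why a sequence of small rotations suffices instead of multiple cameras.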

  20. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (XY) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x,y,λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. Axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629

  1. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  2. 3D/2D convertible projection-type integral imaging using concave half mirror array.

    PubMed

    Hong, Jisoo; Kim, Youngmin; Park, Soon-gi; Hong, Jong-Ho; Min, Sung-Wook; Lee, Sin-Doo; Lee, Byoungho

    2010-09-27

    We propose a new method for implementing a 3D/2D convertible feature in projection-type integral imaging by using a concave half mirror array. The concave half mirror array partially reflects incident light: the reflected component is modulated by the concave mirror array structure, while the transmitted component is unaffected. With this unique characteristic, 3D/2D conversion, or even the simultaneous display of 3D and 2D images, is possible. A prototype was fabricated by aluminum coating and a polydimethylsiloxane molding process. We experimentally verified 3D/2D conversion and the display of a 3D image on a 2D background with the fabricated prototype.

  3. Quantification of the aortic arch morphology in 3D CTA images for endovascular aortic repair (EVAR)

    NASA Astrophysics Data System (ADS)

    Wörz, S.; von Tengg-Kobligk, H.; Henninger, V.; Böckler, D.; Kauczor, H.-U.; Rohr, K.

    2008-03-01

    We introduce a new model-based approach for the segmentation and quantification of the aortic arch morphology in 3D CTA images for endovascular aortic repair (EVAR). The approach is based on a 3D analytic intensity model for thick vessels, which is directly fitted to the image. Based on the fitting results we compute the (local) 3D vessel curvature and torsion as well as the relevant lengths not only along the 3D centerline but particularly along the inner and outer contour. These measurements are important for pre-operative planning in EVAR applications. We have successfully applied our approach using ten 3D CTA images and have compared the results with ground truth obtained by a radiologist. It turned out that our approach yields accurate estimation results. We have also performed a comparison with a commercial vascular analysis software.
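The local 3-D curvature and torsion measurements mentioned in this record can be sketched with finite differences and the Frenet formulas κ = |r′×r″|/|r′|³ and τ = (r′×r″)·r‴/|r′×r″|², which are invariant under reparameterization of the centerline. This is a simplified illustration; the paper fits an analytic intensity model rather than differentiating a sampled polyline, and all names here are assumptions:

```python
import numpy as np

def frenet_curvature_torsion(points):
    """Estimate curvature and torsion along a sampled 3-D centerline.

    `points` is an (N, 3) array of centerline samples, assumed roughly
    uniformly spaced in the curve parameter. Derivatives are taken with
    central finite differences (np.gradient), then the Frenet formulas
    give per-point curvature and torsion.
    """
    d1 = np.gradient(points, axis=0)          # r'
    d2 = np.gradient(d1, axis=0)              # r''
    d3 = np.gradient(d2, axis=0)              # r'''
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(d1, axis=1)
    kappa = cross_norm / speed ** 3
    tau = np.einsum("ij,ij->i", cross, d3) / cross_norm ** 2
    return kappa, tau
```

On a helix of radius a and pitch parameter b the analytic values are κ = a/(a²+b²) and τ = b/(a²+b²), which a densely sampled polyline reproduces away from the endpoints.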

  4. Automated 3D whole-breast ultrasound imaging: results of a clinical pilot study

    NASA Astrophysics Data System (ADS)

    Leproux, Anaïs; van Beek, Michiel; de Vries, Ute; Wasser, Martin; Bakker, Leon; Cuisenaire, Olivier; van der Mark, Martin; Entrekin, Rob

    2010-03-01

    We present the first clinical results of a novel fully automated 3D breast ultrasound system. This system was designed to match a Philips diffuse optical mammography system to enable straightforward coregistration of optical and ultrasound images. During a measurement, three 3D transducers scan the breast from 4 different views. The resulting 12 datasets are registered together into a single volume using spatial compounding. In a pilot study, benign and malignant masses could be identified in the 3D images; however, lesion visibility is lower than in conventional breast ultrasound. Clear breast shape visualization suggests that ultrasound could support the reconstruction and interpretation of diffuse optical tomography images.
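Once the 12 per-view datasets are registered onto a common grid, the spatial compounding step itself reduces to voxel-wise averaging of the overlapping views, which suppresses speckle. A minimal sketch of that averaging step only, with registration assumed already done and all names illustrative:

```python
import numpy as np

def compound_volumes(volumes, masks):
    """Spatially compound co-registered ultrasound volumes.

    `volumes` is a list of arrays already resampled onto a common grid,
    and `masks` marks where each view holds valid data. Each output
    voxel is the mean of the views that cover it; voxels covered by no
    view are left at zero.
    """
    num = np.zeros_like(volumes[0], dtype=float)
    den = np.zeros_like(volumes[0], dtype=float)
    for vol, m in zip(volumes, masks):
        num += np.where(m, vol, 0.0)
        den += m.astype(float)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

Weighted variants (e.g. down-weighting views near their aperture edges) follow the same pattern by replacing the boolean masks with per-voxel weights.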

  5. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
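The low-attenuation-area extraction described here is, at its core, a threshold on Hounsfield units within the segmented lung. A minimal sketch of the LAA% index; the -950 HU cutoff is a widely used convention for emphysematous tissue, not necessarily the exact threshold of this paper, and all names are illustrative:

```python
import numpy as np

def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
    """Low-attenuation-area (LAA) fraction, a common emphysema index.

    `ct_hu` is a CT volume in Hounsfield units and `lung_mask` a boolean
    volume marking segmented lung voxels. Returns the percentage of lung
    voxels whose attenuation falls below the threshold.
    """
    lung_voxels = ct_hu[lung_mask]
    laa = lung_voxels < threshold_hu
    return 100.0 * laa.sum() / lung_voxels.size
```

Computing the index on baseline and follow-up scans of the same patient gives the kind of interval-change evaluation the abstract describes.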

  6. Astigmatic multifocus microscopy enables deep 3D super-resolved imaging

    PubMed Central

    Oudjedi, Laura; Fiche, Jean-Bernard; Abrahamsson, Sara; Mazenq, Laurent; Lecestre, Aurélie; Calmon, Pierre-François; Cerf, Aline; Nöllmann, Marcelo

    2016-01-01

    We have developed a 3D super-resolution microscopy method that enables deep imaging in cells. This technique relies on the effective combination of multifocus microscopy and astigmatic 3D single-molecule localization microscopy. We describe the optical system and the fabrication process of its key element, the multifocus grating. Then, two strategies for localizing emitters with our imaging method are presented and compared with a previously described deep 3D localization algorithm. Finally, we demonstrate the performance of the method by imaging the nuclear envelope of eukaryotic cells reaching a depth of field of ~4µm. PMID:27375935

  7. Astigmatic multifocus microscopy enables deep 3D super-resolved imaging.

    PubMed

    Oudjedi, Laura; Fiche, Jean-Bernard; Abrahamsson, Sara; Mazenq, Laurent; Lecestre, Aurélie; Calmon, Pierre-François; Cerf, Aline; Nöllmann, Marcelo

    2016-06-01

    We have developed a 3D super-resolution microscopy method that enables deep imaging in cells. This technique relies on the effective combination of multifocus microscopy and astigmatic 3D single-molecule localization microscopy. We describe the optical system and the fabrication process of its key element, the multifocus grating. Then, two strategies for localizing emitters with our imaging method are presented and compared with a previously described deep 3D localization algorithm. Finally, we demonstrate the performance of the method by imaging the nuclear envelope of eukaryotic cells reaching a depth of field of ~4µm.
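In astigmatic localization, a cylindrical lens makes the PSF widths along x and y depend oppositely on defocus, so comparing measured widths against a calibration table of widths versus z yields the axial position. A minimal lookup sketch of that step (the calibration shape and least-squares matching rule are illustrative, not the paper's exact procedure):

```python
import numpy as np

def z_from_astigmatism(sx, sy, calib_z, calib_sx, calib_sy):
    """Look up an emitter's axial position from measured PSF widths.

    `sx`, `sy` are the fitted Gaussian widths of one emitter; `calib_z`,
    `calib_sx`, `calib_sy` tabulate the widths measured on a calibration
    bead as a function of known defocus. Returns the calibration z whose
    width pair best matches the measurement in the least-squares sense.
    """
    cost = (calib_sx - sx) ** 2 + (calib_sy - sy) ** 2
    return float(calib_z[np.argmin(cost)])
```

A continuous fit (e.g. interpolating the cost curve around its minimum) refines this grid lookup; the multifocus grating in the paper extends the usable calibration range to the reported ~4 µm depth of field.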

  8. Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher.

    PubMed

    Wang, Qiong-Hua; Ji, Chao-Chao; Li, Lei; Deng, Huan

    2016-01-11

    In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented on the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view integral imaging 3D display is developed and shows good performance.

  9. Dual wavelength digital holography for 3D particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Grare, S.; Coëtmellec, S.; Allano, D.; Grehan, G.; Brunel, M.; Lebrun, D.

    2015-02-01

    A multi-exposure digital in-line hologram of a moving particle field is recorded at two different wavelengths and at different times. As a result, during the reconstruction step, each hologram can be independently and accurately reconstructed for each wavelength. This procedure avoids the superimposition of particle images that may be close to each other in multi-exposure holography. Feasibility is demonstrated using a standard particle sizing reticle, showing the potential of this method for particle velocity measurement.

  10. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  11. 3D Synchrotron Imaging of a Directionally Solidified Ternary Eutectic

    NASA Astrophysics Data System (ADS)

    Dennstedt, Anne; Helfen, Lukas; Steinmetz, Philipp; Nestler, Britta; Ratke, Lorenz

    2016-03-01

    For the first time, the microstructure of directionally solidified ternary eutectics is visualized in three dimensions, using a high-resolution X-ray tomography technique at the ESRF. The microstructure characterization is conducted with a photon energy chosen to clearly discriminate the three phases Ag2Al, Al2Cu, and α-aluminum solid solution. The reconstructed images illustrate the three-dimensional arrangement of the phases. The Ag2Al lamellae undergo splitting and merging as well as nucleation and disappearance events during directional solidification.

  12. 3D CARS image reconstruction and pattern recognition on SHG images

    NASA Astrophysics Data System (ADS)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based e.g. on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e. image pattern recognition and automated image classification, of nonlinear optical images still bears great possibilities for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen