Science.gov

Sample records for 3d volumetric imaging

  1. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which fuse into a volumetric image because of the after-image effect of the human eye. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  2. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method that combines volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3D images without contradiction between binocular convergence and focal accommodation.

  3. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    [DTIC report documentation page; only fragments are recoverable.] Authors: P. Soltan, J. Trias, W. Robinson, W. Dahlke. The report describes laser-generated 3D volumetric images on a rotating double helix, with the 3D displays computer controlled for group viewing with the naked eye. Fragmentary cited references include "... Literature, Costa Mesa, CA, July 1983" and "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr., Texas ...

  4. Registering preprocedure volumetric images with intraprocedure 3-D ultrasound using an ultrasound imaging model.

    PubMed

    King, A P; Rhode, K S; Ma, Y; Yao, C; Jansen, C; Razavi, R; Penney, G P

    2010-03-01

    For many image-guided interventions there exists a need to compute the registration between preprocedure image(s) and the physical space of the intervention. Real-time intraprocedure imaging such as ultrasound (US) can be used to image the region of interest directly and provide valuable anatomical information for computing this registration. Unfortunately, real-time US images often have poor signal-to-noise ratio and suffer from imaging artefacts. Therefore, registration using US images can be challenging and significant preprocessing is often required to make the registrations robust. In this paper we present a novel technique for computing the image-to-physical registration for minimally invasive cardiac interventions using 3-D US. Our technique uses knowledge of the physics of the US imaging process to reduce the amount of preprocessing required on the 3-D US images. To account for the fact that clinical US images normally undergo significant image processing before being exported from the US machine our optimization scheme allows the parameters of the US imaging model to vary. We validated our technique by computing rigid registrations for 12 cardiac US/magnetic resonance imaging (MRI) datasets acquired from six volunteers and two patients. The technique had mean registration errors of 2.1-4.4 mm, and 75% capture ranges of 5-30 mm. We also demonstrate how the same approach can be used for respiratory motion correction: on 15 datasets acquired from five volunteers the registration errors due to respiratory motion were reduced by 45%-92%.

  5. 3D texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Khandelwal, Niranjan

    2013-02-01

    In this paper we investigate a new approach to texture feature extraction using a co-occurrence matrix computed from volumetric lung CT images. Traditionally, texture analysis is performed in 2D and is suitable for images collected from 2D imaging modalities. The use of 3D imaging modalities provides scope for texture analysis of 3D objects, and 3D texture features represent a 3D object more realistically. In this work, Haralick's texture features are extended to 3D and computed from the volumetric data considering 26 neighbors. The optimal texture features to characterize the internal structure of solitary pulmonary nodules (SPN) are selected based on the area under the ROC curve (AUC) and p-values from a two-tailed Student's t-test. The selected 3D texture features representing SPN can be used in an efficient computer-aided diagnostic (CAD) design, which plays an important role in fast and accurate lung cancer screening. The reduced number of input features to the CAD system decreases the computational time and the classification errors caused by irrelevant features. In the present work, SPN are classified from ground glass nodules (GGN) using an artificial neural network (ANN) classifier considering the top five 3D texture features and the top five 2D texture features separately. The classification is performed on 92 SPN and 25 GGN from the Image Database Resource Initiative (IDRI) public database, and the classification accuracy using 3D texture features and 2D texture features is 97.17% and 89.1%, respectively.
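
    A minimal Python sketch of the 26-neighbour co-occurrence construction described above; the array names, grey-level count, and the two Haralick features shown are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import product

def glcm_3d(volume, levels=16):
    """Accumulate a co-occurrence matrix over the 26 3-D neighbour offsets."""
    edges = np.linspace(volume.min(), volume.max(), levels + 1)[1:-1]
    vol = np.digitize(volume, edges)                      # grey levels 0..levels-1
    glcm = np.zeros((levels, levels))
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]  # 26 neighbours
    for dz, dy, dx in offsets:
        a = vol[max(0, -dz):vol.shape[0] - max(0, dz),
                max(0, -dy):vol.shape[1] - max(0, dy),
                max(0, -dx):vol.shape[2] - max(0, dx)]
        b = vol[max(0, dz):vol.shape[0] + min(0, dz),
                max(0, dy):vol.shape[1] + min(0, dy),
                max(0, dx):vol.shape[2] + min(0, dx)]
        np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)      # co-occurring grey-level pairs
    return glcm / glcm.sum()

def haralick_subset(p):
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)   # Haralick contrast
    energy = np.sum(p ** 2)               # angular second moment
    return contrast, energy

nodule = np.random.default_rng(0).random((32, 32, 32))   # stand-in for a segmented SPN sub-volume
print(haralick_subset(glcm_3d(nodule)))
```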

  6. Volumetric medical image compression using 3D listless embedded block partitioning.

    PubMed

    Senapati, Ranjan K; Prasad, P M K; Swain, Gandharba; Shankar, T N

    2016-01-01

    This paper presents a listless variant of a modified three-dimensional (3D) block coding algorithm suitable for medical image compression. A higher degree of correlation is achieved by using a 3D hybrid transform. The 3D hybrid transform is performed by a wavelet transform in the spatial dimension and a Karhunen-Loève transform in the spectral dimension. The 3D transformed coefficients are arranged in a one-dimensional (1D) fashion, following the hierarchical nature of the wavelet-coefficient distribution. A novel listless block coding algorithm is applied to the mapped 1D coefficients, which are encoded in an ordered bit-plane fashion. The algorithm originates from the most significant bit plane and terminates at the least significant bit plane to generate an embedded bit stream, as in 3D-SPIHT. The proposed algorithm, called 3D hierarchical listless block (3D-HLCK) coding, exhibits better compression performance than 3D-SPIHT. Further, it is highly competitive with some of the state-of-the-art 3D wavelet coders over a wide range of bit rates for magnetic resonance, digital imaging and communications in medicine (DICOM), and angiogram images. 3D-HLCK provides rate and resolution scalability similar to those of 3D-SPIHT and 3D-SPECK. In addition, a significant memory reduction is achieved owing to the listless nature of 3D-HLCK.
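
    A rough illustration of the hybrid transform idea (a 2-D wavelet in the spatial plane, a Karhunen-Loève transform along the slice direction); the wavelet choice, decomposition level, and variable names below are assumptions for illustration only, not the record's code.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
volume = rng.random((16, 64, 64))   # stand-in (slices, rows, cols) medical volume

# Spatial step: single-level 2-D DWT applied slice by slice (approximation band only, for brevity)
approx = np.stack([pywt.dwt2(s, "haar")[0] for s in volume])   # shape (16, 32, 32)

# Spectral step: Karhunen-Loève transform (PCA) along the slice axis
flat = approx.reshape(approx.shape[0], -1)              # slices x spatial coefficients
flat_c = flat - flat.mean(axis=0, keepdims=True)
cov = flat_c @ flat_c.T / flat_c.shape[1]               # inter-slice covariance
eigvals, eigvecs = np.linalg.eigh(cov)
klt_coeffs = eigvecs.T @ flat_c                         # decorrelated along the slice dimension

print(klt_coeffs.shape)   # (16, 1024): ready to be mapped to 1-D and bit-plane coded
```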

  7. Supervised recursive segmentation of volumetric CT images for 3D reconstruction of lung and vessel tree.

    PubMed

    Li, Xuanping; Wang, Xue; Dai, Yixiang; Zhang, Pengbo

    2015-12-01

    Three-dimensional reconstruction of the lung and vessel tree is of great significance for 3D observation and quantitative analysis of lung diseases. This paper presents non-sheltered 3D models of the lung and vessel tree based on a supervised semi-3D lung tissue segmentation method. A recursive strategy based on a geometric active contour is proposed, instead of the "coarse-to-fine" framework in the existing literature, to extract lung tissues from the volumetric CT slices. In this model, the segmentation of the current slice is supervised by the result of the previous slice, exploiting the slight changes of lung tissues between adjacent slices. Through this mechanism, lung tissues in all slices are segmented quickly and accurately. The serious problems of left and right lung fusion, caused by partial volume effects, and of segmenting pleural nodules can be settled during the semi-3D process. The proposed scheme is evaluated on fifteen scans, from eight healthy participants and seven participants suffering from early-stage lung tumors. The results validate the good performance of the proposed method compared with the "coarse-to-fine" framework. The segmented datasets are used to reconstruct the non-sheltered 3D models of the lung and vessel tree.
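
    To make the "segmentation of the current slice supervised by the previous slice" idea concrete, here is a deliberately simplified Python sketch: instead of the paper's geometric active contour it uses a crude threshold restricted to a band dilated from the previous mask, purely to illustrate the propagation mechanism (thresholds, band width, and names are assumptions).

```python
import numpy as np
from scipy import ndimage

def segment_slice(slice_img, prev_mask, band=5, threshold=-400):
    """Crude stand-in for a geometric active contour: threshold the slice,
    but only inside a band dilated from the previous slice's result."""
    search_region = ndimage.binary_dilation(prev_mask, iterations=band)
    return (slice_img < threshold) & search_region   # lung parenchyma is low-HU

def recursive_lung_segmentation(ct_volume, seed_mask):
    masks = [seed_mask]                               # user-supervised first slice
    for slice_img in ct_volume[1:]:
        masks.append(segment_slice(slice_img, masks[-1]))
    return np.stack(masks)

ct = np.random.default_rng(2).integers(-1000, 200, size=(20, 128, 128))  # fake HU volume
seed = np.zeros((128, 128), bool)
seed[40:90, 30:100] = True
print(recursive_lung_segmentation(ct, seed).sum(), "voxels labelled as lung")
```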

  8. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment

    NASA Astrophysics Data System (ADS)

    Casteel, Curtis H., Jr.; Gorham, LeRoy A.; Minardi, Michael J.; Scarborough, Steven M.; Naidu, Kiranmai D.; Majumder, Uttam K.

    2007-04-01

    This paper describes a challenge problem whose scope is the 2D/3D imaging of stationary targets from a volumetric data set of X-band synthetic aperture radar (SAR) data collected in an urban environment. The data for this problem were collected at a scene consisting of numerous civilian vehicles and calibration targets. The radar operated in circular SAR mode and completed 8 circular flight paths around the scene at varying altitudes. The data consist of phase history data, auxiliary data, processing algorithms, processed images, and ground truth data. Interest is focused on mitigating the large side lobes in the point spread function. Due to the sparse nature of the elevation aperture, traditional imaging techniques introduce excessive artifacts in the processed images. Further interests include the formation of high-resolution 3D SAR images with single-pass data and feature extraction for 3D SAR automatic target recognition applications. The purpose of releasing the Gotcha Volumetric SAR Data Set is to provide the community with X-band SAR data that supports the development of new algorithms for high-resolution 2D/3D imaging.

  9. Constrained reverse diffusion for thick slice interpolation of 3D volumetric MRI images.

    PubMed

    Neubert, Aleš; Salvado, Olivier; Acosta, Oscar; Bourgeat, Pierrick; Fripp, Jurgen

    2012-03-01

    Due to physical limitations inherent in magnetic resonance imaging scanners, three-dimensional volumetric scans are often acquired with anisotropic voxel resolution. We investigate several interpolation approaches to reduce the anisotropy and present a novel approach, constrained reverse diffusion, for thick slice interpolation. This technique was compared to common methods: linear and cubic B-spline interpolation and a technique based on non-rigid registration of neighboring slices. The methods were evaluated on artificial MR phantoms and real MR scans of the human brain. The constrained reverse diffusion approach delivered promising results and provides an alternative for thick slice interpolation, especially for higher anisotropy factors.

  10. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. This paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation, which must cope with some challenging characteristics, such as small size, a wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections of the hippocampus (the head, body and tail). The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate a significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation results for the hippocampus.
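
    A minimal sketch of how multiple initialisations might be combined into one signed initial level-set function for a 3-D solver, assuming spherical seed regions at user-supplied head, body and tail locations (shapes, radii and coordinates are illustrative only).

```python
import numpy as np

def multi_seed_level_set(shape, seeds, radius=4):
    """Union of small spheres at the seed points, returned as a signed function
    (negative inside, positive outside) suitable for initialising a 3-D level set."""
    zz, yy, xx = np.indices(shape)
    inside = np.zeros(shape, bool)
    for cz, cy, cx in seeds:
        inside |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.where(inside, -1.0, 1.0)

# Hypothetical seed points in the head, body and tail of the hippocampus
phi0 = multi_seed_level_set((48, 128, 128), seeds=[(15, 60, 50), (24, 64, 56), (33, 70, 62)])
print((phi0 < 0).sum(), "voxels inside the initial surface")
```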

  11. Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Alam, Naved; Khandelwal, Niranjan

    2013-02-01

    In this paper a differential geometry based method is proposed for calculating the surface spiculation of solitary pulmonary nodules (SPN) in 3D from lung CT images. Spiculation present in an SPN is an important shape feature to assist the radiologist in assessing malignancy. The performance of a computer-aided diagnostic (CAD) system depends on accurate estimation of features like spiculation. In the proposed method, the peaks of the spicules are identified using the Gaussian and mean curvature calculated at each surface point of the segmented SPN. Once the peak point of a particular spicule is identified, the nearest valley points for that peak point are determined. The area of cross-section of the best-fitted plane passing through the valley points is the base of that spicule. The solid angle subtended by the base of the spicule at the peak point and the distance of the peak point from the nodule base are taken as the measures of spiculation. The spiculation index (SI) for a particular SPN is a weighted combination over all the spicules present in that SPN. The proposed method is validated on 95 SPN from the Image Database Resource Initiative (IDRI) public database. It achieved 87.4% accuracy in calculating the quantified spiculation index compared to the spiculation index provided by radiologists in the IDRI database.
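
    A small sketch of the curvature-based peak-candidate step, assuming principal curvatures have already been estimated at each vertex of the segmented nodule surface; the sign convention and threshold are illustrative assumptions, not the authors' values.

```python
import numpy as np

def spicule_peak_candidates(k1, k2, k_thresh=0.05):
    """Flag vertices that look like spicule tips: elliptic (Gaussian curvature > 0),
    outward-bulging (positive mean curvature under the convention assumed here)."""
    K = k1 * k2                # Gaussian curvature
    H = 0.5 * (k1 + k2)        # mean curvature
    return (K > k_thresh) & (H > 0)

# Solid angle of a cone with half-angle theta, as one way to express
# "the solid angle subtended by the spicule base at the peak point":
solid_angle = lambda theta: 2 * np.pi * (1 - np.cos(theta))

rng = np.random.default_rng(3)
k1, k2 = rng.normal(0, 0.1, 5000), rng.normal(0, 0.1, 5000)   # stand-in curvature estimates
print(spicule_peak_candidates(k1, k2).sum(), "candidate peak vertices")
print(solid_angle(np.pi / 6), "steradians for a 30-degree half-angle")
```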

  12. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  13. Volumetric label-free imaging and 3D reconstruction of mammalian cochlea based on two-photon excitation fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Xianzeng; Geng, Yang; Ye, Qing; Zhan, Zhenlin; Xie, Shusen

    2013-11-01

    The visualization of the delicate structure and spatial relationships of intracochlear sensory cells has relied on the laborious procedures of tissue excision, fixation, sectioning and staining for light and electron microscopy. Confocal microscopy is advantageous for its high resolution and deep penetration depth, yet disadvantageous due to the necessity of exogenous labeling. In this study, we present volumetric imaging of the rat cochlea without exogenous dyes, using a near-infrared femtosecond laser as the excitation source and endogenous two-photon excitation fluorescence (TPEF) as the contrast mechanism. We find that TPEF exhibits strong contrast, allowing cellular and even subcellular resolution imaging of the cochlea, differentiating cell types, and visualizing delicate structures and the radial nerve fibers. Our results further demonstrate that 3D reconstruction rendered from z-stacks of optical sections reveals fine structures and spatial relationships more clearly and facilitates morphometric analysis. The TPEF-based optical biopsy technique has great potential to provide new and sensitive diagnostic tools for hearing loss and hearing disorders, especially when combined with fiber-based microendoscopy.

  14. Protocol for volumetric segmentation of medial temporal structures using high-resolution 3-D magnetic resonance imaging.

    PubMed

    Bonilha, Leonardo; Kobayashi, Eliane; Cendes, Fernando; Min Li, Li

    2004-06-01

    Quantitative analysis of brain structures in normal subjects and in different neurological conditions can be carried out in vivo through magnetic resonance imaging (MRI) volumetric studies. The use of high-resolution MRI combined with image post-processing that allows a simultaneous multiplanar view may facilitate volumetric segmentation of temporal lobe structures. We define a protocol for volumetric studies of medial temporal lobe structures using high-resolution MR images, and we studied 30 healthy subjects (19 women; mean age, 33 years; age range, 21-55 years). Images underwent field non-homogeneity correction and linear stereotaxic transformation into a standard space. Structures of interest comprised the temporopolar, entorhinal, perirhinal and parahippocampal cortices, the hippocampus, and the amygdala. Segmentation was carried out with multiplanar assessment. There was no statistically significant left/right asymmetry for any structure analyzed. Neither gender nor age influenced the volumes obtained. The coefficient of repeatability showed no significant difference between intra- and interobserver measurements. Image post-processing and simultaneous multiplanar viewing of high-resolution MRI facilitate volumetric assessment of the medial portion of the temporal lobe with strict adherence to anatomic landmarks. This protocol shows no significant inter- and intraobserver variations and thus is reliable for longitudinal studies.

  15. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° viewing range without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital micromirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  16. A novel skeleton based quantification and 3-D volumetric visualization of left atrium fibrosis using late gadolinium enhancement magnetic resonance imaging.

    PubMed

    Ravanelli, Daniele; dal Piaz, Elena Costanza; Centonze, Maurizio; Casagranda, Giulia; Marini, Massimiliano; Del Greco, Maurizio; Karim, Rashed; Rhode, Kawal; Valentini, Aldo

    2014-02-01

    This work presents the results of a new tool for 3-D segmentation, quantification and visualization of cardiac left atrium fibrosis, based on late gadolinium enhancement magnetic resonance imaging (LGE-MRI), for stratifying patients with atrial fibrillation (AF) who are candidates for radio-frequency catheter ablation. In this study 10 consecutive patients suffering from AF with different grades of atrial fibrosis were considered. LGE-MRI and magnetic resonance angiography (MRA) images were used to detect and quantify fibrosis of the left atrium using a threshold and 2-D skeleton based approach. Quantification and 3-D volumetric views of atrial fibrosis were compared with quantification and 3-D bipolar voltage maps measured with an electro-anatomical mapping (EAM) system, the clinical reference standard technique for atrial substrate characterization. Segmentation and quantification of fibrosis areas proved to be clinically reliable across all fibrosis stages. The proposed tool shows discrepancies in fibrosis quantification of less than 4% with respect to EAM results and yields accurate 3-D volumetric views of left atrium fibrosis. The novel 3-D visualization and quantification tool based on LGE-MRI allows detection of cardiac left atrium fibrosis areas. This noninvasive method provides a clinical alternative to EAM systems for quantification and localization of atrial fibrosis.

  17. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly automated technique for medical image segmentation, where a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed primarily to handle the complex object topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work so that multiple sets of points are allowed to exist simultaneously. Point sets can now be automatically merged and split to accommodate the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by extending the 'turtle algorithm' presented earlier with a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task-time reduction of 84.7% was achieved compared to a user performing 2D Livewire segmentation on every slice.

  18. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  19. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    PubMed

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a delicate task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection requires accurate estimation and comparison of tumor volume and dimensions in pre- and post-operative magnetic resonance images (MRI). An active contour segmentation technique is used to segment brain tumors in pre-operative MR images using self-developed software. Tumor volume is computed from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of the tumor volume estimates. The accuracy of the method is validated by comparing the estimated volume from the proposed method with the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of the tumor tissue and its surroundings. The results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of tumors.
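
    A compact sketch of one common way to compute a 3-D alpha-shape volume from contour points (Delaunay tetrahedra filtered by circumsphere radius); the alpha value and the random point cloud are placeholders, not the record's data or software.

```python
import numpy as np
from scipy.spatial import Delaunay

def tet_volume(p):
    return abs(np.linalg.det(p[1:] - p[0])) / 6.0

def tet_circumradius(p):
    # Solve 2*(p_i - p_0) . c = |p_i|^2 - |p_0|^2 for the circumcentre c
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    c = np.linalg.solve(A, b)
    return np.linalg.norm(c - p[0])

def alpha_shape_volume(points, alpha):
    """Volume of the 3-D alpha shape: keep Delaunay tetrahedra whose
    circumsphere radius is below 1/alpha and sum their volumes."""
    vol = 0.0
    for simplex in Delaunay(points).simplices:
        p = points[simplex]
        if tet_circumradius(p) < 1.0 / alpha:
            vol += tet_volume(p)
    return vol

pts = np.random.default_rng(4).random((500, 3)) * 20.0   # stand-in tumour contour points (mm)
print(alpha_shape_volume(pts, alpha=0.2), "mm^3")
```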

  20. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer-aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similarly large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which there is access to data too complex to be understood with the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  1. MO-DE-210-06: Development of a Supercompounded 3D Volumetric Ultrasound Image Guidance System for Prone Accelerated Partial Breast Irradiation (APBI)

    SciTech Connect

    Chiu, T; Hrycushko, B; Zhao, B; Jiang, S; Gu, X

    2015-06-15

    Purpose: For early-stage breast cancer, accelerated partial breast irradiation (APBI) is a cost-effective breast-conserving treatment. Irradiation in a prone position can mitigate respiration-induced breast movement and achieve maximal sparing of heart and lung tissue. However, accurate dose delivery is challenging due to breast deformation and lumpectomy cavity shrinkage. We propose a 3D volumetric ultrasound (US) image guidance system for accurate prone APBI. Methods: The designed system, set beneath the prone breast board, consists of a water container, a US scanner, and a two-layer breast immobilization cup. The outer layer of the breast cup forms the inner wall of the water container, while the inner layer is attached directly to the patient's breast for immobilization. The US transducer is attached to the outer layer of the breast cup at the dent of the water container. Rotational US scans in a transverse plane are achieved by simultaneously rotating the water container and transducer, and multiple transverse scans form a 3D scan. A supercompounding-technique-based volumetric US reconstruction algorithm is developed for 3D image reconstruction. The performance of the designed system is evaluated with two custom-made gelatin phantoms containing several cylindrical inserts filled with water (11% reflection coefficient between materials). One phantom is designed for positioning evaluation while the other is for scaling assessment. Results: In the positioning evaluation phantom, the central distances between the inserts are 15, 20, 30 and 40 mm. The distances on the reconstructed images differ by −0.19, −0.65, −0.11 and −1.67 mm, respectively. In the scaling evaluation phantom, the inserts are 12.7, 19.05, 25.40 and 31.75 mm in diameter. The measured insert sizes on the images differ by 0.23, 0.19, −0.1 and 0.22 mm, respectively. Conclusion: The phantom evaluation results show that the developed 3D volumetric US system can accurately localize target position and determine

  2. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. Current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends two-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-band radar data.

  3. 3D volumetric radar using 94-GHz millimeter waves

    NASA Astrophysics Data System (ADS)

    Takács, Barnabás

    2006-05-01

    This article describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter-wave scanning radar. The MMW radar system employs a spinning antenna to generate a fan-shaped scanning pattern over the entire scene. The beams formed this way provide all-weather 3D distance measurements (range/azimuth display) of objects as they appear on the ground. The beam width of the antenna and its side lobes are optimized to produce the best possible resolution even at distances of up to 15 km. To create a full 3D data set, the fan pattern is tilted up and down with the help of a controlled stepper motor. For our experiments we collected data at 0.1-degree increments while using both bistatic and monostatic antennas in our arrangement. The data collected formed a stack of range-azimuth images in the shape of a cone. This information is displayed using our high-end 3D visualization engine, capable of displaying high-resolution volumetric models at 30 frames per second. The resulting 3D scenes can then be viewed from any angle and subsequently processed to integrate, fuse or match them against real-life sensor imagery or 3D model data stored in a synthetic database.

  4. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal imaging device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion expressed as displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics of the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
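
    A minimal Python sketch of the PCA decomposition of DVFs and the reconstruction of a new DVF from updated eigen-coefficients, as described above; the array sizes, number of retained modes, and hand-set coefficients are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_phases, n_voxels = 10, 40 * 40 * 40
# Stand-in DVFs: one displacement vector field per 4DCT phase, flattened to (phases, 3*voxels)
dvfs = rng.normal(size=(n_phases, 3 * n_voxels))

# Build the PCA motion model from the mean-subtracted DVFs
mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
eigenvectors = Vt[:3]                                  # keep the first 3 spatial modes
train_coeffs = (dvfs - mean_dvf) @ eigenvectors.T      # temporal eigen-coefficients

# At treatment time, optimisation against the EPID projection would update the
# eigen-coefficients; here they are simply set by hand for illustration.
new_coeffs = np.array([1.2, -0.4, 0.1])
new_dvf = mean_dvf + new_coeffs @ eigenvectors          # DVF that warps the reference CT
print(new_dvf.reshape(3, 40, 40, 40).shape)
```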

  5. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images from a single laser pulse. With this technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background; that is, the objects are sparse. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
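
    A toy Python sketch of both ideas mentioned above: counting significant singular values of a small imaging operator, and comparing a minimum-norm least-squares solution (a crude stand-in for algebraic reconstruction) with an l1-regularised solution obtained by ISTA on a sparse scene. Matrix sizes, regularisation weight, and iteration count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(15, 200))                         # toy imaging operator: 15 detectors, 200 voxels
x_true = np.zeros(200)
x_true[[12, 90, 150]] = 1.0                            # sparse scene
b = A @ x_true

# Singular value analysis of the imaging operator
s = np.linalg.svd(A, compute_uv=False)
print("significant singular values:", (s > 1e-8 * s[0]).sum())

# Minimum-norm least-squares solution (stand-in for algebraic reconstruction)
x_ls = np.linalg.pinv(A) @ b

# l1-regularised solution via ISTA, which favours sparse reconstructions
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of the gradient
x = np.zeros(200)
for _ in range(3000):
    g = x - A.T @ (A @ x - b) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print("reconstruction error, least-squares vs l1:",
      np.abs(x_ls - x_true).sum(), np.abs(x - x_true).sum())
```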

  6. Multi-sensor 3D volumetric reconstruction using CUDA

    NASA Astrophysics Data System (ADS)

    Aliakbarpour, Hadi; Almeida, Luis; Menezes, Paulo; Dias, Jorge

    2011-12-01

    This paper presents a full-body volumetric reconstruction of a person in a scene using a sensor network in which some of the sensors can be mobile. The sensor network is comprised of camera-inertial sensor (IS) pairs. Taking advantage of the IS, the 3D reconstruction is performed without a planar-ground assumption. Moreover, the IS in each pair is used to define a virtual camera whose image plane is horizontal and aligned with the earth's cardinal directions. The IS is furthermore used to define a set of inertial planes in the scene. The image plane of each virtual camera is projected onto this set of parallel, horizontal inertial planes using adapted homography functions. A parallel processing architecture is proposed in order to perform human volumetric reconstruction in real time. The real-time characteristic is obtained by implementing the reconstruction algorithm on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). In order to show the effectiveness of the proposed algorithm, a variety of gestures of a person acting in the scene are reconstructed and demonstrated. Analyses have been carried out to measure the performance of the algorithm in terms of processing time. The proposed framework has potential to be used in applications such as smart rooms, human behavior analysis and 3D teleconferencing.

  7. Slow Growing Volumetric Subdivision for 3D Volumetric Data

    SciTech Connect

    Pascucci, V; Kahn, S; Kelley, R; Kilbourne, C; Porter, F; Wargelin, B

    2004-12-16

    In recent years subdivision methods have been successfully applied to the multi-resolution representation and compression of surface meshes. Unfortunately their use in the volumetric case has remained impractical because of the use of tensor-product generalizations, which induce an excessive growth of the mesh size before a sufficient number of refinements is performed. This technical sketch presents a new subdivision technique that refines volumetric (and higher-dimensional) meshes at the same rate as surface meshes. The scheme builds adaptive refinements of a mesh without using special decompositions of the cells connecting different levels of resolution. Lower-dimensional "sharp" features are also handled directly in a natural way. The averaging rules reproduce the same smoothness as the two best-known previous tensor-product refinement methods.

  8. Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.

    PubMed

    Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio

    2014-07-05

    A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines, combined with a volumetric 3D fast Fourier transform (3D-FFT), was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limit on the degree of parallelization because of the limitations of the slab-type 3D-FFT. The volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048³ grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability running on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems.

  9. Fast Volumetric Spatial-Spectral MR Imaging of Hyperpolarized 13C-Labeled Compounds using Multiple Echo 3D bSSFP

    PubMed Central

    Perman, William H.; Bhattacharya, Pratip; Leupold, Jochen; Lin, Alexander P.; Harris, Kent C.; Norton, Valerie A.; Hovener, Jan B.; Ross, Brian D.

    2010-01-01

    PURPOSE: The goal of this work was to develop a fast 3D chemical shift imaging technique for the non-invasive measurement of hyperpolarized 13C-labeled substrates and metabolic products at low concentration. MATERIALS AND METHODS: Multiple echo 3D balanced steady-state MR imaging (ME-3DbSSFP) was performed in vitro on a syringe containing hyperpolarized [1,3,3-2H3; 1-13C]2-hydroxyethylpropionate (HEP) adjacent to a 13C-enriched acetate phantom, and in vivo on a rat before and after IV injection of hyperpolarized HEP at 1.5 T. Chemical shift images of the hyperpolarized HEP were derived from the multiple echo data by Fourier transformation along the echoes on a voxel-by-voxel basis for each slice of the 3D data set. RESULTS: ME-3DbSSFP imaging provided chemical shift images of hyperpolarized HEP in vitro and in vivo in a rat, with isotropic 7 mm spatial resolution, 93 Hz spectral resolution and 16 s temporal resolution over a period greater than 45 seconds. CONCLUSION: Multiple echo 3D bSSFP imaging can provide chemical shift images of hyperpolarized 13C-labeled compounds in vivo with relatively high spatial resolution and moderate spectral resolution. The increased signal-to-noise ratio (SNR) of this 3D technique will enable the detection of hyperpolarized 13C-labeled metabolites at lower concentrations compared with a 2D technique. PMID:20171034
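
    A short sketch of the voxel-wise Fourier transform along the echo dimension that converts multi-echo data into chemical-shift images, as described above; the array shape and echo spacing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in multi-echo data: (echoes, slices, rows, cols), complex-valued
echo_data = rng.normal(size=(8, 16, 64, 64)) + 1j * rng.normal(size=(8, 16, 64, 64))
echo_spacing = 3.4e-3   # seconds between echoes (illustrative value)

# Voxel-by-voxel Fourier transform along the echo axis gives one volumetric
# image per resolved chemical-shift frequency bin
csi = np.fft.fftshift(np.fft.fft(echo_data, axis=0), axes=0)
freqs = np.fft.fftshift(np.fft.fftfreq(echo_data.shape[0], d=echo_spacing))

print(csi.shape)   # (8, 16, 64, 64): 8 spectral bins of volumetric images
print(freqs)       # spectral bin centres in Hz (total bandwidth = 1 / echo_spacing)
```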

  10. Comparison between 3D volumetric rendering and multiplanar slices on the reliability of linear measurements on CBCT images: an in vitro study

    PubMed Central

    FERNANDES, Thais Maria Freire; ADAMCZYK, Julie; POLETI, Marcelo Lupion; HENRIQUES, José Fernando Castanha; FRIEDLAND, Bernard; GARIB, Daniela Gamba

    2015-01-01

    Objective: The purpose of this study was to determine the accuracy and reliability of two methods of measuring linear distances (multiplanar 2D and three-dimensional 3D rendering) obtained from cone-beam computed tomography (CBCT) with different voxel sizes. Material and Methods: Ten dry human mandibles were scanned at voxel sizes of 0.2 and 0.4 mm. Craniometric anatomical landmarks were identified twice by two independent operators on the multiplanar reconstructed and volume-rendered images generated by the Dolphin® software. Subsequently, physical measurements were performed using a digital caliper. Analysis of variance (ANOVA), the intraclass correlation coefficient (ICC) and Bland-Altman analysis were used to evaluate accuracy and reliability (p<0.05). Results: Excellent intraobserver reliability and good to high interobserver reliability were found for linear measurements from CBCT 3D and multiplanar images. Measurements performed on multiplanar reconstructed images were more accurate than measurements on volume renderings when compared with the gold standard. No statistically significant difference was found between voxel protocols, independently of the measurement method. Conclusions: Linear measurements on multiplanar images at 0.2 and 0.4 mm voxel size are reliable and accurate when compared with direct caliper measurements. Caution should be taken with volume rendering measurements, because they were reliable but not accurate for all variables. An increased voxel resolution did not result in greater accuracy of mandible measurements and would potentially increase patient radiation exposure. PMID:25004053

  11. Data acquirement and remodeling on volumetric 3D emissive display system

    NASA Astrophysics Data System (ADS)

    Yao, Yi; Liu, Xu; Lin, Yuanfang; Zhang, Huangzhu; Zhang, Xiaojie; Liu, Xiangdong

    2005-01-01

    Since present display technology projects 3D onto 2D, viewers' eyes are deceived by the loss of spatial data, so developing a real 3D display device is a revolution for human vision. The monitor is based on an emissive panel with a 64×256 LED array. When rotated at a frequency of 10 Hz, it shows real 3D images with pixels at their exact positions. The article presents a procedure by which the software processes a 3D object and converts it to volumetric 3D formatted data for this system. For simulating the phenomenon on a PC, it also presents a program that remodels the object based on OpenGL. An algorithm for faster processing and optimized rendering speed is also given. The monitor provides real 3D scenes with a free viewing angle. It can be expected that this revolution will have a strong impact on modern monitors and lead to a new world of display technology.
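
    One way the conversion from a 3D object to volumetric data for a rotating-panel display could look; the angular resolution, panel geometry (a radius-spanning column of LEDs) and all names below are assumptions for illustration, not the system's actual format.

```python
import numpy as np

def cartesian_to_panel(points, n_angles=512, n_radial=256, n_height=64, r_max=1.0, z_max=1.0):
    """Map Cartesian voxel coordinates (x, y, z), centred on the rotation axis,
    to (rotation step, radial LED column, height LED row) addresses on the panel."""
    x, y, z = points.T
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)
    angle_idx = np.minimum((theta / (2 * np.pi) * n_angles).astype(int), n_angles - 1)
    radius_idx = np.minimum((np.hypot(x, y) / r_max * n_radial).astype(int), n_radial - 1)
    height_idx = np.minimum((z / z_max * n_height).astype(int), n_height - 1)
    return np.stack([angle_idx, radius_idx, height_idx], axis=1)

# Hypothetical object points inside the display cylinder (x, y in [-0.4, 0.4], z in [0, 1))
pts = np.random.default_rng(8).random((1000, 3)) * [0.8, 0.8, 1.0] - [0.4, 0.4, 0.0]
print(cartesian_to_panel(pts)[:3])
```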

  12. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    The increasing transmission of medical data across multiple user systems raises concerns about medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to magnetic resonance (MR) and computed tomography (CT) medical images show that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.

  13. 3-D Volumetric Evaluation of Human Mandibular Growth

    PubMed Central

    Reynolds, Mathew; Reynolds, Michael; Adeeb, Samer; El-Bialy, Tarek

    2011-01-01

    Bone growth is a complex process that is controlled by a multitude of mechanisms that are not fully understood. Most of the current methods employed to measure the growth of bones focus on either studying cadaveric bones from different individuals of different ages, or successive two-dimensional (2D) radiographs. Both techniques have their known limitations. The purpose of this study was to explore a technique for quantifying the three-dimensional (3D) growth of an adolescent human mandible over the period of one year utilizing cone beam computed tomography (CBCT) scans taken for regular orthodontic records. Three-dimensional virtual models were created from the CBCT data using mainstream medical imaging software. A comparison between computer-generated surface meshes of successive 3D virtual models illustrates the magnitude of relative mandible growth. The results of this work are in agreement with previously reported data from human cadaveric studies and implantable marker studies. The presented method provides a new, relatively simple basis (utilizing commercially available software) to visualize and evaluate individualized 3D mandibular growth in vivo. PMID:22046201

  14. 3-d volumetric evaluation of human mandibular growth.

    PubMed

    Reynolds, Mathew; Reynolds, Michael; Adeeb, Samer; El-Bialy, Tarek

    2011-01-01

    Bone growth is a complex process that is controlled by a multitude of mechanisms that are not fully understood. Most of the current methods employed to measure the growth of bones focus on either studying cadaveric bones from different individuals of different ages, or successive two-dimensional (2D) radiographs. Both techniques have their known limitations. The purpose of this study was to explore a technique for quantifying the three-dimensional (3D) growth of an adolescent human mandible over the period of one year utilizing cone beam computed tomography (CBCT) scans taken for regular orthodontic records. Three-dimensional virtual models were created from the CBCT data using mainstream medical imaging software. A comparison between computer-generated surface meshes of successive 3D virtual models illustrates the magnitude of relative mandible growth. The results of this work are in agreement with previously reported data from human cadaveric studies and implantable marker studies. The presented method provides a new, relatively simple basis (utilizing commercially available software) to visualize and evaluate individualized 3D mandibular growth in vivo.

  15. Volumetric 3D display with multi-layered active screens for enhanced depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various depth cues, such as accommodation, binocular parallax, convergence and motion parallax, are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues; however, this causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all of the depth-perception factors, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays represent images using voxels occupying a physical volume; however, large amounts of data are required to represent the depth information on the voxels. In order to encode 3D information simply, a compact type of depth-fused 3D (DFD) display is introduced, which can create a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is demonstrated using PDDM images controlled by a polarization controller. To produce the PDDM image, the polarization states of the light passing through a spatial light modulator (SLM) were analyzed using the Stokes parameters as a function of gray level. Based on the analysis, a polarization controller was designed to convert the PDDM image into sectioned depth images. After synchronizing the PDDM images with the active screens, a reconstructed 3D image can be realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea

  16. Application of a 3D volumetric display for radiation therapy treatment planning I: quality assurance procedures.

    PubMed

    Gong, Xing; Kirk, Michael Collins; Napoli, Josh; Stutsman, Sandy; Zusag, Tom; Khelashvili, Gocha; Chu, James

    2009-07-17

    To design and implement a set of quality assurance tests for an innovative 3D volumetric display for radiation treatment planning applications. A genuine 3D display (Perspecta Spatial 3D, Actuality-Systems Inc., Bedford, MA) has been integrated with the Pinnacle treatment planning system (TPS; Philips Medical Systems, Madison, WI) for treatment planning. The Perspecta 3D display renders a 25-cm-diameter volume, viewable from any side, floating within a translucent dome. In addition to displaying all 3D data exported from Pinnacle, the system provides a 3D mouse to define beam angles and apertures and to measure distances. The focus of this work is the design and implementation of a quality assurance program for 3D displays and for specific 3D planning issues, as guided by AAPM Task Group Report 53. A series of acceptance and quality assurance tests has been designed to evaluate the accuracy of CT images, contours, beams, and dose distributions as displayed on Perspecta. Three-dimensional matrices, rulers and phantoms with known spatial dimensions were used to check Perspecta's absolute spatial accuracy. In addition, a system of tests was designed to confirm Perspecta's ability to import and display Pinnacle data consistently. CT scans of phantoms were used to confirm beam field size, divergence, and gantry and couch angular accuracy as displayed on Perspecta. Beam angles were verified through Cartesian coordinate measurements and by CT scans of phantoms rotated at known angles. Beams designed on Perspecta were exported to Pinnacle and checked for accuracy. Doses at sampled points were checked for consistency with Pinnacle and agreed within 1% or 1 mm. All data exported from Pinnacle to Perspecta were displayed consistently. The 3D spatial display of images, contours, and dose distributions was consistent with the Pinnacle display. When measured by the 3D ruler, the distances between any two points calculated using Perspecta agreed with Pinnacle within the measurement error.

  17. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load-bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer-assisted design (CAD) software can use these data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capture, custom hardware, and control and image processing software to generate two types of image data: volumetric and planar. Both volumetric and planar images reveal the definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data were acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (computed tomography) scan was performed on each amputee for comparison. Results of the test indicate the beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit compared to manually fabricated prostheses.

  18. Key factors in the design of a LED volumetric 3D display system

    NASA Astrophysics Data System (ADS)

    Lin, Yuanfang; Liu, Xu; Yao, Yi; Zhang, Xiaojie; Liu, Xiangdong; Lin, Fengchun

    2005-01-01

    Through careful consideration of key factors that impact upon voxel attributes and image quality, a volumetric three-dimensional (3D) display system employing the rotation of a two-dimensional (2D) thin active panel was developed. It was designed as a lower-cost 3D visualization platform for experimentation and demonstration. Light emitting diodes (LEDs) were arranged into a 256×64 dot matrix on a single surface of the panel, which was positioned symmetrically about the axis of rotation. The motor and necessary supporting structures were located below the panel. LEDs with a 500 ns response time, 1.6 mm×0.8 mm×0.6 mm external dimensions, and 0.38 mm×0.43 mm horizontal and vertical spacing were adopted. The system is functional, providing 512×256×64, i.e. over 8 million, addressable voxels within a 292 mm×165 mm cylindrical volume at a refresh frequency in excess of 16 Hz. Due to persistence of vision, momentarily addressed voxels are perceived and fused into a 3D image. Many static or dynamic 3D scenes were displayed, which can be directly viewed from any position with few occlusion zones and dead zones. Important depth cues like binocular disparity and motion parallax are satisfied naturally.
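
    As a quick sanity check on the figures quoted above, the voxel count and per-slot update time can be worked out directly; the assumption that the 512 azimuthal voxel positions are swept once per refresh is ours, not a stated design detail.

```python
# Back-of-envelope check of the quoted figures.
# Assumption: the 512 azimuthal voxel positions are swept once per refresh.
refresh_hz = 16
azimuthal_positions = 512
panel_leds = 256 * 64                                     # LEDs on the rotating panel

voxels = azimuthal_positions * panel_leds                 # 8,388,608 addressable voxels
slot_time_us = 1e6 / (refresh_hz * azimuthal_positions)   # ~122 microseconds per angular slot
print(voxels, round(slot_time_us, 1))                     # the 500 ns LED response is far shorter than one slot
```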

  19. Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

    NASA Astrophysics Data System (ADS)

    Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.

    2014-12-01

    Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

  20. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Zheng

    2012-07-01

    A true 3D image is a geo-referenced image. Besides having its radiometric information, it also has true 3D ground coordinates (X, Y, Z) for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will be able to read not only a building's location (X, Y) but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on the ways geospatial information is represented, true 3D ground modeling is performed, and real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make to geospatial information fields. At the end, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  1. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    There has been significant progress made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future in clinical use. With effective flow suppression techniques, choices of different contrast weighting acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging plane and view angle analysis, large coverage, multi-vascular-bed capability, and can even be used for fast screening. PMID:26610656

  2. Average Cross-Sectional Area of DebriSat Fragments Using Volumetrically Constructed 3D Representations

    NASA Technical Reports Server (NTRS)

    Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.

    2016-01-01

    Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for
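
    As a rough, hedged illustration of the idea behind Approach A (this is not the DebriSat pipeline; the unit-cube test body, the direction count, and the use of a convex-hull silhouette are all assumptions), the sketch below averages projected areas of a point cloud over sampled view directions and, for a convex shape, recovers the stated surface-area/4 relation:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mean_projected_area(points, n_dirs=500, seed=0):
    """Average silhouette area over uniformly sampled view directions.

    Sketch only: the convex-hull silhouette is exact for convex bodies, where the
    result should approach surface_area / 4; it overestimates for the concave
    fragments that motivated the paper's alternate approaches.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    areas = []
    for d in dirs:
        u = np.cross(d, [1.0, 0.0, 0.0])          # build a basis of the projection plane
        if np.linalg.norm(u) < 1e-8:
            u = np.cross(d, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(d, u)
        proj = points @ np.column_stack([u, v])   # project all points onto that plane
        areas.append(ConvexHull(proj).volume)     # .volume of a 2D hull is its area
    return float(np.mean(areas))

# Unit cube: surface area 6, so the mean projected area should be close to 1.5.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
print(mean_projected_area(cube), ConvexHull(cube).area / 4.0)
```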

  3. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  4. Cardiac Chamber Volumetric Assessment Using 3D Ultrasound - A Review.

    PubMed

    Pedrosa, João; Barbosa, Daniel; Almeida, Nuno; Bernard, Olivier; Bosch, Johan; D'hooge, Jan

    2016-01-01

    When designing clinical trials for testing novel cardiovascular therapies, it is highly relevant to understand what a given technology can provide in terms of information on the physiologic status of the heart and vessels. Ultrasound imaging has traditionally been the modality of choice to study the cardiovascular system as it has an excellent temporal resolution; it operates in real-time; it is very widespread and - not unimportant - it is cheap. Although this modality is mostly known clinically as a two-dimensional technology, it has recently matured into a true three-dimensional imaging technique. In this review paper, an overview is given of the available ultrasound technology for cardiac chamber quantification in terms of volume and function and evidence is given why these parameters are of value when testing the effect of new cardiovascular therapies.

  5. The effect of volumetric (3D) tactile symbols within inclusive tactile maps.

    PubMed

    Gual, Jaume; Puyuelo, Marina; Lloveras, Joaquim

    2015-05-01

    Point, linear and areal elements, which are two-dimensional and of a graphic nature, are the morphological elements employed when designing tactile maps and symbols for visually impaired users. However, beyond the two-dimensional domain, there is a fourth group of elements - volumetric elements - which mapmakers do not take sufficiently into account when it comes to designing tactile maps and symbols. This study analyses the effect of including volumetric, or 3D, symbols within a tactile map. In order to do so, the researchers compared two tactile maps. One of them uses only two-dimensional elements and is produced using thermoforming, one of the most popular systems in this field, while the other includes volumetric symbols, thus highlighting the possibilities opened up by 3D printing, a new area of production. The results of the study show that including 3D symbols improves the efficiency and autonomous use of these products.

  6. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  7. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  8. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  9. PSF engineering in multifocus microscopy for increased depth volumetric imaging

    PubMed Central

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-01-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Applications in 3D single-molecule localization-based super-resolution imaging are shown over an axial depth of 4 µm, as well as for the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  10. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  11. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  12. Integral volumetric imaging using decentered elemental lenses.

    PubMed

    Sawada, Shimpei; Kakeya, Hideki

    2012-11-05

    This paper proposes a high-resolution integral imaging system using a lens array composed of non-uniform decentered elemental lenses. One of the problems of integral imaging is the trade-off relationship between the resolution and the number of views. When the number of views is small, motion parallax becomes strongly discrete in order to maintain the viewing angle. To overcome this trade-off, the proposed method uses elemental lenses whose size is smaller than that of the elemental images. To keep the images generated by the elemental lenses at a constant depth, the lens array is designed so that the optical centers of the elemental lenses are located at the centers of the elemental images, not at the centers of the elemental lenses. To compensate for optical distortion, a new image rendering algorithm is developed so that an undistorted 3D image can be presented with a non-uniform lens array. The proposed lens array design can be applied to integral volumetric imaging, where display panels are layered to show volumetric images in the scheme of integral imaging.

  13. Volumetric Echocardiographic Particle Image Velocimetry (V-Echo-PIV)

    NASA Astrophysics Data System (ADS)

    Falahatpisheh, Ahmad; Kheradvar, Arash

    2015-11-01

    Measurement of the 3D flow field inside the cardiac chambers has proven to be a challenging task. Current laser-based 3D PIV methods estimate the third component of the velocity rather than directly measuring it and also cannot be used to image the opaque heart chambers. Modern echocardiography systems are equipped with 3D probes that enable imaging the entire 3D opaque field. However, this feature has not yet been employed for 3D vector characterization of blood flow. For the first time, we introduce a method that generates a velocity vector field in 4D based on volumetric echocardiographic images. By assuming the conservation of brightness in 3D, blood speckles are tracked. A hierarchical 3D PIV method is used to account for large particle displacement. The discretized brightness transport equation is solved in a least-squares sense in interrogation windows of size 16³ voxels. We successfully validate the method in analytical and experimental cases. Volumetric echo data of a left ventricle are then processed in the systolic phase. The expected velocity fields were successfully predicted by V-Echo-PIV. In this work, we showed a method to image blood flow in 3D based on volumetric images of the human heart using no contrast agent.
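
    A minimal sketch of the kind of per-window estimate described above (a least-squares solve of the 3D brightness-constancy equation); the hierarchical scheme, window handling and echo-specific details of V-Echo-PIV are not reproduced, and the array-based interface is an assumption.

```python
import numpy as np

def window_velocity(win_t0, win_t1, dt=1.0):
    """Solve Ix*u + Iy*v + Iz*w = -It in the least-squares sense for one
    interrogation window (e.g. 16x16x16 voxels of two consecutive volumes)."""
    Ix, Iy, Iz = np.gradient(win_t0.astype(float))            # spatial brightness gradients
    It = (win_t1.astype(float) - win_t0.astype(float)) / dt   # temporal derivative
    A = np.column_stack([Ix.ravel(), Iy.ravel(), Iz.ravel()])
    b = -It.ravel()
    uvw, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uvw                                                # voxel displacement per frame interval
```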

  14. On the Uncertain Future of the Volumetric 3D Display Paradigm

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2017-06-01

    Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.

  15. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for the acquisition and subsequent forensic analysis of bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded into a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  16. Volumetric imaging of fish locomotion.

    PubMed

    Flammang, Brooke E; Lauder, George V; Troolin, Daniel R; Strand, Tyson E

    2011-10-23

    Fishes use multiple flexible fins in order to move and maintain stability in a complex fluid environment. We used a new approach, a volumetric velocimetry imaging system, to provide the first instantaneous three-dimensional views of wake structures as they are produced by freely swimming fishes. This new technology allowed us to demonstrate conclusively the linked ring vortex wake pattern that is produced by the symmetrical (homocercal) tail of fishes, and to visualize for the first time the three-dimensional vortex wake interaction between the dorsal and anal fins and the tail. We found that the dorsal and anal fin wakes were rapidly (within one tail beat) assimilated into the caudal fin vortex wake. These results show that volumetric imaging of biologically generated flow patterns can reveal new features of locomotor dynamics, and provides an avenue for future investigations of the diversity of fish swimming patterns and their hydrodynamic consequences.

  17. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
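
    For the stereo-vision route, depth recovery ultimately rests on the standard rectified-stereo triangulation relation; the sketch below is generic, with the focal length, baseline and disparity treated as assumed calibrated inputs rather than values from this work.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified two-camera triangulation: Z = f * B / d.

    focal_px    : focal length in pixels (assumed, from camera calibration)
    baseline_m  : separation of the two CCD cameras in metres (assumed)
    disparity_px: horizontal pixel shift of a feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 1200 px focal length, 0.10 m baseline, 8 px disparity -> 15 m depth.
print(depth_from_disparity(8.0, 1200.0, 0.10))
```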

  18. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-31

    The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm to 3D speckle tracking in the framework of ultrasound breast strain elastography. To overcome the heavy computational cost, we investigated the use of a graphic processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be obtained relatively quickly (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]).

  19. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  20. An inverse hyper-spherical harmonics-based formulation for reconstructing 3D volumetric lung deformations

    NASA Astrophysics Data System (ADS)

    Santhanam, Anand P.; Min, Yugang; Mudur, Sudhir P.; Rastogi, Abhinav; Ruddy, Bari H.; Shah, Amish; Divo, Eduardo; Kassab, Alain; Rolland, Jannick P.; Kupelian, Patrick

    2010-07-01

    A method to estimate the deformation operator for the 3D volumetric lung dynamics of human subjects is described in this paper. For known values of air flow and volumetric displacement, the deformation operator and subsequently the elastic properties of the lung are estimated in terms of a Green's function. A Hyper-Spherical Harmonic (HSH) transformation is employed to compute the deformation operator. The hyper-spherical coordinate transformation method discussed in this paper facilitates accounting for the heterogeneity of the deformation operator using a finite number of frequency coefficients. Spirometry measurements are used to provide values for the airflow inside the lung. Using a 3D optical flow-based method, the 3D volumetric displacement of the left and right lungs, which represents the local anatomy and deformation of a human subject, was estimated from 4D-CT dataset. Results from an implementation of the method show the estimation of the deformation operator for the left and right lungs of a human subject with non-small cell lung cancer. Validation of the proposed method shows that we can estimate the Young's modulus of each voxel within a 2% error level.
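
    As a hedged sketch of the relation being estimated (the symbols are illustrative assumptions, not the paper's notation), the volumetric displacement can be written as the action of a Green's function on the airflow-derived forcing:

```latex
\mathbf{u}(\mathbf{x},t) \;=\; \int_{\Omega} G(\mathbf{x},\mathbf{x}')\, q(\mathbf{x}',t)\, \mathrm{d}\mathbf{x}'
```

    Here u denotes the 3D volumetric displacement obtained by optical flow from the 4D-CT data, q the forcing derived from the spirometry-measured airflow, and G the deformation operator, whose spatial heterogeneity is represented by a finite number of hyper-spherical harmonic coefficients.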

  1. Realization of an aerial 3D image that occludes the background scenery.

    PubMed

    Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

    2014-10-06

    In this paper we describe an aerial 3D image that occludes far background scenery, based on coarse integral volumetric imaging (CIVI) technology. There have been many volumetric display devices that present floating 3D images, most of which have not reproduced visual occlusion. CIVI is a kind of multilayered integral imaging and realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. Conventional CIVI, however, cannot show a deep space, because the number of layered panels is limited by the low transmittance of each panel. To overcome this problem, we propose a novel optical design to attain an aerial 3D image that occludes far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image. The system alternates between an aerial image mode and a background image mode. In the aerial image mode, the elemental images are shown on the CIVI display and the inserted translucent display is uniformly translucent. In the background image mode, the black shadows of the elemental images on a white background are shown on the CIVI display and the background scenery is displayed on the inserted translucent panel. By alternating these two modes at 120 Hz, an aerial 3D image that visually occludes the far background scenery is perceived by the viewer.

  2. Parallel implementation of 3D FFT with volumetric decomposition schemes for efficient molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Imamura, Toshiyuki; Sugita, Yuji

    2016-03-01

    Three-dimensional Fast Fourier Transform (3D FFT) plays an important role in a wide variety of computer simulations and data analyses, including molecular dynamics (MD) simulations. In this study, we develop hybrid (MPI+OpenMP) parallelization schemes of 3D FFT based on two new volumetric decompositions, mainly for the particle mesh Ewald (PME) calculation in MD simulations. In one scheme (1d_Alltoall), five all-to-all communications in one dimension are carried out; in the other (2d_Alltoall), one two-dimensional all-to-all communication is combined with two all-to-all communications in one dimension. 2d_Alltoall is similar to the conventional volumetric decomposition scheme. We performed benchmark tests of 3D FFT for systems with different grid sizes using a large number of processors on the K computer in RIKEN AICS. The two schemes show comparable performance and are better than existing 3D FFTs. The performance of 1d_Alltoall and 2d_Alltoall depends on the supercomputer network system and the number of processors in each dimension. There is enough leeway for users to optimize performance for their conditions. In the PME method, short-range real-space interactions as well as long-range reciprocal-space interactions are calculated. Our volumetric decomposition schemes are particularly useful when used in conjunction with the recently developed midpoint cell method for short-range interactions, due to the same decompositions of real and reciprocal spaces. The 1d_Alltoall scheme of 3D FFT takes 4.7 ms to simulate one MD cycle for a virus system containing more than 1 million atoms using 32,768 cores on the K computer.
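
    The communication structure discussed above follows from the fact that a 3D FFT factorises into 1D FFTs along each axis, with data re-shuffles (all-to-all exchanges, in a distributed setting) between axes. Below is a minimal single-node illustration of that factorisation, not the paper's MPI+OpenMP implementation.

```python
import numpy as np

def fft3d_by_axes(a):
    """Apply 1D FFTs along each axis in turn; mathematically equal to np.fft.fftn(a).
    In a pencil/volumetric decomposition, the transposes between axes become the
    all-to-all communications counted in the two schemes described above."""
    for axis in range(3):
        a = np.fft.fft(a, axis=axis)
    return a

vol = np.random.default_rng(0).standard_normal((64, 64, 64))
assert np.allclose(fft3d_by_axes(vol), np.fft.fftn(vol))
```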

  3. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
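
    As a hedged illustration of a 3D wavelet decomposition that exploits correlation in all three dimensions (this uses PyWavelets with an arbitrary wavelet, level and cube size; it is not ICER-3D's own transform, context modeler or entropy coder):

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical hyperspectral cube ordered as (bands, rows, cols).
cube = np.random.default_rng(0).standard_normal((32, 64, 64))

# Generic multilevel 3D wavelet decomposition; coeffs[0] is the coarse
# approximation subband, the remaining entries hold detail subbands per level.
coeffs = pywt.wavedecn(cube, wavelet='db2', level=2)
print(coeffs[0].shape)
```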

  4. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  5. Active segmentation of 3D axonal images.

    PubMed

    Muralidhar, Gautam S; Gopinath, Ajay; Bovik, Alan C; Ben-Yakar, Adela

    2012-01-01

    We present an active contour framework for segmenting neuronal axons in 3D confocal microscopy data. Our work is motivated by the need to conduct high-throughput experiments involving microfluidic devices and femtosecond lasers to study the genetic mechanisms behind nerve regeneration and repair. While most applications of active contours have focused on segmenting closed regions in 2D medical and natural images, there have not been many applications that focus on segmenting open-ended curvilinear structures in 2D or higher dimensions. The active contour framework we present here ties together a well-known 2D active contour model [5] with the physics of projection imaging geometry to yield a segmented axon in 3D. Qualitative results illustrate the promise of our approach for segmenting neuronal axons in 3D confocal microscopy data.

  6. 3-D imaging of the CNS.

    PubMed

    Runge, V M; Gelblum, D Y; Wood, M L

    1990-01-01

    3-D gradient echo techniques, and in particular FLASH, represent a significant advance in MR imaging strategy, allowing thin-section, high-resolution imaging through a large region of interest. Anatomical areas of application include the brain, spine, and extremities, although the majority of work to date has been performed in the brain. Superior T1 contrast, and thus sensitivity to the presence of Gd-DTPA, is achieved with 3-D FLASH when compared to the 2-D spin echo technique. There is marked arterial and venous enhancement following Gd-DTPA administration on 3-D FLASH, a less common finding with 2-D spin echo. Enhancement of the falx and tentorium is also more prominent. From a single data acquisition, requiring less than 11 min of scan time, high-resolution reformatted sagittal, coronal, and axial images can be obtained in addition to sections in any arbitrary plane. Tissue segmentation techniques can be applied and lesions displayed in three dimensions. These results may lead to the replacement of 2-D spin echo with 3-D FLASH for high-resolution T1-weighted MR imaging of the CNS, particularly in the study of mass lesions and structural anomalies. The application of similar T2-weighted gradient echo techniques may follow; however, the signal-to-noise ratio that can be achieved remains a potential limitation.

  7. Floating volumetric image formation using a dihedral corner reflector array device.

    PubMed

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuki; Yamamoto, Siori; Mukai, Takaaki; Maekawa, Satoshi

    2013-01-01

    A volumetric display system using an optical imaging device consisting of numerous dihedral corner reflectors placed perpendicular to the surface of a metal plate is proposed. Image formation by the dihedral corner reflector array (DCRA) is free from distortion and has no focal length. In the proposed volumetric display system, a two-dimensional real image is moved by a mirror scanner to scan a three-dimensional (3D) space. Cross-sectional images of a 3D object are displayed in accordance with the position of the image plane. A volumetric image is observed as a stack of the cross-sectional images. The use of the DCRA enables a compact system configuration and volumetric real-image generation with very low distortion. An experimental volumetric display system including a DCRA, a galvanometer mirror, and a digital micro-mirror device was constructed to verify the proposed method. A volumetric image consisting of 1024×768×400 voxels was formed by the experimental system.

  8. Volumetric CT-based segmentation of NSCLC using 3D-Slicer

    PubMed Central

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H.; van Baardwijk, Angela; Fennessy, Fiona M.; Lewis, John H.; De Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J. W. L.

    2013-01-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using a competitive region-growing based algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared the 3D-Slicer volumes segmented by three independent observers, who each segmented the primary tumour of 20 NSCLC patients twice, to manual slice-by-slice delineations by five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered as the “gold standard”. The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95% CI 0.81–0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck. PMID:24346241
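
    The abstract reports "overlap fractions > 0.90" without naming the metric; a Dice-style overlap between two binary delineations is one common reading, sketched below with assumed mask inputs.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice overlap of two binary segmentations (one plausible reading of the
    reported 'overlap fraction'; the abstract does not specify the metric)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```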

  9. Volumetric CT-based segmentation of NSCLC using 3D-Slicer

    NASA Astrophysics Data System (ADS)

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H.; van Baardwijk, Angela; Fennessy, Fiona M.; Lewis, John H.; de Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J. W. L.

    2013-12-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using a competitive region-growing based algorithm, implemented in the freely and publicly available 3D-Slicer software platform. We compared the 3D-Slicer volumes segmented by three independent observers, who each segmented the primary tumour of 20 NSCLC patients twice, to manual slice-by-slice delineations by five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered as the "gold standard". The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95% CI 0.81-0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck.

  10. Walker Ranch 3D seismic images

    SciTech Connect

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from a 3D seismic reflection survey over the Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline spacing of 165 feet, using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters are shown on the images. Stratigraphic information and nearby well tracks are added to the images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth are restricted for proprietary reasons. Data collection and processing were funded by Agua Caliente. Original data remain the property of Agua Caliente.

  11. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  12. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
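
    A minimal sketch of the tilted-plane partition described above, assuming the scan is available as an N×3 point array; the tolerance for "on the plane" and the array-based interface are assumptions, not 3DM's actual API.

```python
import numpy as np

def split_by_plane(points, plane_point, plane_normal, tol=1.0):
    """Partition a body-scan point cloud into on-plane / above / below sets
    using the signed distance to a (tilted) plane."""
    pts = np.asarray(points, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (pts - np.asarray(plane_point, dtype=float)) @ n   # signed distances
    return {
        "on_plane": pts[np.abs(d) <= tol],
        "above": pts[d > tol],
        "below": pts[d < -tol],
    }
```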

  13. Compression of medical volumetric datasets: physical and psychovisual performance comparison of the emerging JP3D standard and JPEG2000

    NASA Astrophysics Data System (ADS)

    Kimpe, T.; Bruylants, T.; Sneyders, Y.; Deklerck, R.; Schelkens, P.

    2007-03-01

    The size of medical data has increased significantly over the last few years. This poses severe problems for the rapid transmission of medical data across the hospital network, resulting in longer access times for the images. Long-term storage of data is also becoming more and more of a problem. In an attempt to cope with the increasing data size, lossless or lossy compression algorithms are often used. This paper compares the existing JPEG2000 compression algorithm and the new emerging JP3D standard for compression of volumetric datasets. The main benefit of JP3D is that it is truly a 3D compression algorithm, exploiting correlation not only within but also between slices of a dataset. We evaluate both lossless and lossy modes of these algorithms. As a first step we perform an objective evaluation. Using RMSE and PSNR metrics, we determine which compression algorithm performs best, for multiple compression ratios and for several clinically relevant medical datasets. It is well known that RMSE and PSNR often do not correlate well with subjectively perceived image quality. Therefore we also perform a psychovisual analysis by means of a numerical observer. With this observer model we analyze how compression artifacts are actually perceived by a human observer. Results show superior performance of the new JP3D algorithm compared to the existing JPEG2000 algorithm.
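
    For reference, the two objective metrics used in the comparison are straightforward to compute; the sketch below assumes same-shape arrays and defaults the PSNR ceiling to the reference maximum (the paper's exact peak-value convention is not stated).

```python
import numpy as np

def rmse(ref, test):
    diff = ref.astype(float) - test.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr_db(ref, test, peak=None):
    """PSNR in dB relative to `peak` (e.g. the full 12-bit range for CT)."""
    err = rmse(ref, test)
    if err == 0.0:
        return float("inf")        # identical images, i.e. the lossless case
    if peak is None:
        peak = float(np.max(ref))
    return 20.0 * np.log10(peak / err)
```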

  14. Feasibility of 3D harmonic contrast imaging.

    PubMed

    Voormolen, M M; Bouakaz, A; Krenning, B J; Lancée, C T; ten Cate, F J; de Jong, N

    2004-04-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast-rotating phased-array transducer for 3D imaging of the heart with harmonic capabilities, making it suitable for contrast imaging. In this study the feasibility of 3D harmonic contrast imaging is evaluated in vitro. A commercially available tissue-mimicking flow phantom was used in combination with SonoVue. Backscatter power spectra from a tissue and a contrast region of interest were calculated from recorded radio-frequency data. The spectra and the contrast-to-tissue ratio extracted from these spectra were used to optimize the excitation frequency, the pulse length and the receive filter settings of the transducer. Frequencies ranging from 1.66 to 2.35 MHz and pulse lengths of 1.5, 2 and 2.5 cycles were explored. An increase of more than 15 dB in the contrast-to-tissue ratio was found around the second harmonic compared with the fundamental level, at an optimal excitation frequency of 1.74 MHz and a pulse length of 2.5 cycles. Using the optimal settings for 3D harmonic contrast recordings, volume measurements of a left-ventricle-shaped agar phantom were performed. Without contrast, the extracted volume data resulted in a volume error of 1.5%; with contrast, an accuracy of 3.8% was achieved. The results show the feasibility of accurate volume measurements from 3D harmonic contrast images. Further investigations will include the clinical evaluation of the presented technique for improved assessment of the heart.
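
    A rough sketch of a contrast-to-tissue ratio computed from backscatter power spectra; the band limits, sampling rate and RF array layout are assumptions, and the windowing/averaging details of the study are not reproduced.

```python
import numpy as np

def contrast_to_tissue_ratio_db(rf_contrast, rf_tissue, fs_hz, band_hz):
    """Mean backscatter power of the contrast ROI vs the tissue ROI within a
    frequency band (e.g. around the second harmonic), expressed in dB."""
    def band_power(rf):
        spectrum = np.abs(np.fft.rfft(rf, axis=-1)) ** 2
        freqs = np.fft.rfftfreq(rf.shape[-1], d=1.0 / fs_hz)
        in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
        return spectrum[..., in_band].mean()
    return 10.0 * np.log10(band_power(rf_contrast) / band_power(rf_tissue))
```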

  15. 3D imaging system for biometric applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

    2010-04-01

    There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens-of-microns range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the need for a separate second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper will describe methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical methods considered, variations on these methods, and present experimental data obtained with the approach.

  16. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  17. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added vertex positions of the mesh. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any changes in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is also compared with other existing 3D steganography algorithms.

  18. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  19. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods just involve image processing or array processing, which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  20. Quantification of thyroid volume using 3-D ultrasound imaging.

    PubMed

    Kollorz, E K; Hahn, D A; Linke, R; Goecke, T W; Hornegger, J; Kuwert, T

    2008-04-01

    Ultrasound (US) is among the most popular diagnostic techniques today. It is non-invasive, fast, comparatively cheap, and does not require ionizing radiation. US is commonly used to examine the size and structure of the thyroid gland. In clinical routine, thyroid imaging is usually performed by means of 2-D US. Conventional approaches for measuring the volume of the thyroid gland or its nodules may therefore be inaccurate due to the lack of 3-D information. This work reports a semi-automatic segmentation approach for the classification and analysis of the thyroid gland based on 3-D US data. The images are scanned in 3-D, pre-processed, and segmented. Several pre-processing methods and an extension of a commonly used geodesic active contour level set formulation are discussed in detail. The results obtained by this approach are compared to manual interactive segmentations by a medical expert in five representative patients. Our work proposes a novel framework for the volumetric quantification of thyroid gland lobes, which may also be extended to other parenchymatous organs.
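
    A minimal sketch of the segmentation step, using scikit-image's morphological geodesic active contour on a pre-processed 3-D ultrasound volume. This stands in for the paper's extended level-set formulation; the seed placement, smoothing, balloon, and threshold parameters are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def segment_thyroid_lobe(volume_us, seed_center, seed_radius=8, iterations=150):
    """Grow a 3-D geodesic active contour from a spherical seed inside the lobe.

    `volume_us` is a float 3-D array (the pre-processed US volume); this is a
    generic GAC, not the extended formulation described in the paper.
    """
    # Edge-stopping image: small values near strong gradients (organ boundaries).
    gimage = inverse_gaussian_gradient(volume_us, alpha=100.0, sigma=2.0)

    # Spherical initial level set around the operator-provided seed point.
    zz, yy, xx = np.indices(volume_us.shape)
    cz, cy, cx = seed_center
    init = ((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2) <= seed_radius ** 2

    # Balloon force > 0 inflates the contour until it is stopped by edges.
    mask = morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init,
        smoothing=2, balloon=1, threshold=0.7)
    return mask.astype(bool)

def lobe_volume_ml(mask, voxel_mm=0.5):
    """Lobe volume in millilitres, assuming isotropic voxels of `voxel_mm` mm."""
    return mask.sum() * voxel_mm ** 3 / 1000.0
```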

  1. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic-wave-propagation-based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, sawmills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and repair cost. Also, if internal defects such as knots and decayed areas could be identified in logs, the saw blade could be oriented to exclude the defective portion and optimize the volume of high-value lumber obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted into a log, and the reflection patterns from these metals were interpreted from the radargrams acquired using a 900 MHz antenna. GPR was also able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate a 3D cylindrical volume. The actual locations of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from the 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  2. The Derivation of Fault Volumetric Properties from 3D Trace Maps Using Outcrop Constrained Discrete Fracture Network Models

    NASA Astrophysics Data System (ADS)

    Hodgetts, David; Seers, Thomas

    2015-04-01

    Fault systems are important structural elements within many petroleum reservoirs, acting as potential conduits, baffles or barriers to hydrocarbon migration. Large, seismic-scale faults often serve as reservoir bounding seals, forming structural traps which have proved to be prolific plays in many petroleum provinces. Though inconspicuous within most seismic datasets, smaller subsidiary faults, commonly within the damage zones of parent structures, may also play an important role. These smaller faults typically form narrow, tabular low-permeability zones which serve to compartmentalize the reservoir, negatively impacting hydrocarbon recovery. Though considerable improvements have been made in the visualization of reservoir-scale fault systems with the advent of 3D seismic surveys, the occlusion of smaller scale faults in such datasets is a source of significant uncertainty during prospect evaluation. The limited capacity of conventional subsurface datasets to probe the spatial distribution of these smaller scale faults has given rise to a large number of outcrop-based studies, allowing their intensity, connectivity and size distributions to be explored in detail. Whilst these studies have yielded an improved theoretical understanding of the style and distribution of sub-seismic scale faults, the ability to transform observations from outcrop into quantities that are relatable to reservoir volumes remains elusive. These issues arise from the fact that outcrops essentially offer a pseudo-3D window into the rock volume, making the extrapolation of surficial fault properties such as areal density (fracture length per unit area: P21) to equivalent volumetric measures (i.e. fracture area per unit volume: P32) applicable to fracture modelling extremely challenging. Here, we demonstrate an approach which harnesses advances in the extraction of 3D trace maps from surface reconstructions using calibrated image sequences, in combination with a novel semi

  3. Morphometrics, 3D Imaging, and Craniofacial Development

    PubMed Central

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938
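
    A small sketch of the core landmark-based geometric-morphometrics step referenced here: generalized Procrustes alignment of landmark configurations (removing translation, scale, and rotation) followed by a principal component analysis of shape variation. Written with numpy only; the specimen data and landmark counts are illustrative.

```python
import numpy as np

def align_to(reference, landmarks):
    """Least-squares superimposition of one landmark configuration onto a reference."""
    a = reference - reference.mean(axis=0)
    b = landmarks - landmarks.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(b.T @ a)
    r = u @ vt                                   # optimal rotation (may include a reflection)
    return b @ r

def generalized_procrustes(configs, n_iter=10):
    """Iteratively align all configurations to their evolving mean shape."""
    aligned = [c - c.mean(axis=0) for c in configs]
    for _ in range(n_iter):
        mean_shape = np.mean(aligned, axis=0)
        aligned = [align_to(mean_shape, c) for c in aligned]
    return np.array(aligned)

# Illustrative data: 30 specimens, 12 landmarks in 3D (e.g. craniofacial points).
rng = np.random.default_rng(0)
base = rng.normal(size=(12, 3))
specimens = [base + 0.05 * rng.normal(size=(12, 3)) for _ in range(30)]

aligned = generalized_procrustes(specimens)
flat = aligned.reshape(len(specimens), -1)       # each row = one aligned shape
flat -= flat.mean(axis=0)
_, s, _ = np.linalg.svd(flat, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)              # principal components of shape variation
print("PC1 explains", round(100 * explained[0], 1), "% of shape variance")
```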

  4. Volumetric display system based on three-dimensional scanning of inclined optical image.

    PubMed

    Miyazaki, Daisuke; Shiba, Kensuke; Sotsuka, Koji; Matsushita, Kenji

    2006-12-25

    A volumetric display system based on three-dimensional (3D) scanning of an inclined image is reported. An optical image of a two-dimensional (2D) display, which is a vector-scan display monitor placed obliquely in an optical imaging system, is moved laterally by a galvanometric mirror scanner. Inclined cross-sectional images of a 3D object are displayed on the 2D display in accordance with the position of the image plane to form a 3D image. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision because they are real images formed in a 3D space. Experimental results of volumetric imaging from computed-tomography images and 3D animated images are presented.

  5. Inverse modeling of InSAR and ground leveling data for 3D volumetric strain distribution

    NASA Astrophysics Data System (ADS)

    Gallardo, L. A.; Glowacka, E.; Sarychikhina, O.

    2015-12-01

    Wide availability of modern Interferometric Synthetic Aperture Radar (InSAR) data has made possible the extensive observation of differential surface displacements, which is becoming an efficient tool for the detailed monitoring of terrain subsidence associated with reservoir dynamics, volcanic deformation and active tectonism. Unfortunately, this increasing popularity has not been matched by the availability of automated codes to estimate underground deformation, since many of them still rely on trial-and-error subsurface model building strategies. We posit that an efficient algorithm for the volumetric modeling of differential surface displacements should match the availability of current leveling and InSAR data, and we have developed an algorithm for the joint inversion of ground leveling and dInSAR data in 3D. We assume the ground displacements originate from a stress-free volumetric strain distribution in a homogeneous elastic medium and determine the displacement field associated with an ensemble of rectangular prisms. This formulation is then used to develop a 3D conjugate gradient inversion code that searches for the three-dimensional distribution of volumetric strains that predicts the InSAR and leveling surface displacements simultaneously. The algorithm is regularized by applying discontinuous first- and zero-order Tikhonov constraints. For efficiency, the resulting computational code takes advantage of the convolution integral associated with the deformation field and basic tools for multithreaded parallelization. We extensively test our algorithm on leveling and InSAR test and field data from northwestern Mexico and compare the results to feasible geological scenarios of underground deformation.
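
    A generic sketch of the kind of regularized joint linear inversion described here: two data sets (InSAR and leveling) share one prism-strain model, and zero- and first-order Tikhonov terms are stacked into a single least-squares system. The forward matrices below are random stand-ins for the elastic Green's functions, the roughness operator is a 1D first difference, and a dense solver replaces the paper's conjugate gradient scheme for brevity.

```python
import numpy as np

def joint_tikhonov_inversion(G_insar, d_insar, G_level, d_level,
                             alpha0=1e-2, alpha1=1e-1):
    """Jointly invert two linear data sets d = G m for volumetric strains m,
    with zero-order (damping) and first-order (smoothing) Tikhonov terms."""
    n = G_insar.shape[1]
    # First-difference operator as a simple 1D stand-in for the roughness penalty.
    L1 = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    # Stack data equations and regularization equations into one least-squares system.
    A = np.vstack([G_insar, G_level, alpha0 * np.eye(n), alpha1 * L1])
    b = np.concatenate([d_insar, d_level, np.zeros(n), np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Tiny synthetic example with made-up forward operators.
rng = np.random.default_rng(1)
n_prisms = 40
m_true = np.exp(-((np.arange(n_prisms) - 20) ** 2) / 30.0)   # smooth strain anomaly
G_insar = rng.normal(size=(120, n_prisms)) / n_prisms
G_level = rng.normal(size=(15, n_prisms)) / n_prisms
d_insar = G_insar @ m_true + 0.001 * rng.normal(size=120)
d_level = G_level @ m_true + 0.001 * rng.normal(size=15)

m_est = joint_tikhonov_inversion(G_insar, d_insar, G_level, d_level)
print("relative model misfit:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```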

  6. 3-D Deep Penetration Photoacoustic Imaging with a 2-D CMUT Array.

    PubMed

    Ma, Te-Jen; Kothapalli, Sri Rajasekhar; Vaithilingam, Srikant; Oralkan, Omer; Kamaya, Aya; Wygant, Ira O; Zhuang, Xuefeng; Gambhir, Sanjiv S; Jeffrey, R Brooke; Khuri-Yakub, Butrus T

    2010-10-11

    In this work, we demonstrate 3-D photoacoustic imaging of optically absorbing targets embedded as deep as 5 cm inside a highly scattering background medium using a 2-D capacitive micromachined ultrasonic transducer (CMUT) array with a center frequency of 5.5 MHz. 3-D volumetric images and 2-D maximum intensity projection images are presented to show the objects imaged at different depths. Due to the close proximity of the CMUT to the integrated frontend circuits, the CMUT array imaging system has a low noise floor. This makes the CMUT a promising technology for deep tissue photoacoustic imaging.

  7. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  8. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  9. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  10. Visualization and volumetric structures from MR images of the brain

    SciTech Connect

    Parvin, B.; Johnston, W.; Robertson, D.

    1994-03-01

    Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from magnetic resonance imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. These regions are obtained by grouping pixels based on similarity and proximity. The slice level attributed graphs are then coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, the 3D surfaces of the brain can be constructed and visualized.
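
    A toy sketch of the slice-level attributed graph described above: regions of a labeled slice become nodes carrying intensity and size attributes, and each pair of touching regions becomes a link. The graph library (networkx), the attributes, and the test slice are illustrative; Pinta's actual constraint checking is not reproduced.

```python
import numpy as np
import networkx as nx

def slice_attributed_graph(labels, image):
    """One node per region (with mean intensity and size), one edge per pair of
    touching regions, built from a labeled 2D slice and its intensity image."""
    g = nx.Graph()
    for lab in np.unique(labels):
        mask = labels == lab
        g.add_node(int(lab), mean_intensity=float(image[mask].mean()),
                   size=int(mask.sum()))
    # Horizontally and vertically adjacent pixels with different labels -> edge.
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            g.add_edge(int(u), int(v))
    return g

# Toy slice: three roughly constant-intensity regions.
labels = np.zeros((64, 64), dtype=int)
labels[:, 32:] = 1
labels[20:40, 20:44] = 2
image = np.choose(labels, [0.2, 0.8, 0.5]) \
        + 0.01 * np.random.default_rng(2).normal(size=labels.shape)

g = slice_attributed_graph(labels, image)
print(g.nodes(data=True))
print(sorted(g.edges()))
```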

  11. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    This paper gives a detailed review of various current 3D display methods for sequential 2D medical images and of new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping and distributed collaborative rendering are discussed in depth. Different 3D display methods are presented for two kinds of medical applications: real-time navigation systems and high-fidelity diagnosis in computer-aided surgery.

  12. 3D ultrasound biomicroscopy for assessment of cartilage repair tissue: volumetric characterisation and correlation to established classification systems.

    PubMed

    Schöne, M; Männicke, N; Somerson, J S; Marquaß, B; Henkelmann, R; Mochida, J; Aigner, T; Raum, K; Schulz, R M

    2016-02-08

    Objective and sensitive assessment of cartilage repair outcomes lacks suitable methods. This study investigated the feasibility of 3D ultrasound biomicroscopy (UBM) to quantify cartilage repair outcomes volumetrically and their correlation with established classification systems. 32 sheep underwent bilateral treatment of a focal cartilage defect. One or two years post-operatively the repair outcomes were assessed and scored macroscopically (Outerbridge, ICRS-CRA), by magnetic resonance imaging (MRI, MOCART), and histopathology (O'Driscoll, ICRS-I and ICRS-II). The UBM data were acquired after MRI and used to reconstruct the shape of the initial cartilage layer, enabling the estimation of the initial cartilage thickness and defect volume as well as volumetric parameters for defect filling, repair tissue, bone loss and bone overgrowth. The quantification of the repair outcomes revealed high variations in the initial thickness of the cartilage layer, indicating the need for cartilage thickness estimation before creating a defect. Furthermore, highly significant correlations were found for the defect filling estimated from UBM to the established classification systems. 3D visualisation of the repair regions showed highly variable morphology within single samples. This raises the question as to whether macroscopic, MRI and histopathological scoring provide sufficient reliability. The biases of the individual methods will be discussed within this context. UBM was shown to be a feasible tool to evaluate cartilage repair outcomes, whereby the most important objective parameter is the defect filling. Translation of UBM into arthroscopic or transcutaneous ultrasound examinations would allow non-destructive and objective follow-up of individual patients and better comparison between the results of clinical trials.

  13. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. Ideally, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have aimed to overcome some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  14. JPEG2000 Part 10: volumetric imaging

    NASA Astrophysics Data System (ADS)

    Schelkens, Peter; Brislawn, Christopher M.; Barbarien, Joeri; Munteanu, Adrian; Cornelis, Jan P.

    2003-11-01

    Recently, the JPEG2000 committee (ISO/IEC JTC1/SC29/WG1) decided to start up a new standardization activity to support the encoding of volumetric and floating-point data sets: Part 10 - Coding Volumetric and Floating-point Data (JP3D). This future standard will support functionalities like resolution and quality scalability and region-of-interest coding, while exploiting the entropy in the additional third dimension to improve the rate-distortion performance. In this paper, we give an overview of the markets and application areas targeted by JP3D, the imposed requirements and the considered algorithms with a specific focus on the realization of the region-of-interest functionality.
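
    A toy illustration of why exploiting the third dimension improves rate-distortion performance, as stated above: a full 3D wavelet transform compacts the energy of a smooth volume into fewer coefficients than a slice-by-slice 2D transform. This uses PyWavelets as a stand-in and is not the JP3D codec; the synthetic volume, wavelet choice, and energy threshold are assumptions.

```python
import numpy as np
import pywt

# Synthetic smooth volume standing in for a CT/MR dataset (assumption: 64^3 voxels).
z, y, x = np.mgrid[0:64, 0:64, 0:64].astype(float)
vol = np.sin(x / 9.0) * np.cos(y / 7.0) * np.exp(-((z - 32.0) ** 2) / 400.0)

def compaction(coeff_array, energy_fraction=0.99):
    """Fraction of coefficients needed to retain the given share of total energy."""
    mags = np.sort(np.abs(coeff_array).ravel())[::-1]
    energy = np.cumsum(mags ** 2)
    k = np.searchsorted(energy, energy_fraction * energy[-1]) + 1
    return k / mags.size

# 2D, slice-by-slice transform (no decorrelation along the slice axis).
arr2d = np.stack([pywt.coeffs_to_array(pywt.wavedec2(vol[k], "bior4.4", level=2))[0]
                  for k in range(vol.shape[0])])

# Full 3D transform, decorrelating along the third dimension as well.
arr3d = pywt.coeffs_to_array(pywt.wavedecn(vol, "bior4.4", level=2))[0]

print("2D per-slice coefficients kept:", compaction(arr2d))
print("3D transform coefficients kept:", compaction(arr3d))
```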

  15. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    (c) Real-time detection and analysis of human gait: using a video camera, we capture walking human silhouettes for pattern modeling and gait analysis. Fig. 5 ("3D scanning result") shows the scanning result, which is fed into a Geomagic software tool for 3D meshing.

  16. Quantitative volumetric Raman imaging of three dimensional cell cultures

    PubMed Central

    Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.

    2017-01-01

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell–material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy. PMID:28327660

  17. Quantitative volumetric Raman imaging of three dimensional cell cultures

    NASA Astrophysics Data System (ADS)

    Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.

    2017-03-01

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell-material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  18. Robust volumetric change detection using mutual information with 3D fractals

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Akbari, Morris; Henning, Ronda; Pokorny, John

    2014-06-01

    We discuss a robust method for quantifying change in multi-temporal remote sensing point data in the presence of affine registration errors. Three-dimensional image processing algorithms can be used to extract and model an electronic module, consisting of a self-contained assembly of electronic components and circuitry, using an ultrasound scanning sensor. Mutual information (MI) is an effective measure of change. We propose a multi-resolution 3D fractal algorithm that is a novel extension of MI and regional mutual information (RMI). Our method is called fractal mutual information (FMI). This extension efficiently takes into account the neighborhood fractal patterns of corresponding voxels (3D pixels). The goal of this system is to quantify the change in a module due to tampering and to provide a method for quantitative and qualitative change detection and analysis.
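
    A minimal sketch of the baseline change measure that FMI extends: mutual information between two co-registered volumes computed from their joint intensity histogram. The fractal-pattern extension itself is not shown; bin count and the toy "tamper" example are illustrative.

```python
import numpy as np

def mutual_information(vol_a, vol_b, bins=64):
    """Mutual information between two co-registered volumes, from their joint histogram."""
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy test: MI drops when one volume is locally altered ("tampered").
rng = np.random.default_rng(3)
before = rng.normal(size=(32, 32, 32))
after = before.copy()
print("identical volumes:", mutual_information(before, after))
after[10:20, 10:20, 10:20] = rng.normal(size=(10, 10, 10))   # simulated tamper
print("tampered volume  :", mutual_information(before, after))
```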

  19. Volumetric and surface-based 3D MRI analyses of fetal isolated mild ventriculomegaly: brain morphometry in ventriculomegaly.

    PubMed

    Scott, Julia A; Habas, Piotr A; Rajagopalan, Vidya; Kim, Kio; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2013-05-01

    Fetal isolated mild ventriculomegaly (IMVM) is the most common brain abnormality diagnosed on prenatal ultrasound. We set out to identify potential alterations in brain development specific to IMVM in tissue volume and in cortical and ventricular local surface curvature derived from in utero magnetic resonance imaging (MRI). Multislice 2D T2-weighted MR images were acquired from 32 fetuses (16 IMVM, 16 controls) between 22 and 25.5 gestational weeks. The images were motion-corrected and reconstructed into 3D volumes for volumetric and curvature analyses. The brain images were automatically segmented into cortical plate, cerebral mantle, deep gray nuclei, and ventricles. Volumes were compared between IMVM and control subjects. Surfaces were extracted from the segmentations for local mean surface curvature measurement on the inner cortical plate and the ventricles. Linear models were estimated for age-related and ventricular-volume-associated changes in local curvature in both the inner cortical plate and the ventricles. While ventricular volume was enlarged in IMVM, all other tissue volumes did not differ from the control group. Ventricles increased in curvature with age along the atrium and anterior body. Increasing ventricular volume was associated with reduced curvature over most of the ventricular surface. The cortical plate changed in curvature with age at multiple sites of primary sulcal formation. Reduced cortical folding was detected near the parieto-occipital sulcus in IMVM subjects. While tissue volume appears to be preserved in brains with IMVM, cortical folding may be affected in regions where the ventricles are dilated.

  20. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  1. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.

    PubMed

    Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Zheng, Yefeng; Hornegger, Joachim; Comaniciu, Dorin

    2016-03-07

    Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow, from diagnosis and patient stratification to therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: efficiency in processing large volumetric input images, and the need for strong, representative image features. When the object of interest is parametrized in a high-dimensional space, standard volume scanning techniques do not scale to the enormous number of potential hypotheses, and representative image features require significant manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. Deep learning systems automatically identify, disentangle and learn explanatory attributes directly from low-level image data; however, their application in the volumetric setting is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D (three each for location, orientation, and scale), resulting in a prohibitive number of scanning hypotheses, on the order of billions for typical sampling. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality, for example starting from location only (3D

  2. Scanners and drillers: Characterizing expert visual search through volumetric images

    PubMed Central

    Drew, Trafton; Vo, Melissa Le-Hoa; Olwal, Alex; Jacobson, Francine; Seltzer, Steven E.; Wolfe, Jeremy M.

    2013-01-01

    Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a “stack” of 2-D chest CT “slices.” At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: “drilling” and “scanning.” Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. PMID:23922445

  3. Scanners and drillers: characterizing expert visual search through volumetric images.

    PubMed

    Drew, Trafton; Vo, Melissa Le-Hoa; Olwal, Alex; Jacobson, Francine; Seltzer, Steven E; Wolfe, Jeremy M

    2013-08-06

    Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a "stack" of 2-D chest CT "slices." At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: "drilling" and "scanning." Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated.

  4. Research of range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Yang, Haitao; Zhao, Hongli; Youchen, Fan

    2016-10-01

    Target recognition based on laser image data is one of the key technologies of laser active imaging systems. This paper reviews the current state of 3-D imaging development, analyzes the current technological bottlenecks, and describes a prototype range-gated system built to obtain a set of range-gated slice images, from which 3-D images of the target were constructed by a binary method and a centroid method, respectively. By constructing the 3-D image from different numbers of slice images, we explored the relationship between the number of images and the reconstruction accuracy. The experiment analyzed the impact of the two algorithms, the binary method and the centroid method, on the results of 3-D image reconstruction. For the binary method, a comparative analysis was made of the impact of different threshold values on the reconstruction, with thresholds of 0.1, 0.2, 0.3 and an adaptive value used for 3-D reconstruction of the slice images. For the centroid method, 15, 10, 6, 3, and 2 images were respectively used to realize the 3-D reconstruction. Experimental results showed that, with the same number of slice images, the accuracy of the centroid method was higher than that of the binary method, and that the binary method depended strongly on the choice of threshold; as the number of slice images decreased, the accuracy of images reconstructed by the centroid method also decreased, and at least three slice images were required to obtain one 3-D image.
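
    A minimal sketch of the two depth-recovery rules compared above, written as generic per-pixel operations on a stack of range-gated slice images: the centroid method takes the intensity-weighted mean gate range, and the binary method takes the first gate whose intensity exceeds a threshold. Gate spacing, threshold, and the toy target are assumptions.

```python
import numpy as np

def centroid_depth(slices, gate_ranges):
    """Centroid (intensity-weighted mean) range per pixel; pixels with no return -> NaN."""
    slices = np.asarray(slices, dtype=float)            # shape (n_slices, H, W)
    gate_ranges = np.asarray(gate_ranges, dtype=float)  # range of each gate, metres
    weights = slices.sum(axis=0)
    depth = np.tensordot(gate_ranges, slices, axes=(0, 0))
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(weights > 0, depth / weights, np.nan)

def binary_depth(slices, gate_ranges, threshold=0.3):
    """Binary alternative: range of the first gate whose normalized intensity exceeds the threshold."""
    slices = np.asarray(slices, dtype=float)
    hit = slices / (slices.max() + 1e-12) > threshold    # (n_slices, H, W) boolean
    first = hit.argmax(axis=0)                           # index of the first True (0 if none)
    depth = np.asarray(gate_ranges, dtype=float)[first]
    return np.where(hit.any(axis=0), depth, np.nan)

# Toy target: a tilted plane sampled by 15 gates between 50 m and 64 m.
gates = np.linspace(50, 64, 15)
true_depth = 50 + 14 * np.linspace(0, 1, 64)[None, :] * np.ones((64, 64))
slices = np.exp(-((gates[:, None, None] - true_depth) ** 2) / 2.0)   # gate responses
err = centroid_depth(slices, gates) - true_depth
print("centroid RMS error:", np.sqrt(np.nanmean(err ** 2)), "m")
```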

  5. Soft-tissue volumetric changes following monobloc distraction procedure: analysis using digital three-dimensional photogrammetry system (3dMD).

    PubMed

    Chan, Fuan Chiang; Kawamoto, Henry K; Federico, Christina; Bradley, James P

    2013-03-01

    We have previously reported that monobloc advancement by distraction osteogenesis resulted in decreased morbidity and greater advancement with less relapse compared with acute monobloc advancement with bone grafting. In this study, we examine the three-dimensional (3D) volumetric soft-tissue changes in monobloc distraction. Patients with syndromic craniosynostosis who underwent monobloc distraction from 2002 to 2010 at the University of California-Los Angeles Craniofacial Center were studied (n = 12). We recorded diagnosis, indications for surgery, and volumetric changes for skeletal and soft-tissue midface structures (preoperative/postoperative [6 weeks]/follow-up [>1 year]). Computed tomography scans and a digital 3D photogrammetry system were used for image analysis. Patients ranged from 6 to 14 years of age (mean, 10.1 years) at the time of the operation (follow-up, 2-11 years); mean distraction advancement was 19.4 mm (range, 14-25 mm). There was a mean increase in 3D volumetric soft tissue of 99.5 ± 4.0 cm(3) (P < 0.05) at 6 weeks and 94.9 ± 3.6 cm(3) (P < 0.05) at 1-year follow-up. When comparing soft-tissue changes at 6 weeks postoperative with those at 1-year follow-up, relapse was minimal. The overall mean 3D skeletal change was 108.9 ± 4.2 cm(3). For every 1 cm(3) of skeletal gain, there was 0.78 cm(3) of soft-tissue gain. Monobloc advancement by distraction osteogenesis using internal devices resulted in increased volumetric soft-tissue changes, which remained stable at 1 year. The positive linear correlation between soft-tissue increments and bony advancement can be incorporated into the planning of osteotomies to achieve optimum surgical outcomes with monobloc distraction.

  6. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Summary Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry as well as provide insights and perspectives on the process of generating 3D mass spectral data along with a discussion of the process necessary to generate a 3D image volume. PMID:22276611

  7. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).

  8. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  9. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diablo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we

  10. Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy

    PubMed Central

    Hajj, Bassam; Wisniewski, Jan; El Beheiry, Mohamed; Chen, Jiji; Revyakin, Andrey; Wu, Carl; Dahan, Maxime

    2014-01-01

    Single molecule-based superresolution imaging has become an essential tool in modern cell biology. Because of the limited depth of field of optical imaging systems, one of the major challenges in superresolution imaging resides in capturing the 3D nanoscale morphology of the whole cell. Despite many previous attempts to extend the application of photo-activated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) techniques into three dimensions, effective localization depths do not typically exceed 1.2 µm. Thus, 3D imaging of whole cells (or even large organelles) still demands sequential acquisition at different axial positions and, therefore, suffers from the combined effects of out-of-focus molecule activation (increased background) and bleaching (loss of detections). Here, we present the use of multifocus microscopy for volumetric multicolor superresolution imaging. By simultaneously imaging nine different focal planes, the multifocus microscope instantaneously captures the distribution of single molecules (either fluorescent proteins or synthetic dyes) throughout an ∼4-µm-deep volume, with lateral and axial localization precisions of ∼20 and 50 nm, respectively. The capabilities of multifocus microscopy to rapidly image the 3D organization of intracellular structures are illustrated by superresolution imaging of the mammalian mitochondrial network and yeast microtubules during cell division. PMID:25422417

  11. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights into optical 3D imagery. We explore the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from a limited number of available 2D data. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with Maximum Intensity Projection can generate 3D views of the considered scene, from which we can extract the 3D concealed object in real time. With different original numerical and experimental examples, we investigate the effects of the input contrasts. We show the robustness and stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
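
    A tiny sketch of the visualization step mentioned above, Maximum Intensity Projection of a reconstructed 3D volume along its three axes (the tomographic reconstruction itself is not reproduced). The toy volume with a bright "concealed object" inside weak clutter is an assumption for illustration only.

```python
import numpy as np

def maximum_intensity_projections(volume):
    """Maximum Intensity Projections of a reconstructed 3D volume along the three axes."""
    return {"axial": volume.max(axis=0),
            "coronal": volume.max(axis=1),
            "sagittal": volume.max(axis=2)}

# Toy volume: a bright sphere (the "concealed object") inside weaker clutter.
z, y, x = np.indices((64, 64, 64))
volume = 0.1 * np.random.default_rng(4).random((64, 64, 64))
volume[(z - 40) ** 2 + (y - 25) ** 2 + (x - 30) ** 2 <= 8 ** 2] = 1.0

mips = maximum_intensity_projections(volume)
print({name: proj.shape for name, proj in mips.items()})
```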

  12. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. I then explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
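
    A small sketch of the refocusing idea mentioned above, as a generic real-domain shift-and-add: each sub-aperture view is shifted in proportion to its position in the aperture and the views are averaged, so a chosen depth comes into focus. This is not the author's specific processing method; the data layout, `slope` parameter, and toy scene are illustrative.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(sub_aperture_images, positions, slope):
    """Synthetic refocusing: shift each view by slope * (aperture position), then average."""
    acc = np.zeros_like(sub_aperture_images[0], dtype=float)
    for img, (u, v) in zip(sub_aperture_images, positions):
        acc += nd_shift(img.astype(float), shift=(slope * v, slope * u),
                        order=1, mode="nearest")
    return acc / len(sub_aperture_images)

# Toy light field: 5x5 views of a dot whose parallax corresponds to slope = 2 px per unit.
views, positions = [], []
for u in range(-2, 3):
    for v in range(-2, 3):
        img = np.zeros((64, 64))
        img[32 - 2 * v, 32 - 2 * u] = 1.0          # the dot moves with the viewpoint
        views.append(img)
        positions.append((u, v))

sharp = refocus(views, positions, slope=2.0)        # refocused at the dot's depth
blurred = refocus(views, positions, slope=0.0)      # refocused at a different depth
print("peak when focused:", sharp.max(), " peak when defocused:", round(blurred.max(), 3))
```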

  13. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and a three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  14. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of the particle detection and mapping function uncertainties. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty, which is estimated as an extension of the 2D PIV uncertainty framework. Finally, the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
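
    A deliberately simple sketch of the final propagation step referred to above: independent elemental uncertainties combine in quadrature and are scaled by the factors of the data-reduction chain (here just magnification and time separation). The two input terms stand in for the position and cross-correlation uncertainties of the paper; the numbers and function name are illustrative.

```python
import numpy as np

def combined_velocity_uncertainty(sigma_position, sigma_correlation, dt, magnification=1.0):
    """First-order propagation: displacement uncertainties (in pixels) add in
    quadrature, then scale by magnification (m/px) and 1/dt to give m/s."""
    sigma_displacement = np.sqrt(sigma_position ** 2 + sigma_correlation ** 2)
    return magnification * sigma_displacement / dt

# Illustrative numbers: 0.08 px triangulation + 0.10 px correlation uncertainty,
# 1 ms pulse separation, 50 micrometres per pixel.
sigma_u = combined_velocity_uncertainty(0.08, 0.10, dt=1e-3, magnification=50e-6)
print("velocity uncertainty:", sigma_u, "m/s")
```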

  15. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  16. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  17. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Hypoxia, or low oxygen levels, deprives cancer cells of oxygen and confers resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to non-invasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and on hemoglobin status - oxygen saturation and hemoglobin concentration - using in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and of different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and the dependence on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  18. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
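
    A small sketch of the resolution estimate described above: the full width at half maximum of a 1D profile across the image of a thin fibre, with linear interpolation at the half-maximum crossings. The Gaussian test profile and its parameters are illustrative, chosen so the analytic FWHM is close to the reported 0.42 mm.

```python
import numpy as np

def fwhm(profile, spacing_mm):
    """Full width at half maximum of a 1D profile, interpolating the half-max crossings."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = int(above[0]), int(above[-1])
    x_left = float(left)
    if left > 0:  # interpolate between the last sample below half and the first above
        x_left = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    x_right = float(right)
    if right < len(profile) - 1:
        x_right = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    return (x_right - x_left) * spacing_mm

# Gaussian test profile sampled every 0.1 mm; analytic FWHM = 2.355 * sigma ~ 0.42 mm.
x = np.arange(-2.0, 2.0, 0.1)
profile = np.exp(-x ** 2 / (2 * 0.18 ** 2))
print(round(fwhm(profile, 0.1), 3), "mm")
```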

  19. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  20. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  1. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging.
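
    A standard four-channel polarimetric calculation of the kind this record builds on: linear Stokes parameters and degree of linear polarization from intensity images taken through analysers at 0°, 45°, 90° and 135°. The Maximum Likelihood estimation and Total Variation denoising steps of the paper are not reproduced; the Poisson toy scene simply mimics photon-limited counts.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization from four analyser
    orientations; with photon-starved data these inputs should be estimates of the
    mean rates rather than raw counts."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    with np.errstate(invalid="ignore", divide="ignore"):
        dolp = np.where(s0 > 0, np.sqrt(s1 ** 2 + s2 ** 2) / s0, 0.0)
    return s0, s1, s2, dolp

# Toy example: a partially polarized patch inside an unpolarized background.
rng = np.random.default_rng(5)
base = np.full((64, 64), 50.0)
i0, i45 = rng.poisson(base), rng.poisson(base)
i90, i135 = rng.poisson(base), rng.poisson(base)
i0[20:40, 20:40] = rng.poisson(base[20:40, 20:40] * 1.6)    # polarized object boosts I0
i90[20:40, 20:40] = rng.poisson(base[20:40, 20:40] * 0.4)   # and suppresses I90

s0, s1, s2, dolp = linear_stokes(*(a.astype(float) for a in (i0, i45, i90, i135)))
print("background DoLP ~", round(float(dolp[:10, :10].mean()), 3),
      " object DoLP ~", round(float(dolp[25:35, 25:35].mean()), 3))
```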

  2. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  3. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  4. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; both for instruments having ideal performance as well as for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

  5. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  6. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  7. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goals for the first year of this three-dimensional elastodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  8. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
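
    A much simplified version of the rigid GMM registration step (without the orientation extension and bifurcation weighting described above) might look like this; the isotropic sigma and the choice of optimizer are assumptions.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.transform import Rotation

        def negative_log_likelihood(params, source, target, sigma=2.0):
            """Isotropic GMM likelihood of the transformed source points under the target mixture."""
            rot = Rotation.from_rotvec(params[:3]).as_matrix()
            moved = source @ rot.T + params[3:]
            # Squared distances between every moved source point and every target point.
            d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
            # Each target point is one mixture component with variance sigma^2.
            ll = np.logaddexp.reduce(-d2 / (2 * sigma ** 2), axis=1)
            return -ll.sum()

        def register_rigid(source, target, sigma=2.0):
            """Rigid registration of two 3-D point sets (Nx3 arrays) via GMM likelihood."""
            res = minimize(negative_log_likelihood, np.zeros(6),
                           args=(source, target, sigma), method="Powell")
            rot = Rotation.from_rotvec(res.x[:3]).as_matrix()
            return rot, res.x[3:]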

  9. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.

    PubMed

    Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin

    2016-05-01

    Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces and the need for representative image features, which otherwise require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitively large number (billions) of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45

  10. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    DTIC Science & Technology

    2014-05-01

    Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization. David N. Ford, 2014. Additive manufacturing (3D printing). Research context and problem: learning-curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  11. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  12. All Photons Imaging Through Volumetric Scattering

    PubMed Central

    Satat, Guy; Heshmat, Barmak; Raviv, Dan; Raskar, Ramesh

    2016-01-01

    Imaging through thick highly scattering media (sample thickness ≫ mean free path) can realize broad applications in biomedical and industrial imaging as well as remote sensing. Here we propose a computational “All Photons Imaging” (API) framework that utilizes time-resolved measurement for imaging through thick volumetric scattering by using both early-arriving (non-scattered) and diffused photons. As opposed to other methods which aim to lock on specific photons (coherent, ballistic, acoustically modulated, etc.), this framework aims to use all of the optical signal. Compared to conventional early-photon measurements for imaging through a 15 mm tissue phantom, our method shows a twofold improvement in spatial resolution (a 4 dB increase in peak SNR). This all-optical, calibration-free framework enables widefield imaging through thick turbid media, and opens new avenues in non-invasive testing, analysis, and diagnosis. PMID:27683065

  13. All Photons Imaging Through Volumetric Scattering

    NASA Astrophysics Data System (ADS)

    Satat, Guy; Heshmat, Barmak; Raviv, Dan; Raskar, Ramesh

    2016-09-01

    Imaging through thick highly scattering media (sample thickness ≫ mean free path) can realize broad applications in biomedical and industrial imaging as well as remote sensing. Here we propose a computational “All Photons Imaging” (API) framework that utilizes time-resolved measurement for imaging through thick volumetric scattering by using both early-arriving (non-scattered) and diffused photons. As opposed to other methods which aim to lock on specific photons (coherent, ballistic, acoustically modulated, etc.), this framework aims to use all of the optical signal. Compared to conventional early-photon measurements for imaging through a 15 mm tissue phantom, our method shows a twofold improvement in spatial resolution (a 4 dB increase in peak SNR). This all-optical, calibration-free framework enables widefield imaging through thick turbid media, and opens new avenues in non-invasive testing, analysis, and diagnosis.

  14. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured from the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined because of the noise pattern. For surgery where the locations of a structure are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
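
    One common way to derive an MTF from a measured point spread function and read off the 10% MTF point is sketched below; the paper does not give its exact procedure, so the radial-averaging details here are assumptions.

        import numpy as np

        def mtf_from_psf(psf, pixel_mm, n_bins=100):
            """Radially averaged MTF of a 2-D point spread function.

            Returns spatial frequencies (cycles/mm) and the normalised MTF."""
            mtf2d = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
            mtf2d /= mtf2d.max()
            ny, nx = psf.shape
            fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_mm))
            fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_mm))
            fr = np.hypot(*np.meshgrid(fx, fy))
            bins = np.linspace(0, fr.max(), n_bins)
            idx = np.digitize(fr.ravel(), bins)
            mtf = []
            for i in range(1, len(bins)):
                vals = mtf2d.ravel()[idx == i]
                mtf.append(vals.mean() if vals.size else 0.0)
            return bins[1:], np.array(mtf)

        def frequency_at_mtf(freq, mtf, level=0.10):
            """First frequency at which the MTF drops to the given level (e.g. 10%)."""
            below = np.where(mtf <= level)[0]
            return freq[below[0]] if below.size else None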

  15. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy

    SciTech Connect

    Tsai, Tsung-Yuan; Lu, Tung-Wu; Chen, Chung-Ming; Kuo, Mei-Ying; Hsu, Horng-Chaung

    2010-03-15

    Purpose: Accurate measurement of the three-dimensional (3D) rigid body and surface kinematics of the natural human knee is essential for many clinical applications. Existing techniques are limited either in their accuracy or in the lack of realistic experimental evaluation of their measurement errors. The purposes of the study were to develop a volumetric model-based 2D to 3D registration method, called the weighted edge-matching score (WEMS) method, for measuring natural knee kinematics with single-plane fluoroscopy, to determine experimentally the measurement errors, and to compare its performance with that of the pattern intensity (PI) and gradient difference (GD) methods. Methods: The WEMS method gives higher priority to matching of the longer edges of the digitally reconstructed radiograph and fluoroscopic images. The measurement errors of the methods were evaluated based on a human cadaveric knee at 11 flexion positions. Results: The accuracy of the WEMS method was determined experimentally to be less than 0.77 mm for the in-plane translations, 3.06 mm for the out-of-plane translation, and 1.13 deg. for all rotations, which is better than that of the PI and GD methods. Conclusions: A new volumetric model-based 2D to 3D registration method has been developed for measuring 3D in vivo kinematics of natural knee joints with single-plane fluoroscopy. With the equipment used in the current study, the accuracy of the WEMS method is considered acceptable for the measurement of the 3D kinematics of the natural knee in clinical applications.
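
    A simplified reading of the weighted edge-matching idea (not the authors' exact score): extract edges from the DRR and the fluoroscopic image, weight each DRR edge pixel by the length of the connected edge it belongs to, and measure the weighted fraction that overlaps the fluoroscopic edges. The tolerance parameter is an assumption.

        import numpy as np
        from skimage.feature import canny
        from scipy.ndimage import label, binary_dilation

        def weighted_edge_matching_score(drr, fluoro, tol_pixels=1):
            """Score a DRR against a fluoroscopic image, favouring long matching edges."""
            drr_edges = canny(drr)
            fluoro_edges = binary_dilation(canny(fluoro), iterations=tol_pixels)
            # Weight every DRR edge pixel by the size (length) of its connected edge.
            labels, _ = label(drr_edges)
            sizes = np.bincount(labels.ravel())
            weights = np.where(drr_edges, sizes[labels], 0).astype(float)
            matched = weights[fluoro_edges].sum()
            return matched / max(weights.sum(), 1e-12)   # 1.0 = all weighted edges matched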

  16. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  17. 3D Whole Heart Imaging for Congenital Heart Disease

    PubMed Central

    Greil, Gerald; Tandon, Animesh (Aashoo); Silva Vieira, Miguel; Hussain, Tarique

    2017-01-01

    Three-dimensional (3D) whole heart techniques form a cornerstone in cardiovascular magnetic resonance imaging of congenital heart disease (CHD). It offers significant advantages over other CHD imaging modalities and techniques: no ionizing radiation; ability to be run free-breathing; ECG-gated dual-phase imaging for accurate measurements and tissue properties estimation; and higher signal-to-noise ratio and isotropic voxel resolution for multiplanar reformatting assessment. However, there are limitations, such as potentially long acquisition times with image quality degradation. Recent advances in and current applications of 3D whole heart imaging in CHD are detailed, as well as future directions. PMID:28289674

  18. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modelling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modelling, procedural grammar-based modelling, close-range photogrammetry-based modelling, and approaches based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suited to image-based 3D city modelling. A literature study shows that, to date, no complete comparative study is available on creating a full 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modelling approaches. The comparison is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, with comments on what can and cannot be done with each package. The study concludes that each package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  19. A colour image reproduction framework for 3D colour printing

    NASA Astrophysics Data System (ADS)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full-colour 3D printing are introduced, and a framework for the colour image reproduction process for 3D colour printing is proposed. A special focus is placed on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to reproduce colours faithfully in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post colour corrections, a further improvement in the colour process is achieved for 3D printed objects.

  20. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
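
    The general shape of such a motion model (a PCA over displacement vector fields from the 4D image, with the mode weights fitted so that the deformed and projected reference volume matches the measured kV projection) can be sketched as below. This is a generic sketch, not the authors' formulation; the project and warp operators are assumed to be supplied by the application, and the Gauss-Newton details are illustrative.

        import numpy as np

        def build_motion_model(dvfs, n_modes=3):
            """PCA motion model from displacement vector fields (each of shape volume_shape + (3,))."""
            X = np.stack([d.ravel() for d in dvfs])        # phases x (3 * voxels)
            mean = X.mean(axis=0)
            _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, vt[:n_modes]                      # mean DVF and principal modes

        def fit_weights(kv_projection, mean, modes, project, warp, reference, lam=1e-6):
            """Least-squares fit of mode weights so that project(warp(reference, dvf))
            matches the measured 2-D kV projection.

            `project(volume)` and `warp(volume, dvf)` are application-specific operators
            (assumed available); a finite-difference Jacobian is used for simplicity."""
            def residual(w):
                dvf = (mean + w @ modes).reshape(reference.shape + (3,))
                return (project(warp(reference, dvf)) - kv_projection).ravel()

            w = np.zeros(modes.shape[0])
            for _ in range(10):                            # Gauss-Newton iterations
                r0 = residual(w)
                J = np.stack([(residual(w + 1e-2 * e) - r0) / 1e-2
                              for e in np.eye(len(w))], axis=1)
                step = np.linalg.lstsq(J.T @ J + lam * np.eye(len(w)), J.T @ r0, rcond=None)[0]
                w -= step
            return w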

  1. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  2. Computational assessment of visual search strategies in volumetric medical images

    PubMed Central

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M.; Haygood, Tamara Miner; Markey, Mia K.

    2016-01-01

    Abstract. When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: “drilling” (restrict eye movements to a small region of the image while quickly scrolling through slices), or “scanning” (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either “drilling” or “scanning” when searching for target T’s in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers’ gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers’ fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus “drilling” may be more efficient than “scanning.” PMID:26759815

  3. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies on the intensity of reflector amplitudes

  4. Digital holography and 3D imaging: introduction to feature issue.

    PubMed

    Kim, Myung K; Hayasaki, Yoshio; Picart, Pascal; Rosen, Joseph

    2013-01-01

    This feature issue of Applied Optics on Digital Holography and 3D Imaging is the sixth of an approximately annual series. Forty-seven papers are presented, covering a wide range of topics in phase-shifting methods, low coherence methods, particle analysis, biomedical imaging, computer-generated holograms, integral imaging, and many others.

  5. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The algorithm presented here applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D watermark based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even if the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  6. a Geometric Processing Workflow for Transforming Reality-Based 3d Models in Volumetric Meshes Suitable for Fea

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Guidi, G.

    2017-02-01

    Conservation of Cultural Heritage is a key issue, and structural changes and damage can influence the mechanical behaviour of artefacts and buildings. The use of Finite Element Methods (FEM) for mechanical analysis is widely employed in modelling stress behaviour. The typical workflow involves the use of CAD 3D models made of Non-Uniform Rational B-Spline (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the resulting models are not suitable for direct use in FEA: the mesh has to be converted to a volumetric one, and its density has to be reduced, since the computational complexity of a FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method aiming to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The proposed approach is based on a careful use of retopology procedures and a transformation of this model into a mathematical one made of NURBS surfaces suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency made possible by the retopology step is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, while creating a topology that is more favourable for the automatic NURBS conversion.

  7. Progresses in 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Navarro, Héctor; Pons, Amparo; Javidi, Bahram

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for any additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray optics point of view, of the optical effects of integral imaging systems and their interaction with the observer.

  8. DCT and DST Based Image Compression for 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts, where the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at up to a 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 for 3D reconstruction, with perceptual quality equivalent to JPEG2000.
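
    The forward transform and quantization stage described above might be written as follows, with the entropy coding and minimization-encoding stages omitted; the quantization step sizes and the size of the finely quantized block are illustrative, not taken from the paper.

        import numpy as np
        from scipy.fft import dct, dst, idct, idst

        def forward_transform(image, q_low=1.0, q_high=20.0, keep=32):
            """Row-wise DCT, then column-wise DST, then uniform quantization.

            The `keep` x `keep` low-frequency block is quantized finely (q_low),
            the remaining high frequencies coarsely (q_high)."""
            coeffs = dct(image.astype(float), axis=1, norm="ortho")   # 1-D DCT per row
            coeffs = dst(coeffs, axis=0, norm="ortho")                # 1-D DST per column
            q = np.full(image.shape, q_high)
            q[:keep, :keep] = q_low
            return np.round(coeffs / q).astype(np.int32), q

        def inverse_transform(quantized, q):
            """Dequantize and invert the DST/DCT pair to recover the image."""
            coeffs = quantized * q
            coeffs = idst(coeffs, axis=0, norm="ortho")
            return idct(coeffs, axis=1, norm="ortho")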

  9. Diverging Wave Volumetric Imaging Using Subaperture Beamforming.

    PubMed

    Santos, Pedro; Haugen, Geir Ultveit; Lovstakken, Lasse; Samset, Eigil; D'hooge, Jan

    2016-12-01

    Several clinical settings could benefit from 3-D high frame rate (HFR) imaging and, in particular, HFR 3-D tissue Doppler imaging (TDI). To date, the proposed methodologies are based mostly on experimental ultrasound platforms, making their translation to clinical systems nontrivial as these have additional hardware constraints. In particular, clinically used 2-D matrix array transducers rely on subaperture (SAP) beamforming to limit cabling between the ultrasound probe and the back-end console. Therefore, this paper is aimed at assessing the feasibility of HFR 3-D TDI using diverging waves (DWs) on a clinical transducer with SAP beamforming limitations. Simulation studies showed that the combination of a single DW transmission with SAP beamforming results in severe imaging artifacts due to grating lobes and reduced penetration. Interestingly, a promising tradeoff between image quality and frame rate was achieved for scan sequences with a moderate number of transmit beams. In particular, a sparse sequence with nine transmissions showed good imaging performance for an imaging sector of 70° × 70° at volume rates of approximately 600 Hz. Subsequently, this sequence was implemented in a clinical system and TDI was recorded in vivo on healthy subjects. Velocity curves were extracted and compared against conventional TDI (i.e., with focused transmit beams). The results showed similar velocities between both beamforming approaches, with a cross-correlation of 0.90 ± 0.11 between the traces of each mode. Overall, this paper indicates that HFR 3-D TDI is feasible in systems with clinical 2-D matrix arrays, despite the limitations of SAP beamforming.

  10. 3D Subharmonic Ultrasound Imaging In Vitro and In Vivo

    PubMed Central

    Eisenbrey, John R.; Sridharan, Anush; Machado, Priscilla; Zhao, Hongjia; Halldorsdottir, Valgerdur G.; Dave, Jaydev K.; Liu, Ji-Bin; Park, Suhyun; Dianis, Scott; Wallace, Kirk; Thomenius, Kai E.; Forsberg, F.

    2012-01-01

    Rationale and Objectives While contrast-enhanced ultrasound imaging techniques such as harmonic imaging (HI) have evolved to reduce tissue signals using the nonlinear properties of the contrast agent, levels of background suppression have been mixed. Subharmonic imaging (SHI) offers near-complete tissue suppression by centering the receive bandwidth at half the transmitting frequency. In this work we demonstrate the feasibility of 3D SHI and compare it to 3D HI. Materials and Methods 3D HI and SHI were implemented on a Logiq 9 ultrasound scanner (GE Healthcare, Milwaukee, Wisconsin) with a 4D10L probe. Four-cycle SHI was implemented to transmit at 5.8 MHz and receive at 2.9 MHz, while 2-cycle HI was implemented to transmit at 5 MHz and receive at 10 MHz. The ultrasound contrast agent Definity (Lantheus Medical Imaging, North Billerica, MA) was imaged within a flow phantom and the lower pole of two canine kidneys in both HI and SHI modes. Contrast to tissue ratios (CTR) and rendered images were compared offline. Results SHI resulted in significant improvement in CTR levels relative to HI both in vitro (12.11±0.52 vs. 2.67±0.77, p<0.001) and in vivo (5.74±1.92 vs. 2.40±0.48, p=0.04). Rendered 3D SHI images provided better tissue suppression and a greater overall view of vessels in a flow phantom and canine renal vasculature. Conclusions The successful implementation of SHI in 3D allows imaging of vascular networks over a heterogeneous sample volume and should improve future diagnostic accuracy. Additionally, 3D SHI provides improved CTR values relative to 3D HI. PMID:22464198
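
    The contrast-to-tissue ratios reported above are typically computed from mean signal power in contrast and tissue regions of interest, e.g. as in the sketch below; the exact ROI definitions and scaling used in the study may differ.

        import numpy as np

        def contrast_to_tissue_ratio_db(image, contrast_mask, tissue_mask):
            """CTR in dB from mean signal power in contrast vs. tissue ROIs.

            `image` holds envelope-detected (linear-scale) ultrasound data; the masks
            are boolean ROIs over the same grid."""
            p_contrast = np.mean(image[contrast_mask].astype(float) ** 2)
            p_tissue = np.mean(image[tissue_mask].astype(float) ** 2)
            return 10.0 * np.log10(p_contrast / p_tissue)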

  11. Influence of georeference for saturated excess overland flow modelling using 3D volumetric soft geo-objects

    NASA Astrophysics Data System (ADS)

    Izham, Mohamad Yusoff; Muhamad Uznir, Ujang; Alias, Abdul Rahman; Ayob, Katimon; Wan Ruslan, Ismail

    2011-04-01

    Existing 2D data structures are often insufficient for analysing the dynamics of saturation excess overland flow (SEOF) within a basin. Moreover, all stream networks and soil surface structures in GIS must be preserved within an appropriate projection-plane fitting technique, known as georeferencing. Including the 3D volumetric structure in the current soft geo-objects simulation model is a substantial step towards representing the 3D soft geo-objects of SEOF dynamically within a basin, by visualising saturated flow and overland flow volume. This research attempts to visualise the influence of the georeference system on the dynamics of overland flow coverage and total overland flow volume generated by the SEOF process using the VSG data structure. The data structure is driven by Green-Ampt methods and the Topographic Wetness Index (TWI). VSGs are analysed by focusing on the spatial object preservation techniques of the conformal-based Malaysian Rectified Skew Orthomorphic (MRSO) and the equidistant-based Cassini-Soldner projection planes under the existing geodetic Malaysian Revised Triangulation 1948 (MRT48) datum and the newly implemented Geocentric Datum for Malaysia (GDM2000). The simulated results visualise the deformation of SEOF coverage under different georeference systems via their projection planes, which yield dissimilar computed SEOF areas and overland flow volumes. The integration of georeferencing, 3D GIS and the saturation excess mechanism provides unifying evidence towards successful landslide and flood disaster management by envisioning the streamflow generating process (mainly SEOF) in a 3D environment.
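
    The Topographic Wetness Index driving the saturation-excess mechanism is commonly defined as TWI = ln(a / tan β), with a the specific upslope contributing area and β the local slope. A minimal grid-based sketch follows; the flow-accumulation grid is assumed to be computed elsewhere (e.g. by a D8 routing algorithm), and the numerical safeguards are illustrative.

        import numpy as np

        def topographic_wetness_index(dem, flow_accumulation, cell_size):
            """TWI = ln(a / tan(beta)) on a regular elevation grid.

            `dem` is elevation [m]; `flow_accumulation` is the number of upslope cells
            per cell (assumed to be computed by a separate flow-routing step)."""
            dzdy, dzdx = np.gradient(dem, cell_size)
            slope = np.arctan(np.hypot(dzdx, dzdy))                  # local slope beta [rad]
            specific_area = (flow_accumulation + 1.0) * cell_size    # upslope area per unit contour length
            return np.log(specific_area / np.maximum(np.tan(slope), 1e-6))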

  12. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.

  13. Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations

    PubMed Central

    Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

    2008-01-01

    We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients prove that PMI has potential to become a diagnostic tool that could largely supplant DRE through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost. PMID:17024836

  14. Exposing digital image forgeries by 3D reconstruction technology

    NASA Astrophysics Data System (ADS)

    Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei

    2009-11-01

    Digital images are easy to tamper with and edit owing to the availability of powerful image processing and editing software. In particular, for images forged by photographing a staged scene, where no manipulation is made after capture, the usual methods such as digital watermarking and statistical correlation techniques can hardly detect traces of tampering. Based on the characteristics of such image forgeries, this paper presents a method, built on 3D reconstruction technology, that detects forgeries by examining the dimensional relationships of the objects appearing in the image. The detection method includes three steps. In the first step, the parameters of the images are calibrated and each crucial object in the image is chosen and matched. In the second step, the 3D coordinates of each object are calculated by bundle adjustment. In the final step, the dimensional relationships between the objects are analysed. Experiments were designed to test this detection method; the 3D reconstruction and the 3D reconstruction of the forged image were computed independently. Test results show that the fabricated content in digital forgeries can be identified intuitively by this method.

  15. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  16. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  17. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  18. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  19. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  20. 3D Image Reconstruction: Determination of Pattern Orientation

    SciTech Connect

    Blankenbecler, Richard

    2003-03-13

    The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can then be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.

  1. Accuracy of 3D Imaging Software in Cephalometric Analysis

    DTIC Science & Technology

    2013-06-21

    Imaging and Communication in Medicine (DICOM) files into personal computer-based software to enable 3D reconstruction of the craniofacial skeleton. These...tissue profile. CBCT data can be imported as DICOM files into personal computer-based software to provide 3D reconstruction of the craniofacial...been acquired for the three pig models. The CBCT data were exported into DICOM multi-file format. They will be imported into a proprietary

  2. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  3. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning the minimally invasive procedure, that is, for selection of an appropriate stent graft device for treatment of the AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all measurements required for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
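
    As a rough stand-in for the level-set evolution described above (the authors' own IDL/C implementation, including the a priori shape knowledge, is not reproduced here), a morphological geodesic active contour from scikit-image can grow a segmentation of a bright tubular structure from a seed point; the seed, radius and parameter values are hypothetical.

        import numpy as np
        from skimage.segmentation import (morphological_geodesic_active_contour,
                                          inverse_gaussian_gradient)

        def segment_aorta(volume, seed, radius=5, iterations=200):
            """Level-set style segmentation of a bright tubular structure in a 3-D CTA volume.

            `volume` is a 3-D array, `seed` a (z, y, x) point inside the aortic lumen."""
            # Edge-stopping image: small values near strong gradients (vessel wall).
            gimage = inverse_gaussian_gradient(volume.astype(float), alpha=100.0, sigma=2.0)
            # Initial level set: a small ball around the seed point.
            zz, yy, xx = np.indices(volume.shape)
            init = ((zz - seed[0]) ** 2 + (yy - seed[1]) ** 2 +
                    (xx - seed[2]) ** 2) <= radius ** 2
            # Evolve the contour; a positive balloon force expands it into the lumen.
            return morphological_geodesic_active_contour(gimage, iterations,
                                                         init_level_set=init,
                                                         smoothing=2, balloon=1)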

  4. Gastric Contraction Imaging System Using a 3-D Endoscope.

    PubMed

    Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Takeda, Maki; Nishimura, Takahiro; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2014-01-01

    This paper presents a gastric contraction imaging system for assessment of gastric motility using a 3-D endoscope. Gastrointestinal diseases are mainly based on morphological abnormalities. However, gastrointestinal symptoms are sometimes apparent without visible abnormalities. One of the major factors for these diseases is abnormal gastrointestinal motility. For assessment of gastric motility, a gastric motility imaging system is needed. To assess the dynamic motility of the stomach, the proposed system measures 3-D gastric contractions derived from a 3-D profile of the stomach wall obtained with a developed 3-D endoscope. After obtaining contraction waves, their frequency, amplitude, and speed of propagation can be calculated using a Gaussian function. The proposed system was evaluated for 3-D measurements of several objects with known geometries. The results showed that the surface profiles could be obtained with an error of [Formula: see text] of the distance between two different points on images. Subsequently, we evaluated the validity of a prototype system using a wave simulated model. In the experiment, the amplitude and position of waves could be measured with 1-mm accuracy. The present results suggest that the proposed system can measure the speed and amplitude of contractions. This system has low invasiveness and can assess the motility of the stomach wall directly in a 3-D manner. Our method can be used for examination of gastric morphological and functional abnormalities.
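
    As a minimal sketch of the Gaussian modelling of contraction waves mentioned above, the snippet below fits a Gaussian to a 1-D wall-depth profile with scipy.optimize.curve_fit. It assumes a wave profile has already been extracted from the 3-D stomach-wall reconstruction; the data and parameter names are illustrative, not the paper's procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, center, sigma, offset):
        """Gaussian model of a single contraction wave along the stomach wall."""
        return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

    # Illustrative wall-depth profile along one scan line (synthetic data).
    x = np.linspace(0.0, 60.0, 200)                   # position along wall [mm]
    rng = np.random.default_rng(0)
    profile = gaussian(x, 3.0, 25.0, 4.0, 0.5) + 0.1 * rng.normal(size=x.size)

    # Fit the model; the fitted amplitude is the wave amplitude, and tracking
    # the fitted center across frames yields the propagation speed.
    p0 = [profile.max() - profile.min(), x[np.argmax(profile)], 5.0, profile.min()]
    (amplitude, center, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
    print(f"amplitude = {amplitude:.2f} mm, center = {center:.1f} mm")
    ```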

  5. Three-dimensional reconstruction and characterization of human external shapes from two-dimensional images using volumetric methods.

    PubMed

    Azevedo, Teresa C S; Tavares, João Manuel R S; Vaz, Mário A P

    2010-06-01

    This work presents a volumetric approach to reconstruct and characterise 3D models of external anatomical structures from 2D images. Volumetric methods represent the final volume using a finite set of 3D geometric primitives, usually designated as voxels. Thus, from an image sequence acquired around the object to be reconstructed, the images are calibrated and the 3D models of the referred object are built using different approaches of volumetric methods. The final goal is to analyse the accuracy of the obtained models when modifying some of the parameters of the considered volumetric methods, such as the type of voxel projection (rectangular or accurate), the way the consistency of the voxels is tested (only silhouettes, or silhouettes and photo-consistency) and the initial size of the reconstructed volume.
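
    A minimal sketch of the silhouette-consistency test behind such voxel-based methods is given below. It assumes calibrated 3x4 projection matrices and binary silhouette images are already available, and it only projects voxel centres (neither the "rectangular vs. accurate" voxel projection nor the photo-consistency test from the abstract is shown); all names are illustrative.

    ```python
    import numpy as np

    def carve_voxels(voxel_centers, projections, silhouettes):
        """Keep voxels whose projected centres fall inside every silhouette.

        voxel_centers : (N, 3) array of voxel centre coordinates (world frame)
        projections   : list of 3x4 camera projection matrices (calibration)
        silhouettes   : list of binary images (True inside the object)
        """
        homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
        keep = np.ones(len(voxel_centers), dtype=bool)
        for P, sil in zip(projections, silhouettes):
            uvw = homog @ P.T                       # project all centres at once
            u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
            v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
            inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
            hit = np.zeros(len(voxel_centers), dtype=bool)
            hit[inside] = sil[v[inside], u[inside]]
            keep &= hit                             # consistent with this view?
        return keep
    ```

    A photo-consistency variant would additionally compare the colours sampled at the projected positions across views before keeping a voxel.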

  6. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278

  7. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical computed tomographic (CT) images a great facilitator of research in biomedical engineering. To keep pace with Internet-based technology, where 3D data are typically stored and processed on powerful servers accessed via TCP/IP, the isosurface results should be made broadly applicable to medical visualization. Furthermore, this project is a future part of the PACS system our lab is working on. In this system we therefore use the 3D file format VRML2.0, which is accessed through the Web interface for manipulating 3D models. In this program we generate and modify triangular isosurface meshes using the marching cubes algorithm. We then used OpenGL and MFC techniques to render the isosurface and manipulate the voxel data. This software is well suited to the visualization of volumetric data. The drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or for preprocessed data.
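
    A minimal sketch of the isosurface-extraction step is shown below, using scikit-image's marching cubes as a stand-in for the authors' implementation. The synthetic volume, threshold and output file name are illustrative, and the OBJ export is only a simple placeholder for the VRML2.0 export used in the paper.

    ```python
    import numpy as np
    from skimage import measure

    # Synthetic CT-like volume: a bright sphere inside a dark background.
    zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
    volume = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) / 200.0)

    # Extract a triangular isosurface mesh at the chosen threshold.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

    # Write a minimal Wavefront OBJ file (simple stand-in for VRML export).
    with open("isosurface.obj", "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for tri in faces + 1:                     # OBJ indices are 1-based
            f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")
    ```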

  8. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid-1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and to visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  9. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. For delineation of vasculatures in the carotid artery, ultrasound examinations have been widely employed because they are noninvasive and involve no ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study aims to propose image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat by using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected by using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. When the results are compared, the third technique provides relatively good boundary extraction. The incomplete boundaries of the arterial lumen caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex was largely reduced by using the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. For the 3D vasodilatation of the carotid bifurcation, lumen geometries at the contraction and expansion states were simultaneously depicted at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of the 3D lumen geometries of carotid bifurcations from 2D ultrasound images.
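
    A minimal sketch of the ellipse-fitting step is given below, assuming lumen boundary points have already been extracted (e.g., from the correlation map). scikit-image's EllipseModel stands in for the paper's fitting routine, and the sample points are synthetic.

    ```python
    import numpy as np
    from skimage.measure import EllipseModel

    # Illustrative boundary samples from a noisy elliptical lumen cross-section.
    rng = np.random.default_rng(1)
    theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
    xc, yc, a, b, tilt = 120.0, 95.0, 30.0, 22.0, 0.3
    x = xc + a * np.cos(theta) * np.cos(tilt) - b * np.sin(theta) * np.sin(tilt)
    y = yc + a * np.cos(theta) * np.sin(tilt) + b * np.sin(theta) * np.cos(tilt)
    points = np.column_stack([x, y]) + rng.normal(scale=1.5, size=(theta.size, 2))

    # Fit an ellipse to the (possibly incomplete) boundary samples.
    model = EllipseModel()
    if model.estimate(points):
        xc_fit, yc_fit, a_fit, b_fit, tilt_fit = model.params
        print(f"centre=({xc_fit:.1f}, {yc_fit:.1f}), axes=({a_fit:.1f}, {b_fit:.1f})")
    ```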

  10. Analysis of a 3-D system function measured for magnetic particle imaging.

    PubMed

    Rahmer, Jürgen; Weizenecker, Jürgen; Gleich, Bernhard; Borgert, Jörn

    2012-06-01

    Magnetic particle imaging (MPI) is a new tomographic imaging approach that can quantitatively map magnetic nanoparticle distributions in vivo. It is capable of volumetric real-time imaging at particle concentrations low enough to enable clinical applications. For image reconstruction in 3-D MPI, a system function (SF) is used, which describes the relation between the acquired MPI signal and the spatial origin of the signal. The SF depends on the instrumental configuration, the applied field sequence, and the magnetic particle characteristics. Its properties reflect the quality of the spatial encoding process. This work presents a detailed analysis of a measured SF to give experimental evidence that 3-D MPI encodes information using a set of 3-D spatial patterns or basis functions that is stored in the SF. This resembles filling 3-D k-space in magnetic resonance imaging, but is faster since all information is gathered simultaneously over a broad acquisition bandwidth. A frequency domain analysis shows that the finest structures that can be encoded with the presented SF are as small as 0.6 mm. SF simulations are performed to demonstrate that larger particle cores extend the set of basis functions towards higher resolution and that the experimentally observed spatial patterns require the existence of particles with core sizes of about 30 nm in the calibration sample. A simple formula is presented that qualitatively describes the basis functions to be expected at a certain frequency.

  11. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because the image conditions differ on the two aortic borders. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta. Such a 3-D model is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm due to its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In segmentation of the outer aortic boundary we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

  12. 3D quantitative analysis of brain SPECT images

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Ceskovic, Ivan; Petrovic, Ratimir; Loncaric, Srecko

    2001-07-01

    The main purpose of this work is to develop a computer-based technique for quantitative analysis of 3-D brain images obtained by single photon emission computed tomography (SPECT). In particular, the volume and location of the ischemic lesion and penumbra are important for early diagnosis and treatment of infarcted regions of the brain. SPECT imaging is typically used as a diagnostic tool to assess the size and location of the ischemic lesion. The segmentation method presented in this paper utilizes a 3-D deformable model in order to determine the size and location of the regions of interest. The evolution of the model is computed using a level-set implementation of the algorithm. In addition to the 3-D deformable model, the method utilizes edge detection and region growing for pre-processing. Initial experimental results have shown that the method is useful for SPECT image analysis.

  13. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can be therefore compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was equal to 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
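
    A small geometric sketch of the final angle calculation is given below, assuming the femoral head centres and the sacral endplate centre and normal have already been extracted in 3D by the automated method: PI is then the angle between the endplate normal and the line joining the endplate centre to the hip-axis midpoint. The coordinates in the example are arbitrary illustrations, not study data.

    ```python
    import numpy as np

    def pelvic_incidence(endplate_center, endplate_normal,
                         femoral_head_l, femoral_head_r):
        """Angle (degrees) between the sacral endplate normal and the line
        joining the endplate centre to the midpoint of the hip axis."""
        hip_mid = 0.5 * (np.asarray(femoral_head_l) + np.asarray(femoral_head_r))
        line = hip_mid - np.asarray(endplate_center)
        n = np.asarray(endplate_normal)
        cos_pi = abs(np.dot(line, n)) / (np.linalg.norm(line) * np.linalg.norm(n))
        return np.degrees(np.arccos(np.clip(cos_pi, -1.0, 1.0)))

    # Illustrative landmark coordinates in mm (not from the study).
    print(pelvic_incidence(endplate_center=[0.0, 0.0, 0.0],
                           endplate_normal=[0.0, 0.3, 1.0],
                           femoral_head_l=[-40.0, 60.0, -30.0],
                           femoral_head_r=[40.0, 60.0, -30.0]))
    ```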

  14. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed and the results were compared to those obtained from 2D digital mammograms. 16 mastectomy breast specimens were imaged with a bench top flat-panel based CBCT system. The reconstructed 3D CT images were corrected for the cupping artifacts and then filtered to reduce the noise level, followed by using threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, volumes of the dense tissue structures and the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered as dense in mammographic examinations may not be considered as dense with the CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs to be further investigated.
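
    A minimal sketch of the volumetric density calculation follows, assuming a cupping-corrected CBCT volume and a whole-breast mask are already available. A median filter and Otsu's threshold stand in for the paper's noise filtering and threshold selection, and the toy volume is synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.filters import threshold_otsu

    def volumetric_breast_density(volume, breast_mask):
        """Fraction of breast voxels classified as fibroglandular (dense) tissue.

        volume      : cupping-corrected CBCT volume (attenuation values)
        breast_mask : boolean mask of the whole breast (excludes air/background)
        """
        smoothed = median_filter(volume, size=3)         # simple noise reduction
        thresh = threshold_otsu(smoothed[breast_mask])   # adipose/dense cut-off
        dense = (smoothed > thresh) & breast_mask
        return dense.sum() / breast_mask.sum()

    # Toy volume: roughly 30% of the "breast" voxels are high-attenuation.
    rng = np.random.default_rng(2)
    vol = rng.normal(0.20, 0.01, size=(40, 40, 40))
    vol[:, :, :12] = rng.normal(0.45, 0.01, size=(40, 40, 12))
    mask = np.ones_like(vol, dtype=bool)
    print(f"volumetric density = {volumetric_breast_density(vol, mask):.2f}")
    ```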

  15. Oxygen- and Nitrogen-Enriched 3D Porous Carbon for Supercapacitors of High Volumetric Capacity.

    PubMed

    Li, Jia; Liu, Kang; Gao, Xiang; Yao, Bin; Huo, Kaifu; Cheng, Yongliang; Cheng, Xiaofeng; Chen, Dongchang; Wang, Bo; Sun, Wanmei; Ding, Dong; Liu, Meilin; Huang, Liang

    2015-11-11

    Efficient utilization and broader commercialization of alternative energies (e.g., solar, wind, and geothermal) hinge on the performance and cost of energy storage and conversion systems. For now and in the foreseeable future, the combination of rechargeable batteries and electrochemical capacitors remains the most promising option for many energy storage applications. Porous carbonaceous materials have been widely used as electrodes for batteries and supercapacitors. To date, however, the highest specific capacitance of an electrochemical double-layer capacitor is only ∼200 F/g, although a wide variety of synthetic approaches have been explored in creating optimized porous structures. Here, we report our findings on the synthesis of porous carbon through a simple, one-step process: direct carbonization of kelp in an NH3 atmosphere at 700 °C. The resulting oxygen- and nitrogen-enriched carbon has a three-dimensional structure with a specific surface area greater than 1000 m²/g. When evaluated as an electrode for electrochemical double-layer capacitors, the porous carbon structure demonstrated excellent volumetric capacitance (>360 F/cm³) with excellent cycling stability. This simple approach to low-cost carbonaceous materials with unique architecture and functionality could be a promising alternative for fabrication of porous carbon structures for many practical applications, including batteries and fuel cells.

  16. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.

  17. Episcopic 3D Imaging Methods: Tools for Researching Gene Function

    PubMed Central

    Weninger, Wolfgang J; Geyer, Stefan H

    2008-01-01

    This work aims at describing episcopic 3D imaging methods and at discussing how these methods can contribute to researching the genetic mechanisms driving embryogenesis and tissue remodelling, and the genesis of pathologies. Several episcopic 3D imaging methods exist. The most advanced are capable of generating high-resolution volume data (voxel sizes from 0.5x0.5x1 µm upwards) of small to large embryos of model organisms and tissue samples. Besides anatomy and tissue architecture, gene expression and gene product patterns can be analyzed three-dimensionally in their precise anatomical and histological context with the aid of whole mount in situ hybridization or whole mount immunohistochemical staining techniques. Episcopic 3D imaging techniques were and are employed for analyzing the precise morphological phenotype of experimentally malformed, randomly produced, or genetically engineered embryos of biomedical model organisms. It has been shown that episcopic 3D imaging is also suited to describing the spatial distribution of genes and gene products during embryogenesis, and that it can be used for analyzing tissue samples of adult model animals and humans. The latter offers the possibility of using episcopic 3D imaging techniques for researching the causality and treatment of pathologies or for staging cancer. Such applications, however, are not yet routine and currently only preliminary results are available. We conclude that, although episcopic 3D imaging is in its very beginnings, it represents an upcoming methodology which in the short term will become an indispensable tool for researching the genetic regulation of embryo development as well as the genesis of malformations and diseases. PMID:19452045

  18. Proposed traceable structural resolution protocols for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François

    2009-08-01

    A protocol for determining structural resolution using a potentially-traceable reference material is proposed. Where possible, terminology was selected to conform to those published in ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, that obtain spatial data from the total field of view at once, and 3D range scanners, that accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.

  19. 3D volumetric modeling of grapevine biomass using Tripod LiDAR

    USGS Publications Warehouse

    Keightley, K.E.; Bawden, G.W.

    2010-01-01

    Tripod-mounted laser scanning provides the means to generate high-resolution volumetric measures of vegetation structure and perennial woody tissue for the calculation of standing biomass in agronomic and natural ecosystems. Other than costly destructive harvest methods, no technique exists to rapidly and accurately measure above-ground perennial tissue for woody plants such as Vitis vinifera (common grape vine). Data collected from grapevine trunks and cordons were used to study the accuracy of wood volume derived from laser scanning as compared with volume derived from analog measurements. A set of 10 laser scan datasets was collected for each of 36 vines, from which volume was calculated using combinations of two, three, four, six and 10 scans. Likewise, analog volume measurements were made by submerging the vine trunks and cordons in water and capturing the displaced water. A regression analysis examined the relationship between digital and non-digital techniques among the 36 vines and found that the standard error drops rapidly as additional scans are added to the volume calculation process and stabilizes at the four-view geometry, with an average Pearson's product moment correlation coefficient of 0.93. Estimates of digital volumes are systematically greater than those of analog volumes, which can be explained by the manner in which each technique interacts with the vine tissue. This laser scanning technique yields a highly linear relationship between vine volume and tissue mass, revealing a new, rapid and non-destructive method to remotely measure standing biomass. This application shows promise for use in other ecosystems such as orchards and forests. © 2010 Elsevier B.V.

  20. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. However, with the 3D GS method, reconstructions of binary input images suffer from serious distortion. We have eliminated the distortion and improved the image quality of the reconstructions by a maximum of 486%, using a symmetrical 3D GS algorithm developed from the traditional 3D GS algorithm. In addition, the hologram computation has been accelerated by a factor of 9.28, which is significant for real-time holographic displays.
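
    The paper's symmetrical 3D variant is not reproduced here; the sketch below only illustrates the classical two-plane Gerchberg-Saxton loop (uniform hologram amplitude, FFT propagation, amplitude constraint in the target plane) on which such methods build. The binary square target and all names are illustrative.

    ```python
    import numpy as np

    def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
        """Classical two-plane GS: find a hologram-plane phase whose far-field
        amplitude approximates `target_amplitude` (uniform hologram amplitude)."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
        hologram_amp = np.ones_like(target_amplitude)
        for _ in range(n_iter):
            field = hologram_amp * np.exp(1j * phase)            # hologram plane
            far = np.fft.fft2(field)                             # propagate
            far = target_amplitude * np.exp(1j * np.angle(far))  # impose target
            back = np.fft.ifft2(far)                             # propagate back
            phase = np.angle(back)                               # keep phase only
        return phase

    # Binary target image (a bright square), as in the distortion test case.
    target = np.zeros((128, 128))
    target[48:80, 48:80] = 1.0
    cgh_phase = gerchberg_saxton(target)
    reconstruction = np.abs(np.fft.fft2(np.exp(1j * cgh_phase)))
    ```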

  1. SU-E-T-624: Quantitative Evaluation of 2D Versus 3D Dosimetry for Stereotactic Volumetric Modulated Arc Delivery Using COMPASS

    SciTech Connect

    Vikraman, S; Karrthick, K; Rajesh, T; Sambasivaselli, R; Senniandanvar, V; Kataria, T; Manigandan, D; Karthikeyan, N; Muthukumaran, M

    2014-06-15

    Purpose: The purpose of this study was to quantitatively evaluate 2D versus 3D dosimetry for stereotactic volumetric modulated arc delivery using COMPASS with a 2D array. Methods: CT images and RT structures of twenty-five patients with lesions at different sites (brain, head and neck, thorax, abdomen and spine) were taken from the Multiplan planning system for this study. All these patients underwent radical stereotactic treatment with CyberKnife. For each patient, linac-based VMAT stereotactic plans were generated in Monaco TPS v3.1 using the Elekta Beam Modulator MLC. Dose prescription was in the range of 5-20 Gy/fraction. TPS-calculated VMAT plan delivery accuracy was quantitatively evaluated against COMPASS-measured dose and COMPASS-calculated dose based on DVH metrics. In order to ascertain the potential of COMPASS 3D dosimetry for stereotactic plan delivery, 2D fluence verification was performed with MatriXX using Multicube. Results: For each site, D95 was achieved with 100% of the prescription dose with a maximum of 0.05 SD. The conformity index (CI) was observed to be close to 1.15 in all cases. A maximum deviation of 2.62% was observed for D95 when TPS was compared with COMPASS measurements. Considerable deviations were observed in head and neck cases compared to other sites. The maximum mean and standard deviation for D95, average target dose and average gamma were -0.78±1.72, -1.10±1.373 and 0.39±0.086, respectively. The number of pixels passing 2D fluence verification was observed as a mean of 99.36% ±0.455 SD with 3% dose difference and 3 mm DTA. For critical organs in head and neck cases, significant dose differences were observed in 3D dosimetry, while the target doses matched well within limits in both 2D and 3D dosimetry. Conclusion: The quantitative evaluation of 2D versus 3D dosimetry for stereotactic volumetric modulated plans showed the potential of highlighting delivery errors. This study reveals that COMPASS 3D dosimetry is an effective tool for patient

  2. Deep Learning Segmentation of Optical Microscopy Images Improves 3D Neuron Reconstruction.

    PubMed

    Li, Rongjian; Zeng, Tao; Peng, Hanchuan; Ji, Shuiwang

    2017-03-08

    Digital reconstruction, or tracing, of 3-dimensional (3D) neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or have discontinued segments of neurite patterns. An approach for addressing such problems is to identify the locations of neuronal voxels using image segmentation methods prior to applying tracing or reconstruction techniques. This preprocessing step is expected to remove noise in the data, thereby leading to improved reconstruction results. In this work, we propose to use 3D convolutional neural networks (CNNs) for segmenting the neuronal microscopy images. Specifically, we designed a novel CNN architecture that takes volumetric images as the inputs and their voxel-wise segmentation maps as the outputs. The developed architecture allows us to train and predict using large microscopy images in an end-to-end manner. We evaluated the performance of our model on a variety of challenging 3D microscopy images from different organisms. Results showed that the proposed methods improved the tracing performance significantly when combined with different reconstruction algorithms.
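
    A minimal sketch of a fully convolutional 3D network with voxel-wise outputs is shown below in PyTorch. This is not the authors' architecture; the class name, layer sizes and input shape are made up, and it only illustrates the volume-in, voxel-wise-segmentation-out design described in the abstract.

    ```python
    import torch
    import torch.nn as nn

    class TinyVoxelNet(nn.Module):
        """Minimal fully convolutional 3D network: volume in, voxel logits out."""
        def __init__(self, in_channels=1, n_classes=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(16, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(16, n_classes, kernel_size=1),  # per-voxel class scores
            )

        def forward(self, x):         # x: (batch, channels, depth, height, width)
            return self.net(x)

    model = TinyVoxelNet()
    volume = torch.randn(1, 1, 32, 64, 64)       # one single-channel sub-volume
    logits = model(volume)                       # (1, 2, 32, 64, 64)
    segmentation = logits.argmax(dim=1)          # voxel-wise label map
    ```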

  3. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems typically acquire images that are multichannel (dual- or multi-polarization, multi- and hyperspectral), where noise, usually with different characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown itself to have higher efficiency if there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in processing of images with a rather large number of channels.
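
    A simplified sketch of 3D DCT-domain denoising of a multichannel stack is given below. It transforms the whole array at once rather than the small blocks typically used in DCT-based filters, and the hard threshold of 2.7·sigma is a commonly used setting rather than necessarily the paper's; data and names are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct3d_denoise(cube, sigma, k=2.7):
        """Hard-threshold denoising of a 3-D data array in the DCT domain.

        cube  : (channels, rows, cols) array, e.g. co-registered spectral bands
        sigma : additive noise standard deviation
        k     : threshold multiplier (k * sigma is the hard-threshold level)
        """
        coeffs = dctn(cube, norm='ortho')            # 3-D separable DCT
        coeffs[np.abs(coeffs) < k * sigma] = 0.0     # suppress noise-level coeffs
        return idctn(coeffs, norm='ortho')

    # Toy multichannel image: identical structure in every channel plus noise.
    rng = np.random.default_rng(3)
    clean = np.tile(np.outer(np.hanning(64), np.hanning(64)), (8, 1, 1))
    noisy = clean + rng.normal(scale=0.05, size=clean.shape)
    denoised = dct3d_denoise(noisy, sigma=0.05)
    ```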

  4. 3D EFT imaging with planar electrode array: Numerical simulation

    NASA Astrophysics Data System (ADS)

    Tuykin, T.; Korjenevsky, A.

    2010-04-01

    Electric field tomography (EFT) is a new modality of quasistatic electromagnetic sounding of conductive media, recently investigated theoretically and realized experimentally. The demonstrated results pertain to 2D imaging with circular or linear arrays of electrodes (and the linear array provides quite poor imaging quality). In many applications 3D imaging is essential or can increase the value of the investigation significantly. In this report we present the first results of numerical simulation of an EFT imaging system with a planar array of electrodes, which allows 3D visualization of the subsurface conductivity distribution. The geometry of the system is similar to the geometry of our EIT breast imaging system, which provides 3D conductivity imaging in the form of a set of cross-sections at different depths from the surface. The EFT principle of operation and reconstruction approach differ significantly from those of the EIT system, so the results of numerical simulation are important for estimating whether comparable imaging quality is possible with the new contactless method. The EFT forward problem is solved using the finite difference time domain (FDTD) method for an 8×8 square electrode array. The calculated measurement results are then used to reconstruct conductivity distributions by filtered backprojection along electric field lines. The reconstructed images of simple test objects are presented.

  5. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background: Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data, delivering compressed tiled images that enable users to browse through very large volume data in the context of a standard web browser. The system provides interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results: The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/JavaScript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135 GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour filtering and overlays. Conclusions: Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole-image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135 GB for a single image volume. PMID:22676296

  6. Efficient physics-based predictive 3D image modeling and simulation of optical atmospheric refraction phenomena

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Hammel, Stephen M.; Tsintikidis, Dimitris

    2016-09-01

    We present preliminary results and discussion of our ongoing effort to develop a prototype volumetric atmospheric optical refraction simulator, which uses 3D nonlinear ray-tracing and state-of-the-art physics-based rendering techniques. The tool will allow simulation of optical curved-ray propagation through nonlinear refractivity gradient profiles in volumetric atmospheric participating media, and the generation of radiometrically accurate images of the resulting atmospheric refraction phenomena, including inferior and superior mirages, over-the-horizon viewing conditions, looming and sinking, and towering and stooping of distant objects. The ability to accurately model and predict atmospheric optical refraction conditions and phenomena is important in both defense and commercial applications. Our nonlinear refractive ray-trace method is currently CPU-parallelized and is well suited for GPU compute implementation.

  7. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

  8. A miniature real-time volumetric ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Wygant, Ira O.; Yeh, David T.; Zhuang, Xuefeng; Nikoozadeh, Amin; Oralkan, Omer; Ergun, Arif S.; Karaman, Mustafa; Khuri-Yakub, Butrus T.

    2005-04-01

    Progress made in the development of a miniature real-time volumetric ultrasound imaging system is presented. This system is targeted for use in a 5-mm endoscopic channel and will provide real-time, 30-mm deep, volumetric images. It is being developed as a clinically useful device, to demonstrate a means of integrating the front-end electronics with the transducer array, and to demonstrate the advantages of the capacitive micromachined ultrasonic transducer (CMUT) technology for medical imaging. Presented here is the progress made towards the initial implementation of this system, which is based on a two-dimensional, 16x16 CMUT array. Each CMUT element is 250 um by 250 um and has a 5 MHz center frequency. The elements are connected to bond pads on the back side of the array with 400-um long through-wafer interconnects. The transducer array is flip-chip bonded to a custom-designed integrated circuit that comprises the front-end electronics. The result is that each transducer element is connected to a dedicated pulser and low-noise preamplifier. The pulser generates 25-V, 100-ns wide, unipolar pulses. The preamplifier has an approximate transimpedance gain of 500 kOhm and 3-dB bandwidth of 10 MHz. In the first implementation of the system, one element at a time can be selected for transmit and receive and thus synthetic aperture images can be generated. In future implementations, 16 channels will be active at a given time. These channels will connect to an FPGA-based data acquisition system for real-time image reconstruction.

  9. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531
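
    As a toy illustration of the "masking" idea applied within the similarity metric, the sketch below uses per-pixel weights to remove or down-weight regions (e.g., surgical tools) from the comparison between a DRR and a radiograph. A weighted normalised cross-correlation is used here only as a simple stand-in for the gradient-based metrics typically used in 3D-2D registration; the images and names are synthetic.

    ```python
    import numpy as np

    def masked_similarity(drr, radiograph, weights):
        """Weighted normalised cross-correlation between a DRR and a radiograph.

        weights : per-pixel weights in [0, 1]; 0 removes a pixel from the metric
                  (e.g. regions occluded by surgical tools), 1 keeps it.
        """
        w = weights / weights.sum()
        mu_d = np.sum(w * drr)
        mu_r = np.sum(w * radiograph)
        cov = np.sum(w * (drr - mu_d) * (radiograph - mu_r))
        var_d = np.sum(w * (drr - mu_d) ** 2)
        var_r = np.sum(w * (radiograph - mu_r) ** 2)
        return cov / np.sqrt(var_d * var_r + 1e-12)

    # Toy example: a tool-like bright stripe corrupts part of the radiograph.
    rng = np.random.default_rng(4)
    drr = rng.random((64, 64))
    radiograph = drr + rng.normal(scale=0.05, size=drr.shape)
    radiograph[:, 28:34] = 2.0                       # simulated surgical tool
    mask = np.ones_like(drr)
    mask[:, 28:34] = 0.0                             # down-weight the tool region
    print(masked_similarity(drr, radiograph, np.ones_like(drr)),  # unmasked
          masked_similarity(drr, radiograph, mask))               # masked
    ```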

  10. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.

    PubMed

    Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  11. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  12. HPIV based volumetric 3D flow description in the roughness sublayer of a turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Talapatra, Siddharth; Katz, Joseph

    2011-11-01

    Microscopic HPIV is utilized to resolve the 3D flow in the roughness sublayer of a boundary layer over a rough wall at Reτ = 3400, consisting of pyramidal elements with height k = 0.45 mm and 3.3 mm wavelength. Typically, ~7000 particles are tracked in a 3.2 × 2.1 × 1.8 mm³ volume, whose wall-normal extent is -0.2 < y/k < 4.67, y = 0 being the roughness peak. These measurements are facilitated by matching the refractive index of the fluid with that of the acrylic rough wall. Results show that the sublayer is flooded by complex coherent structures with scales between 1k and 2k. They are mostly aligned with the roughness grooves, but some wrap around the pyramids and are stretched to a streamwise orientation by a relatively fast channeling flow that develops between the pyramid ridgelines. Occasionally, structures eject away from the roughness sublayer at a steep angle to the mean flow. Using the 300 realizations processed so far, the spatial variations in mean velocity and Reynolds stresses are compared to 2D PIV results, and trends generally (but not always) agree. In particular, there is a rapid increase in all Reynolds stress components close to y = 0. Conditional sampling is used to extract statistically significant structures. Sponsored by ONR (grant No. 000140-91-10-0-7).

  13. Added Value of 3D Proton-Density Weighted Images in Diagnosis of Intracranial Arterial Dissection

    PubMed Central

    Kim, Jin Woo; Kim, Young Dae; Lee, Seung-Koo; Lim, Soo Mee; Oh, Se Won

    2016-01-01

    Background An early and reliable diagnosis of intracranial arterial dissection is important to reduce the risk of neurological complication. The purpose of this study was to assess the clinical usefulness of three-dimensional high-resolution MRI (3D-HR-MRI) including pre- and post-contrast T1-weighted volumetric isotropic turbo spin echo acquisition with improved motion-sensitized driven equilibrium preparation (3D-iMSDE-T1) and proton-density weighted image (3D-PD) in detecting dissection and to evaluate the added value of 3D-PD in diagnosing intracranial arterial dissection. Methods We retrospectively recruited patients who underwent 3D-HR-MRI with clinical suspicion of arterial dissection. Among them, we selected patients who were diagnosed with definite dissection according to the Spontaneous Cervicocephalic Arterial Dissections Study criteria. For each patient, the presence of intimal flap, intramural hematoma, and vessel dilatation were evaluated independently by two neuroradiologists on each sequence. Interobserver agreement was assessed. Results Seventeen patients (mean age: 41 ± 10 [SD] years; 13 men) were diagnosed with definite dissection. The intimal flaps were more frequently detected on 3D-PD (88.2%, 15/17) than on 3D-iMSDE-T1 (29.4%, 5/17), and post-contrast 3D-iMSDE-T1 (35.3%, 6/17; P = 0.006 and P = 0.004, respectively). No significant difference was found in the detection rate of intramural hematomas (59–71%) and vascular dilatations (47%) on each sequence. Interobserver agreement for detection of dissection findings showed almost perfect agreement (k = 0.84–1.00), except for detection of intimal flaps on pre-contrast 3D-iMSDE-T1 (k = 0.62). After addition of 3D-PD to pre- and post-contrast 3D-iMSDE-T1, more patients were diagnosed with definite dissection with the initial MRI (88.2% vs. 47.1%; P = 0.039). Conclusions The intimal flap might be better visualized on the 3D-PD sequence than the 3D-iMSDE-T1 sequences, allowing diagnosis of

  14. Development of the 3D volumetric micro-CT scanner for preclinical animals

    NASA Astrophysics Data System (ADS)

    Kim, Kyong-Woo; Kim, Kyu-Gyeom; Kim, Jae-Hee; Min, Jong-Hwan; Lee, Hee-Sin; Lee, Joonwhoan

    2011-06-01

    A high-resolution micro computed tomography (micro-CT) system for live small animal imaging has been developed. The system consists of an x-ray source with a micro focal spot and high brightness, a rotating gantry carrying the x-ray tube and flat-panel detector pair, and a stationary, horizontally positioned small-animal bed, to achieve a cone-beam scan. The system is optimized for in vivo small animal imaging and allows respiratory anesthesia to be administered during scanning. The Feldkamp algorithm was adopted for image reconstruction on a graphics processing unit (GPU). We evaluated the spatial resolution, image contrast, and uniformity of the system using phantoms. As a result, the spatial resolution of the system was 56 lp/mm at 10% of the MTF curve, and the radiation dose to the sample was 98 mGy. The minimal resolvable contrast was found to be less than 46 CT numbers on the low-contrast phantom. We present test images of the bone, lung, and heart of live mice.

  15. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.

  16. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving survey of potential EVA sites, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotics Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained and discusses its application in lunar surface robotic surveying and scouting.
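
    A small sketch of how scanned range and bearing measurements become a 3D point cloud follows: one-way range is obtained from the round-trip pulse time-of-flight, and a spherical-to-Cartesian conversion places each return in space. The angle conventions and sample values are illustrative, not ILRIS-3D specifics.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light [m/s]

    def lidar_to_points(time_of_flight, azimuth, elevation):
        """Convert pulse time-of-flight and beam bearing into 3D points.

        time_of_flight : round-trip pulse times [s]
        azimuth        : horizontal scan angles [rad]
        elevation      : vertical scan angles [rad]
        """
        r = 0.5 * C * np.asarray(time_of_flight)     # one-way range [m]
        x = r * np.cos(elevation) * np.cos(azimuth)
        y = r * np.cos(elevation) * np.sin(azimuth)
        z = r * np.sin(elevation)
        return np.column_stack([x, y, z])

    # Example: three returns forming a small patch roughly 10 m from the scanner.
    tof = np.array([66.7e-9, 67.0e-9, 66.9e-9])
    pts = lidar_to_points(tof,
                          np.radians([0.0, 0.5, 1.0]),
                          np.radians([0.0, 0.0, 0.5]))
    ```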

  17. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  18. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  19. Automated localization of implanted seeds in 3D TRUS images used for prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2006-07-15

    In this paper, an algorithm is developed to localize implanted radioactive seeds in 3D ultrasound images for a dynamic intraoperative brachytherapy procedure. Segmentation of the seeds is difficult due to their small size and the relatively low quality of transrectal ultrasound (TRUS) images. Intraoperative seed segmentation in 3D TRUS images is achieved by subtracting the image acquired before the needle is inserted from the image acquired after the seeds have been implanted. The seeds are then searched for in a 'local' region determined by the needle position and orientation, which are obtained from a needle segmentation algorithm. To test this approach, 3D TRUS images of agar and chicken tissue phantoms were obtained. Within these phantoms, dummy seeds were implanted. The seed locations determined by the seed segmentation algorithm were compared with those obtained from a volumetric cone-beam flat-panel micro-CT scanner and human observers. Evaluation of the algorithm showed that the rms error in determining the seed locations using the seed segmentation algorithm was 0.98 mm in agar phantoms and 1.02 mm in chicken phantoms.
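
    The pre/post-implant subtraction and 'local' search idea can be sketched as follows; the search radius, intensity threshold and needle representation are placeholders rather than the values used in the paper.

        import numpy as np

        def find_seed_candidates(pre_vol, post_vol, needle_pts, radius_vox=5, thresh=30):
            """Subtract the pre-insertion volume from the post-implant volume and keep
            bright difference voxels near the segmented needle path (hypothetical API)."""
            diff = post_vol.astype(np.float32) - pre_vol.astype(np.float32)
            mask = np.zeros(diff.shape, dtype=bool)
            zz, yy, xx = np.indices(diff.shape)
            for p in needle_pts:                          # voxel coordinates along the needle
                dist2 = (zz - p[0])**2 + (yy - p[1])**2 + (xx - p[2])**2
                mask |= dist2 <= radius_vox**2            # 'local' search space around the needle
            return np.argwhere((diff > thresh) & mask)    # voxels that brightened after implantation

        # Toy example: one bright seed appears at (10, 12, 14) near a straight needle track.
        pre = np.zeros((32, 32, 32)); post = pre.copy(); post[10, 12, 14] = 100
        needle = [(10, 12, k) for k in range(10, 20)]
        print(find_seed_candidates(pre, post, needle))    # -> [[10 12 14]]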

  20. Ultra wide band millimeter wave holographic "3-D" imaging of concealed targets on mannequins

    SciTech Connect

    Collins, H.D.; Hall, T.E.; Gribble, R.P.

    1994-08-01

    Ultra wide band (chirp frequency) millimeter wave "3-D" holography is a unique technique for imaging concealed targets on human subjects with extremely high lateral and depth resolution. Recent "3-D" holographic images of full size mannequins with concealed weapons illustrate the efficacy of this technique for airport security. A chirp frequency (24 GHz to 40 GHz) holographic system was used to construct extremely high resolution images (optical quality) using polyrod antennas in a bi-static configuration with an x-y scanner. Millimeter wave chirp frequency holography can be simply described as a multi-frequency detection and imaging technique where the target's reflected signals are decomposed into discrete frequency holograms and reconstructed into a single composite "3-D" image. The implementation of this technology for security at airports, government installations, etc., will require real-time (video rate) data acquisition and computer image reconstruction of large volumetric data sets. This implies rapid scanning techniques or large, complex "2-D" arrays and high-speed computing for successful commercialization of this technology.

  1. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
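
    The decomposition into sequential 2D registrations can be sketched as below. This is a simplified stand-in: only the central slice of each orthogonal view is registered, and the per-view results are not composed back into a single 3D transformation matrix as the full method would do; SciPy's Powell optimizer plays the role of the search engine named above.

        import numpy as np
        from scipy.ndimage import rotate, shift
        from scipy.optimize import minimize

        def ssd(a, b):
            return float(np.sum((a - b) ** 2))            # Sum of Squared Differences

        def register_2d(fixed, moving):
            """Find (dy, dx, angle_deg) minimising SSD between two 2D views (Powell search)."""
            def cost(p):
                dy, dx, ang = p
                warped = shift(rotate(moving, ang, reshape=False, order=1), (dy, dx), order=1)
                return ssd(fixed, warped)
            return minimize(cost, x0=[0.0, 0.0, 0.0], method="Powell").x

        def pseudo_3d_register(fixed_vol, moving_vol):
            """One pass over the three orthogonal mid-planes. The full scheme would
            resample the moving volume with each recovered transform before
            registering the next view, and iterate until convergence."""
            results = {}
            for axis, name in enumerate(("transaxial", "sagittal", "coronal")):
                mid = fixed_vol.shape[axis] // 2
                results[name] = register_2d(np.take(fixed_vol, mid, axis=axis),
                                            np.take(moving_vol, mid, axis=axis))
            return results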

  2. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high-quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
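
    The z-buffer route to a stereo pair can be illustrated with a toy depth-image-based render; the depth-to-disparity mapping and the lack of hole filling are deliberate simplifications, not the driver's actual algorithm.

        import numpy as np

        def dibr_stereo_pair(img, depth, max_disparity_px=16):
            """Render a left/right pair from one view plus its normalised (0..1) z-buffer.
            Nearer pixels receive larger horizontal disparity; this simple forward warp
            leaves disocclusion holes that a real renderer would fill."""
            h, w = depth.shape
            disparity = (max_disparity_px * (1.0 - depth) / 2.0).astype(int)  # near -> big shift
            left, right = np.zeros_like(img), np.zeros_like(img)
            cols = np.arange(w)
            for y in range(h):
                xl = np.clip(cols + disparity[y], 0, w - 1)   # shift right for the left eye
                xr = np.clip(cols - disparity[y], 0, w - 1)   # shift left for the right eye
                left[y, xl] = img[y, cols]
                right[y, xr] = img[y, cols]
            return left, right

        # Toy usage: a 4x4 image whose nearest row (depth 0) is shifted the most.
        img = np.arange(16, dtype=np.uint8).reshape(4, 4)
        z = np.tile(np.linspace(0.0, 1.0, 4), (4, 1)).T       # row 0 nearest, row 3 farthest
        left, right = dibr_stereo_pair(img, z, max_disparity_px=2)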

  3. X-ray volumetric imaging in image-guided radiotherapy: The new standard in on-treatment imaging

    SciTech Connect

    McBain, Catherine A.; Henry, Ann M. . E-mail: catherine.mcbain@christie-tr.nwest.nhs.uk; Sykes, Jonathan; Amer, Ali; Marchant, Tom; Moore, Christopher M.; Davies, Julie; Stratford, Julia; McCarthy, Claire; Porritt, Bridget; Williams, Peter; Khoo, Vincent S.; Price, Pat

    2006-02-01

    Purpose: X-ray volumetric imaging (XVI) for the first time allows for the on-treatment acquisition of three-dimensional (3D) kV cone beam computed tomography (CT) images. Clinical imaging using the Synergy System (Elekta, Crawley, UK) commenced in July 2003. This study evaluated image quality and dose delivered and assessed clinical utility for treatment verification at a range of anatomic sites. Methods and Materials: Single XVIs were acquired from 30 patients undergoing radiotherapy for tumors at 10 different anatomic sites. Patients were imaged in their setup position. Radiation doses received were measured using TLDs on the skin surface. The utility of XVI in verifying target volume coverage was qualitatively assessed by experienced clinicians. Results: X-ray volumetric imaging acquisition was completed in the treatment position at all anatomic sites. At sites where a full gantry rotation was not possible, XVIs were reconstructed from projection images acquired from partial rotations. Soft-tissue definition of organ boundaries allowed direct assessment of 3D target volume coverage at all sites. Individual image quality depended on both imaging parameters and patient characteristics. Radiation dose ranged from 0.003 Gy in the head to 0.03 Gy in the pelvis. Conclusions: On-treatment XVI provided 3D verification images with soft-tissue definition at all anatomic sites at acceptably low radiation doses. This technology sets a new standard in treatment verification and will facilitate novel adaptive radiotherapy techniques.

  4. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was

  5. Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner.

    PubMed

    Shah, Aj; Wollak, C; Shah, J B

    2013-12-01

    The statistics on the growing number of non-healing wounds are alarming. In the United States, chronic wounds affect 6.5 million patients. An estimated US $25 billion is spent annually on treatment of chronic wounds, and the burden is rapidly growing due to increasing health care costs, an aging population and a sharp rise in the incidence of diabetes and obesity worldwide.(1) Accurate wound measurement techniques will help health care personnel to monitor wounds, which will indirectly help improve care.(7,9) The clinical practice of measuring wounds has not improved even today.(2,3) A common method like the ruler method has poor interrater and intrarater reliability.(2,3) Measuring the greatest length by the greatest width perpendicular to the greatest length, the perpendicular method, is more valid and reliable than other ruler-based methods.(2) Another common method, acetate tracing, is more accurate than the ruler method but still has its disadvantages. These common measurement techniques are time consuming and have variable inaccuracies. In this study, volumetric measurements taken with a non-contact 3-D scanner are benchmarked against the common ruler method, acetate grid tracing, and a 2-D image planimetry volumetric measurement technique. A liquid volumetric fill method is used as the control volume. Results support the hypothesis that the 3-D scanner consistently shows accurate volumetric measurements in comparison to standard volumetric measurements obtained by the waterfill technique (average difference of 11%). The 3-D scanner measurement technique was found to be more reliable and valid than the other three techniques: the ruler method (average difference of 75%), acetate grid tracing (average difference of 41%), and 2D planimetric measurements (average difference of 52%). Acetate tracing showed more accurate measurements than the ruler method (average difference of 41% for acetate tracing compared to 75% for the ruler method). Improving

  6. Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner

    PubMed Central

    Shah, Aj; Wollak, C.; Shah, J.B.

    2015-01-01

    The statistics on the growing number of non-healing wounds are alarming. In the United States, chronic wounds affect 6.5 million patients. An estimated US $25 billion is spent annually on treatment of chronic wounds, and the burden is rapidly growing due to increasing health care costs, an aging population and a sharp rise in the incidence of diabetes and obesity worldwide.1 Accurate wound measurement techniques will help health care personnel to monitor wounds, which will indirectly help improve care.7,9 The clinical practice of measuring wounds has not improved even today.2,3 A common method like the ruler method has poor interrater and intrarater reliability.2,3 Measuring the greatest length by the greatest width perpendicular to the greatest length, the perpendicular method, is more valid and reliable than other ruler-based methods.2 Another common method, acetate tracing, is more accurate than the ruler method but still has its disadvantages. These common measurement techniques are time consuming and have variable inaccuracies. In this study, volumetric measurements taken with a non-contact 3-D scanner are benchmarked against the common ruler method, acetate grid tracing, and a 2-D image planimetry volumetric measurement technique. A liquid volumetric fill method is used as the control volume. Results support the hypothesis that the 3-D scanner consistently shows accurate volumetric measurements in comparison to standard volumetric measurements obtained by the waterfill technique (average difference of 11%). The 3-D scanner measurement technique was found to be more reliable and valid than the other three techniques: the ruler method (average difference of 75%), acetate grid tracing (average difference of 41%), and 2D planimetric measurements (average difference of 52%). Acetate tracing showed more accurate measurements than the ruler method (average difference of 41% for acetate tracing compared to 75% for the ruler method). Improving the

  7. Noninvasive computational imaging of cardiac electrophysiology for 3-D infarct.

    PubMed

    Wang, Linwei; Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng

    2011-04-01

    Myocardial infarction (MI) creates electrophysiologically altered substrates that are responsible for ventricular arrhythmias, such as tachycardia and fibrillation. The presence, size, location, and composition of infarct scar bear significant prognostic and therapeutic implications for individual subjects. We have developed a statistical physiological model-constrained framework that uses noninvasive body-surface-potential data and tomographic images to estimate subject-specific transmembrane-potential (TMP) dynamics inside the 3-D myocardium. In this paper, we adapt this framework for the purpose of noninvasive imaging, detection, and quantification of 3-D scar mass for postMI patients: the framework requires no prior knowledge of MI and converges to final subject-specific TMP estimates after several passes of estimation with intermediate feedback; based on the primary features of the estimated spatiotemporal TMP dynamics, we provide 3-D imaging of scar tissue and quantitative evaluation of scar location and extent. Phantom experiments were performed on a computational model of realistic heart-torso geometry, considering 87 transmural infarct scars of different sizes and locations inside the myocardium, and 12 compact infarct scars (extent between 10% and 30%) at different transmural depths. Real-data experiments were carried out on BSP and magnetic resonance imaging (MRI) data from four postMI patients, validated by gold standards and existing results. This framework shows unique advantage of noninvasive, quantitative, computational imaging of subject-specific TMP dynamics and infarct mass of the 3-D myocardium, with the potential to reflect details in the spatial structure and tissue composition/heterogeneity of 3-D infarct scar.

  8. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
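
    The refraction step itself, Snell's law applied in 3D at a planar interface, can be written in vector form as below. This is a generic construction (with an assumed sound speed for bone), not the authors' delay-precomputation code.

        import numpy as np

        def refract(d, n, c1, c2):
            """Refract unit direction d at an interface with unit normal n (pointing back
            into medium 1), for sound speeds c1 -> c2. Vector form of Snell's law;
            returns None when the ray is totally internally reflected."""
            d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
            eta = c2 / c1                       # acoustic index ratio n1/n2 = c2/c1
            cos_i = -np.dot(n, d)
            k = 1.0 - eta**2 * (1.0 - cos_i**2)
            if k < 0.0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        # 30-degree incidence from soft tissue (1540 m/s) into bone (~2800 m/s, assumed value).
        d_in = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
        print(refract(d_in, np.array([0.0, 0.0, 1.0]), 1540.0, 2800.0))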

  9. 3D Imaging of Density Gradients Using Plenoptic BOS

    NASA Astrophysics Data System (ADS)

    Klemkowsky, Jenna; Clifford, Chris; Fahringer, Timothy; Thurow, Brian

    2016-11-01

    The combination of background oriented schlieren (BOS) and a plenoptic camera, termed Plenoptic BOS, is explored through two proof-of-concept experiments. The motivation of this work is to provide a 3D technique capable of observing density disturbances. BOS uses the relationship between density and refractive index gradients to observe an apparent shift in a patterned background through image comparison. Conventional BOS systems acquire a single line-of-sight measurement, and require complex configurations to obtain 3D measurements, which are not always conducive to experimental facilities. Plenoptic BOS exploits the plenoptic camera's ability to generate multiple perspective views and refocused images from a single raw plenoptic image during post processing. Using such capabilities, with regards to BOS, provides multiple line-of-sight measurements of density disturbances, which can be collectively used to generate refocused BOS images. Such refocused images allow the position of density disturbances to be qualitatively and quantitatively determined. The image that provides the sharpest density gradient signature corresponds to a specific depth. These results offer motivation to advance Plenoptic BOS with an ultimate goal of reconstructing a 3D density field.

  10. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  11. A 5-MHz cylindrical dual-layer transducer array for 3-D transrectal ultrasound imaging.

    PubMed

    Chen, Yuling; Nguyen, Man; Yen, Jesse T

    2012-07-01

    Two-dimensional transrectal ultrasound (TRUS) is being used in guiding prostate biopsies and treatments. In many cases, the TRUS probes are moved manually or mechanically to acquire volumetric information, making the imaging slow, user dependent, and unreliable. A real-time three-dimensional (3-D) TRUS system could improve reliability and volume rates of imaging during these procedures. In this article, the authors present a 5-MHz cylindrical dual-layer transducer array capable of real-time 3-D transrectal ultrasound without any mechanically moving parts. Compared with fully sampled 2-D arrays, this design substantially reduces the channel count and fabrication complexity. This dual-layer transducer uses PZT elements for transmit and P[VDF-TrFE] copolymer elements for receive, respectively. The mechanical flexibility of both diced PZT and copolymer makes it practical for transrectal applications. Full synthetic aperture 3-D data sets were acquired by interfacing the transducer with a Verasonics Data Acquisition System. Offline 3-D beamforming was then performed to obtain volumes of two wire phantoms and a cyst phantom. Generalized coherence factor was applied to improve the contrast of images. The measured -6-dB fractional bandwidth of the transducer was 62% with a center frequency of 5.66 MHz. The measured lateral beamwidths were 1.28 mm and 0.91 mm in transverse and longitudinal directions, respectively, compared with a simulated beamwidth of 0.92 mm and 0.74 mm.

  12. An improved 3-D Look--Locker imaging method for T(1) parameter estimation.

    PubMed

    Nkongchu, Ken; Santyr, Giles

    2005-09-01

    The 3-D Look-Locker (LL) imaging method has been shown to be a highly efficient and accurate method for the volumetric mapping of the spin lattice relaxation time T(1). However, conventional 3-D LL imaging schemes are typically limited to small tip angle RF pulses (<5 degrees). A new 3-D LL imaging method that incorporates an additional and variable delay time between recovery samples is described, which permits the use of larger tip angles (>5 degrees), thereby improving the SNR and the accuracy of the method. In phantom studies, a mean T(1) measurement accuracy of less than 2% (0.2-3.1%) using a tip angle of 10 degrees was obtained for a range of T(1) from approximately 300 to 1,700 ms with a measurement time increase of only 15%. This accuracy compares favorably with the conventional 3-D LL method, which provided an accuracy between 2.2% and 7.3% using a 5 degrees flip angle.
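
    For context, a generic three-parameter Look-Locker fit with the commonly used apparent-T1 correction is sketched below; it is not the variable-delay scheme proposed in the abstract, and the synthetic numbers are purely illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def ll_model(ti, a, b, t1_star):
            """Three-parameter Look-Locker recovery curve S(TI) = A - B*exp(-TI/T1*)."""
            return a - b * np.exp(-ti / t1_star)

        def fit_t1(ti_ms, signal):
            """Fit the apparent T1* and apply the usual Look-Locker correction
            T1 ~= T1* * (B/A - 1)."""
            (a, b, t1_star), _ = curve_fit(ll_model, ti_ms, signal,
                                           p0=[signal.max(), 2.0 * signal.max(), 500.0])
            return t1_star * (b / a - 1.0)

        # Synthetic check: true T1 = 1000 ms with A = 1, B = 2.5, i.e. T1* = 1000/1.5 ms.
        ti = np.linspace(50.0, 3000.0, 20)
        sig = ll_model(ti, 1.0, 2.5, 1000.0 / 1.5)
        print(round(fit_t1(ti, sig)))        # -> 1000 (to within fit tolerance)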

  13. Improved Second-Generation 3-D Volumetric Display System. Revision 2

    DTIC Science & Technology

    1998-10-01

    The factor of 0.7 is used here to account for the 514-nm laser wavelength instead of the 555-nm peak of the photopic curve. ... lasers over a 40-minute time period; the spikes in the curves are due to a defective power meter and are not real. ... visible three-dimensional images. A primary element in the helical display system is a rotating helically curved screen, referred to as the "helix".

  14. 3D-DXA: Assessing the Femoral Shape, the Trabecular Macrostructure and the Cortex in 3D from DXA images.

    PubMed

    Humbert, Ludovic; Martelli, Yves; Fonolla, Roger; Steghofer, Martin; Di Gregorio, Silvana; Malouf, Jorge; Romera, Jordi; Barquero, Luis Miguel Del Rio

    2017-01-01

    The 3D distribution of the cortical and trabecular bone mass in the proximal femur is a critical component in determining fracture resistance that is not taken into account in clinical routine Dual-energy X-ray Absorptiometry (DXA) examination. In this paper, a statistical shape and appearance model together with a 3D-2D registration approach are used to model the femoral shape and bone density distribution in 3D from an anteroposterior DXA projection. A model-based algorithm is subsequently used to segment the cortex and build a 3D map of the cortical thickness and density. Measurements characterising the geometry and density distribution were computed for various regions of interest in both cortical and trabecular compartments. Models and measurements provided by the "3D-DXA" software algorithm were evaluated using a database of 157 study subjects, by comparing 3D-DXA analyses (using DXA scanners from three manufacturers) with measurements performed by Quantitative Computed Tomography (QCT). The mean point-to-surface distance between 3D-DXA and QCT femoral shapes was 0.93 mm. The mean absolute error between cortical thickness and density estimates measured by 3D-DXA and QCT was 0.33 mm and 72 mg/cm3. Correlation coefficients (R) between the 3D-DXA and QCT measurements were 0.86, 0.93, and 0.95 for the volumetric bone mineral density at the trabecular, cortical, and integral compartments respectively, and 0.91 for the mean cortical thickness. 3D-DXA provides a detailed analysis of the proximal femur, including a separate assessment of the cortical layer and trabecular macrostructure, which could potentially improve osteoporosis management while maintaining DXA as the standard routine modality.

  15. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays, 3D acquisition systems are used in many applications, such as the cinema industry or the automotive sector (for active safety systems). Depending on the application, systems offer different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time-of-flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies at frame rates of up to 50 frames/s over distances from 10 cm to 7.5 m.
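
    The phase-to-distance relation behind iTOF is compact enough to state directly; the 20-MHz modulation frequency below is an assumption, chosen only because it reproduces the 7.5-m unambiguous range quoted above.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def itof_distance(phase_rad, f_mod_hz):
            """Indirect time-of-flight: distance from the phase delay of a sinusoidally
            modulated source, d = c * phi / (4 * pi * f_mod)."""
            return C * phase_rad / (4.0 * np.pi * f_mod_hz)

        def unambiguous_range(f_mod_hz):
            """Maximum distance before the phase wraps (phi = 2*pi)."""
            return C / (2.0 * f_mod_hz)

        print(unambiguous_range(20e6))            # -> ~7.5 m
        print(itof_distance(np.pi / 2, 20e6))     # quarter-cycle delay -> ~1.9 m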

  16. 4D ultrafast ultrasound flow imaging: in vivo quantification of arterial volumetric flow rate in a single heartbeat

    NASA Astrophysics Data System (ADS)

    Correia, Mafalda; Provost, Jean; Tanter, Mickael; Pernot, Mathieu

    2016-12-01

    We present herein 4D ultrafast ultrasound flow imaging, a novel ultrasound-based volumetric imaging technique for the quantitative mapping of blood flow. Complete volumetric blood flow distribution imaging was achieved through 2D tilted plane-wave insonification, 2D multi-angle cross-beam beamforming, and 3D vector Doppler velocity components estimation by least-squares fitting. 4D ultrafast ultrasound flow imaging was performed in large volumetric fields of view at very high volume rate (>4000 volumes s-1) using a 1024-channel 4D ultrafast ultrasound scanner and a 2D matrix-array transducer. The precision of the technique was evaluated in vitro by using 3D velocity vector maps to estimate volumetric flow rates in a vessel phantom. Volumetric Flow rate errors of less than 5% were found when volumetric flow rates and peak velocities were respectively less than 360 ml min-1 and 100 cm s-1. The average volumetric flow rate error increased to 18.3% when volumetric flow rates and peak velocities were up to 490 ml min-1 and 1.3 m s-1, respectively. The in vivo feasibility of the technique was shown in the carotid arteries of two healthy volunteers. The 3D blood flow velocity distribution was assessed during one cardiac cycle in a full volume and it was used to quantify volumetric flow rates (375  ±  57 ml min-1 and 275  ±  43 ml min-1). Finally, the formation of 3D vortices at the carotid artery bifurcation was imaged at high volume rates.
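
    The per-voxel least-squares step, recovering a 3D velocity vector from several beam-projected Doppler measurements, can be written generically as follows; the beam directions and the test velocity are toy values, not the acquisition geometry of the paper.

        import numpy as np

        def velocity_from_doppler(directions, radial_speeds):
            """Each steered beam i observes radial_speeds[i] = directions[i] . v;
            the 3D velocity v is recovered in the least-squares sense."""
            A = np.asarray(directions, dtype=float)       # N x 3 unit beam directions
            b = np.asarray(radial_speeds, dtype=float)    # N projected speeds (m/s)
            v, *_ = np.linalg.lstsq(A, b, rcond=None)
            return v

        # Toy check: a flow of (0.2, 0.0, 0.5) m/s seen from three tilted plane waves.
        dirs = np.array([[0.0, 0.0, 1.0],
                         [np.sin(0.2), 0.0, np.cos(0.2)],
                         [0.0, np.sin(0.2), np.cos(0.2)]])
        truth = np.array([0.2, 0.0, 0.5])
        print(velocity_from_doppler(dirs, dirs @ truth))  # -> [0.2, 0.0, 0.5]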

  17. 4D ultrafast ultrasound flow imaging: in vivo quantification of arterial volumetric flow rate in a single heartbeat.

    PubMed

    Correia, Mafalda; Provost, Jean; Tanter, Mickael; Pernot, Mathieu

    2016-12-07

    We present herein 4D ultrafast ultrasound flow imaging, a novel ultrasound-based volumetric imaging technique for the quantitative mapping of blood flow. Complete volumetric blood flow distribution imaging was achieved through 2D tilted plane-wave insonification, 2D multi-angle cross-beam beamforming, and 3D vector Doppler velocity components estimation by least-squares fitting. 4D ultrafast ultrasound flow imaging was performed in large volumetric fields of view at very high volume rate (>4000 volumes s(-1)) using a 1024-channel 4D ultrafast ultrasound scanner and a 2D matrix-array transducer. The precision of the technique was evaluated in vitro by using 3D velocity vector maps to estimate volumetric flow rates in a vessel phantom. Volumetric Flow rate errors of less than 5% were found when volumetric flow rates and peak velocities were respectively less than 360 ml min(-1) and 100 cm s(-1). The average volumetric flow rate error increased to 18.3% when volumetric flow rates and peak velocities were up to 490 ml min(-1) and 1.3 m s(-1), respectively. The in vivo feasibility of the technique was shown in the carotid arteries of two healthy volunteers. The 3D blood flow velocity distribution was assessed during one cardiac cycle in a full volume and it was used to quantify volumetric flow rates (375  ±  57 ml min(-1) and 275  ±  43 ml min(-1)). Finally, the formation of 3D vortices at the carotid artery bifurcation was imaged at high volume rates.

  18. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs a chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF (beat) frequency yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
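
    In a chirp (FMCW) radar the beat/IF frequency at a pixel maps directly to range; the sketch below states the relation with illustrative numbers that are not the system's actual parameters.

        C = 299_792_458.0  # speed of light, m/s

        def range_from_if(f_if_hz, bandwidth_hz, sweep_time_s):
            """Range from the measured IF (beat) frequency: R = c * f_IF * T / (2 * B),
            with B the swept bandwidth and T the sweep duration."""
            return C * f_if_hz * sweep_time_s / (2.0 * bandwidth_hz)

        # Illustrative: a 10-GHz sweep over 1 ms places a target near 10 m when the
        # IF frequency is about 667 kHz.
        print(range_from_if(667e3, 10e9, 1e-3))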

  19. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  20. 3-D volumetric computed tomographic scoring as an objective outcome measure for chronic rhinosinusitis: Clinical correlations and comparison to Lund-Mackay scoring

    PubMed Central

    Pallanch, John; Yu, Lifeng; Delone, David; Robb, Rich; Holmes, David R.; Camp, Jon; Edwards, Phil; McCollough, Cynthia H.; Ponikau, Jens; Dearking, Amy; Lane, John; Primak, Andrew; Shinkle, Aaron; Hagan, John; Frigas, Evangelo; Ocel, Joseph J.; Tombers, Nicole; Siwani, Rizwan; Orme, Nicholas; Reed, Kurtis; Jerath, Nivedita; Dhillon, Robinder; Kita, Hirohito

    2014-01-01

    Background We aimed to test the hypothesis that 3-D volume-based scoring of computed tomographic (CT) images of the paranasal sinuses was superior to Lund-Mackay CT scoring of disease severity in chronic rhinosinusitis (CRS). We determined correlation between changes in CT scores (using each scoring system) with changes in other measures of disease severity (symptoms, endoscopic scoring, and quality of life) in patients with CRS treated with triamcinolone. Methods The study group comprised 48 adult subjects with CRS. Baseline symptoms and quality of life were assessed. Endoscopy and CT scans were performed. Patients received a single systemic dose of intramuscular triamcinolone and were reevaluated 1 month later. Strengths of the correlations between changes in CT scores and changes in CRS signs and symptoms and quality of life were determined. Results We observed some variability in degree of improvement for the different symptom, endoscopic, and quality-of-life parameters after treatment. Improvement of parameters was significantly correlated with improvement in CT disease score using both CT scoring methods. However, volumetric CT scoring had greater correlation with these parameters than Lund-Mackay scoring. Conclusion Volumetric scoring exhibited higher degree of correlation than Lund-Mackay scoring when comparing improvement in CT score with improvement in score for symptoms, endoscopic exam, and quality of life in this group of patients who received beneficial medical treatment for CRS. PMID:24106202

  1. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  2. 3D imaging of the mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Faivre, Michael; Moreels, Guy; Clairemidi, Jacques; Mougin-Sisini, Davy; Meriwether, John W.; Lehmacher, Gerald A.; Vidal, Erick; Veliz, Oskar

    A new and original stereo-imaging method is introduced to measure the altitude of the OH airglow layer and provide a 3D map of the altitude of the layer centroid. Near-IR photographs of the layer are taken at two sites 645 km apart. Each photograph is processed in order to invert the perspective effect and provide a satellite-type view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient. This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12° 09' 08.2" S, 75° 33' 49.3" W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16° 33' 17.6" S, 71° 39' 59.4" W, altitude 2330 m) close to Arequipa. 3D maps of the layer surface are retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 87.1 km on July 26 and 89.5 km on July 28. Comparable wavy relief features appear in the 3D and intensity maps.
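
    A bare-bones version of the point-pairing step, normalized cross-correlation between patches of the two site views, is sketched below; the brute-force search and patch handling are simplifications of the actual processing.

        import numpy as np

        def ncc(patch_a, patch_b):
            """Normalised cross-correlation coefficient between two equal-size patches."""
            a = patch_a - patch_a.mean()
            b = patch_b - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def best_match(template, search_img):
            """Slide the template over the search image and return the (row, col) and
            score of the highest NCC - the matched-point criterion in its simplest form."""
            th, tw = template.shape
            H, W = search_img.shape
            best, best_rc = -2.0, (0, 0)
            for r in range(H - th + 1):
                for c in range(W - tw + 1):
                    score = ncc(template, search_img[r:r + th, c:c + tw])
                    if score > best:
                        best, best_rc = score, (r, c)
            return best_rc, best

        # Toy usage: recover the known location of a 5x5 patch inside a synthetic frame.
        frame = np.random.rand(32, 32)
        print(best_match(frame[10:15, 12:17], frame))     # -> ((10, 12), ~1.0)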

  3. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prosthesis. Therefore we developed a registration method for fitting 3D CAD-models of knee joint prostheses into an 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly the surface data of the prostheses models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly an initial preconfiguration of the implants by the user is still necessary for the following step: The user has to perform a rough preconfiguration of both remaining prostheses models, so that the fine matching process gets a reasonable starting point. After that an automated gradient-based fine matching process determines the best absolute position and orientation: This iterative process changes all 6 parameters (3 rotational- and 3 translational parameters) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated by the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).

  4. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As the clinical application grows, there is a rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we proposed a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degree of freedom, and cost. We designed a sliding track with a linear position sensor attached, and it transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.

  5. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision is a critical issue for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, besides others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verifications are presented and demonstrated by practical examples

  6. 3-D airborne ultrasound synthetic aperture imaging based on capacitive micromachined ultrasonic transducers.

    PubMed

    Park, Kwan Kyu; Khuri-Yakub, Butrus T

    2013-09-01

    In this paper, we present an airborne 3-D volumetric imaging system based on capacitive micromachined ultrasonic transducers (CMUTs). For this purpose we fabricated 89-kHz CMUTs where each CMUT is made of a circular single-crystal silicon plate with a radius of 1mm and a thickness of 20 μm, which is actuated by electrostatic force through a 20-μm vacuum gap. The measured transmit sensitivity at 300-V DC bias is 14.6 Pa/V and 24.2 Pa/V, when excited by a 30-cycle burst and a continuous wave, respectively. The measured receive sensitivity at 300-V DC bias is 16.6 mV/Pa (-35.6 dB re 1 V/Pa) for a 30-cycle burst. A 26×26 2-D array was implemented by mechanical scanning a co-located transmitter and receiver using the classic synthetic aperture (CSA) method. The measurement of a 1.6λ-size target at a distance of 500 mm presented a lateral resolution of 3.17° and also showed good agreement with the theoretical point spread function. The 3-D imaging of two plates at a distance of 350 mm and 400 mm was constructed to exhibit the capability of the imaging system. This study experimentally demonstrates that a 2-D CMUT array can be used for practical 3-D imaging applications in air, such as a human-machine interface.

  7. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
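
    For a damped (regularized) least-squares inverse, the resolution and posterior covariance matrices discussed above take the generic form sketched below; this is a textbook construction with an assumed damping term, not the authors' exact 2D/3D formulation.

        import numpy as np

        def appraise(J, data_var=1.0, damping=1e-2):
            """Linearised appraisal for m = G d with G = (J^T J + damping*I)^-1 J^T.
            Returns the model resolution matrix R = G J (how each true parameter
            spreads into the image) and the posterior model covariance
            C_m = data_var * G G^T for uncorrelated data noise."""
            JtJ = J.T @ J
            G = np.linalg.solve(JtJ + damping * np.eye(JtJ.shape[0]), J.T)
            return G @ J, data_var * (G @ G.T)

        # Toy sensitivity matrix: the second parameter is poorly sensed, so its diagonal
        # resolution is low and its posterior standard deviation is large.
        J = np.array([[1.0, 0.05],
                      [0.9, 0.10]])
        R, Cm = appraise(J, data_var=0.01, damping=1e-3)
        print(np.diag(R), np.sqrt(np.diag(Cm)))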

  8. Validation of 3D ultrasound: CT registration of prostate images

    NASA Astrophysics Data System (ADS)

    Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

    2003-05-01

    All over the world, 20% of men are expected to develop prostate cancer at some point in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In several cases where a CT device is available, a combination of the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused in a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e., the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.

  9. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.

  10. Feasibility of Using Volumetric Contrast-Enhanced Ultrasound with a 3-D Transducer to Evaluate Therapeutic Response after Targeted Therapy in Rabbit Hepatic VX2 Carcinoma.

    PubMed

    Kim, Jeehyun; Kim, Jung Hoon; Yoon, Soon Ho; Choi, Won Seok; Kim, Young Jae; Han, Joon Koo; Choi, Byung-Ihn

    2015-12-01

    The aim of this study was to assess the feasibility of using dynamic contrast-enhanced ultrasound (DCE-US) with a 3-D transducer to evaluate therapeutic responses to targeted therapy. Rabbits with hepatic VX2 carcinomas, divided into a treatment group (n = 22, 30 mg/kg/d sorafenib) and a control group (n = 13), were evaluated with DCE-US using 2-D and 3-D transducers and computed tomography (CT) perfusion imaging at baseline and 1 d after the first treatment. Perfusion parameters were collected, and correlations between parameters were analyzed. In the treatment group, both volumetric and 2-D DCE-US perfusion parameters, including peak intensity (33.2 ± 19.9 vs. 16.6 ± 10.7, 63.7 ± 20.0 vs. 30.1 ± 19.8), slope (15.3 ± 12.4 vs. 5.7 ± 4.5, 37.3 ± 20.4 vs. 15.7 ± 13.0) and area under the curve (AUC; 1004.1 ± 560.3 vs. 611.4 ± 421.1, 1332.2 ± 708.3 vs. 670.4 ± 388.3), had significantly decreased 1 d after the first treatment (p = 0.00). In the control group, 2-D DCE-US revealed that peak intensity, time to peak and slope had significantly changed (p < 0.05); however, volumetric DCE-US revealed that peak intensity, time-intensity AUC, AUC during wash-in and AUC during wash-out had significantly changed (p = 0.00). CT perfusion imaging parameters, including blood flow, blood volume and permeability of the capillary vessel surface, had significantly decreased in the treatment group (p = 0.00); however, in the control group, peak intensity and blood volume had significantly increased (p = 0.00). It is feasible to use DCE-US with a 3-D transducer to predict early therapeutic response after targeted therapy because perfusion parameters, including peak intensity, slope and AUC, significantly decreased, which is similar to the trend observed for 2-D DCE-US and CT perfusion imaging parameters.

  11. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.

  12. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  13. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning: the laser point cloud data serve as the basis, a digital orthophoto map is used as an auxiliary source, and 3ds Max software is used as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene has good fidelity and that the accuracy of the scene meets the needs of 3D scene construction.

  14. Volumetric LiDAR scanning of a wind turbine wake and comparison with a 3D analytical wake model

    NASA Astrophysics Data System (ADS)

    Carbajo Fuertes, Fernando; Porté-Agel, Fernando

    2016-04-01

    A correct estimation of the future power production is of capital importance whenever the feasibility of a future wind farm is being studied. This power estimation relies mostly on three aspects: (1) a reliable measurement of the wind resource in the area, (2) a well-established power curve of the future wind turbines and, (3) an accurate characterization of the wake effects; the latter being arguably the most challenging one due to the complexity of the phenomenon and the lack of extensive full-scale data sets that could be used to validate analytical or numerical models. The current project addresses the problem of obtaining a volumetric description of a full-scale wake of a 2MW wind turbine in terms of velocity deficit and turbulence intensity using three scanning wind LiDARs and two sonic anemometers. The characterization of the upstream flow conditions is done by one scanning LiDAR and two sonic anemometers, which have been used to calculate incoming vertical profiles of horizontal wind speed, wind direction and an approximation to turbulence intensity, as well as the thermal stability of the atmospheric boundary layer. The characterization of the wake is done by two scanning LiDARs working simultaneously and pointing downstream from the base of the wind turbine. The direct LiDAR measurements in terms of radial wind speed can be corrected using the upstream conditions in order to provide good estimations of the horizontal wind speed at any point downstream of the wind turbine. All this data combined allow for the volumetric reconstruction of the wake in terms of velocity deficit as well as turbulence intensity. Finally, the predictions of a 3D analytical model [1] are compared to the 3D LiDAR measurements of the wind turbine. The model is derived by applying the laws of conservation of mass and momentum and assuming a Gaussian distribution for the velocity deficit in the wake. This model has already been validated using high resolution wind-tunnel measurements
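
    To make the comparison concrete, the sketch below evaluates a generic Gaussian-profile velocity-deficit wake of the kind referred to above; the rotor diameter, thrust coefficient, wake-growth rate and initial width are illustrative assumptions, not the parameters or the exact formulation of the cited model [1].

      import numpy as np

      def gaussian_wake_deficit(x, r, d0=80.0, ct=0.8, k_star=0.03, eps=0.25):
          """Normalized velocity deficit DeltaU/U_inf at downstream distance x [m] and radial
          distance r [m] from the wake centre, assuming a Gaussian-shaped deficit profile.
          d0: rotor diameter, ct: thrust coefficient, k_star: wake growth rate,
          eps: initial wake width in rotor diameters (all illustrative values)."""
          sigma = (k_star * x / d0 + eps) * d0          # wake width grows linearly downstream
          peak = 1.0 - np.sqrt(np.clip(1.0 - ct / (8.0 * (sigma / d0) ** 2), 0.0, 1.0))
          return peak * np.exp(-r ** 2 / (2.0 * sigma ** 2))

      # Centreline deficit at 3, 5 and 8 rotor diameters downstream of an 80 m rotor.
      for x_d in (3, 5, 8):
          print(x_d, round(float(gaussian_wake_deficit(x_d * 80.0, 0.0)), 3))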

  15. Joint calibration of 3D resist image and CDSEM

    NASA Astrophysics Data System (ADS)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model evaluates the resist image at a specific depth within the photoresist and then extracts the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from a planar (2D) to a quasi-3D approach and comparing the CD from this new model with the SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is found to be better than that of the 2D model.

  16. Digital acquisition system for high-speed 3-D imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji

    1997-11-01

    High-speed digital three-dimensional (3-D) imagery is possible using multiple independent charge-coupled device (CCD) cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras, providing versatility in configuration and image acquisition. By aligning the cameras in nearly coincident lines-of-sight, a sixteen frame two-dimensional (2-D) sequence can be captured. The delays can be individually adjusted to yield a greater number of acquired frames during the more rapid segments of the event. Additionally, individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. An alternative alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images may be captured. In the first alignment scheme the camera lines-of-sight cannot be made precisely coincident. Thus representation of the data as a monocular sequence introduces the issue of independent camera coordinate registration with the real scene. This issue arises more significantly using the stereo pair method to reconstruct quantitative 3-D spatial information of the event as a function of time. The principal development here will be the derivation and evaluation of a solution transform and its inverse for the digital data which will yield a 3-D spatial mapping as a function of time.

  17. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, an area previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  18. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  19. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

    3-D optical fluorescence microscopy has become an efficient tool for the volumetric investigation of living biological samples. Using an optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system optical transfer function and non-optimal experimental conditions, acquired raw data usually suffer from some distortions. In order to carry out biological analysis, raw data have to be restored by deconvolution. System identification via the point-spread function provides knowledge of the actual system and experimental parameters, which is necessary to restore the raw data. It is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method concerning cytogenetics is presented.

  20. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure, resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with a slope not significantly different from 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other did not show a significant difference (p = 0.44) between 3D US and MRI in a paired t-test.
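
    The validation statistics reported here (correlation of the volume change against removed CSF, a regression slope, and a paired t-test between 3D US and MRI volumes) can be reproduced with standard tools; the sketch below uses made-up example values purely to show the computation, not the study's data.

      import numpy as np
      from scipy import stats

      # Made-up paired measurements in cm^3, for illustration only (not the study's data).
      us_volumes  = np.array([18.2, 25.4, 40.1, 33.0, 22.7])   # 3D ultrasound
      mri_volumes = np.array([18.9, 24.8, 41.0, 32.1, 23.5])   # MRI of the same patients
      removed_csf = np.array([5.0, 8.0, 12.0, 9.5, 6.0])       # reported CSF removed
      volume_drop = np.array([4.7, 8.3, 11.6, 9.9, 5.8])       # pre minus post 3D US volume

      r, _ = stats.pearsonr(removed_csf, volume_drop)          # correlation of drop vs removed CSF
      slope, intercept = np.polyfit(removed_csf, volume_drop, 1)
      t, p = stats.ttest_rel(us_volumes, mri_volumes)          # paired t-test, US vs MRI
      print(f"R = {r:.2f}, slope = {slope:.2f}, paired t-test p = {p:.2f}")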

  1. Automated spatial alignment of 3D torso images.

    PubMed

    Bose, Arijit; Shah, Shishir K; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2011-01-01

    This paper describes an algorithm for automated spatial alignment of three-dimensional (3D) surface images in order to achieve a pre-defined orientation. Surface images of the torso are acquired from breast cancer patients undergoing reconstructive surgery to facilitate objective evaluation of breast morphology pre-operatively (for treatment planning) and/or post-operatively (for outcome assessment). Based on the viewing angle of the multiple cameras used for stereophotography, the orientation of the acquired torso in the images may vary from the normal upright position. Consequently, when translating this data into a standard 3D framework for visualization and analysis, the co-ordinate geometry differs from the upright position making robust and standardized comparison of images impractical. Moreover, manual manipulation and navigation of images to the desired upright position is subject to user bias. Automating the process of alignment and orientation removes operator bias and permits robust and repeatable adjustment of surface images to a pre-defined or desired spatial geometry.

  2. Integral imaging based 3D display of holographic data.

    PubMed

    Yöntem, Ali Özgür; Onural, Levent

    2012-10-22

    We propose a method and present applications of this method that converts a diffraction pattern into an elemental image set in order to display them on an integral imaging based display setup. We generate elemental images based on diffraction calculations as an alternative to commonly used ray tracing methods. Ray tracing methods do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object/scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We showed three examples, one of which is the digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup where we used a digital lenslet array. We also obtained numerical reconstructions, again by using the diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement.

  3. A hybrid framework for 3D medical image segmentation.

    PubMed

    Chen, Ting; Metaxas, Dimitris

    2005-12-01

    In this paper we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, first we construct a new Gibbs model whose energy function is defined on a high order clique system. The new model includes both region and boundary information during segmentation. Next we improve the original marching cubes method to construct 3D meshes from Gibbs models' output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.
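
    The marching-cubes step of such a pipeline, turning a volumetric labelling into an initial surface mesh for the deformable model, can be sketched with scikit-image (recent versions provide measure.marching_cubes); the synthetic sphere volume and the iso-level are illustrative assumptions, and the Gibbs-model and deformable-model stages are not reproduced here.

      import numpy as np
      from skimage import measure

      # Synthetic binary volume: a sphere standing in for the output of the region model.
      zz, yy, xx = np.mgrid[:64, :64, :64]
      volume = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 20 ** 2).astype(float)

      # Marching cubes converts the voxel labelling into a triangle mesh that can
      # serve as the initial geometry of a deformable surface model.
      verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
      print(verts.shape, faces.shape)   # (N, 3) vertex coordinates and (M, 3) triangle indices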

  4. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface when it carries a moving vehicle. After the calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel⁻¹ at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum at 5000 lines s⁻¹, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents the field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

  5. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910® scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the 2 breast surfaces. Three observers analyzed the precision of the evaluation protocol using 2 dummy models (n = 60) and 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900), and compared it to established 2-D measurements on 23 breast reconstruction patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88) and may play a part in an objective surgical outcome analysis after incorporation into clinical practice.
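
    The core of such a symmetry measure, mirroring one breast surface across the sagittal plane and averaging its contour difference to the other, reduces to a nearest-neighbour point-to-surface distance; the sketch below assumes the two surfaces are available as point clouds in a common torso coordinate frame, which is an assumption for illustration rather than part of the published protocol.

      import numpy as np
      from scipy.spatial import cKDTree

      def mean_contour_difference(right_pts, left_pts, sagittal_x=0.0):
          """Mirror the left-breast point cloud across the sagittal plane x = sagittal_x and
          return the mean nearest-neighbour distance to the right-breast cloud (same units)."""
          mirrored = left_pts.copy()
          mirrored[:, 0] = 2.0 * sagittal_x - mirrored[:, 0]   # reflect the x coordinate
          distances, _ = cKDTree(right_pts).query(mirrored, k=1)
          return distances.mean()

      # Toy point clouds in mm; in practice these would come from the 3-D surface scanner.
      rng = np.random.default_rng(0)
      right = rng.normal([60.0, 0.0, 0.0], [20.0, 15.0, 10.0], size=(500, 3))
      left = rng.normal([-60.0, 0.0, 0.0], [20.0, 15.0, 10.0], size=(500, 3))
      print(round(float(mean_contour_difference(right, left)), 2))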

  6. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  7. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  8. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  9. Volumetric two-photon imaging of neurons using stereoscopy (vTwINS).

    PubMed

    Song, Alexander; Charles, Adam S; Koay, Sue Ann; Gauthier, Jeff L; Thiberge, Stephan Y; Pillow, Jonathan W; Tank, David W

    2017-04-01

    Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a modified orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrated vTwINS by imaging neural population activity in the mouse primary visual cortex and hippocampus. Our results demonstrated that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame rate.

  10. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
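
    One way to mimic the multi-scale minima tracking described above is to smooth the imprint height map at several scales and keep only the local minima that persist across all of them; the sketch below is a generic illustration on an artificial height map, not the authors' watershed implementation, and the scales and depth threshold are assumed values.

      import numpy as np
      from scipy import ndimage

      def persistent_minima(height, sigmas=(1, 2, 4, 8), depth=-0.1):
          """Boolean map of local minima of `height` that persist at every smoothing scale."""
          persistent = np.ones(height.shape, dtype=bool)
          for sigma in sigmas:
              smoothed = ndimage.gaussian_filter(height, sigma)
              window = 2 * int(sigma) + 1
              local_min = smoothed == ndimage.minimum_filter(smoothed, size=window)
              persistent &= local_min & (smoothed < depth)   # ignore the flat background
          return persistent

      # Artificial imprint-like height map with two pits (e.g. two occlusal fossae).
      y, x = np.mgrid[:128, :128]
      height = (-np.exp(-((x - 40) ** 2 + (y - 64) ** 2) / 200.0)
                - np.exp(-((x - 90) ** 2 + (y - 64) ** 2) / 200.0))
      print(np.argwhere(persistent_minima(height)))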

  11. 3D imaging of biological specimen using MS.

    PubMed

    Fletcher, John S

    2015-01-01

    Imaging MS can provide unique information about the distribution of native and non-native compounds in biological specimens. MALDI MS and secondary ion MS are the two most commonly applied imaging MS techniques and can provide complementary information about a sample. MALDI offers access to high mass species such as proteins while secondary ion MS can operate at higher spatial resolution and provide information about lower mass species including elemental signals. Imaging MS is not limited to two dimensions and different approaches have been developed that allow 3D molecular images to be generated of chemicals in whole organs down to single cells. Resolution in the z-dimension is often higher than in x and y, so such analysis offers the potential for probing the distribution of drug molecules and studying drug action by MS with much higher precision, possibly even at the organelle level.

  12. 3D Gabor wavelet based vessel filtering of photoacoustic images.

    PubMed

    Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi

    2016-08-01

    Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of the Gabor wavelet to enhance the effect of vasculature while eliminating the noise due to size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is carried out for extracting vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image, and then tubular structures are classified by eigenvalue decomposition of the local Hessian matrix at each voxel in the image. The algorithm is tested on data from non-invasive experiments and shows appreciable results in enhancing vasculature in photoacoustic images.
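
    A Hessian-eigenvalue vesselness filter of the kind combined with the wavelet enhancement above is available off the shelf; the sketch below applies scikit-image's Frangi filter (a related, but not identical, Hessian-based method that supports 3D input in recent versions) to a synthetic volume containing a single bright tube, with illustrative scale settings.

      import numpy as np
      from skimage.filters import frangi

      # Synthetic photoacoustic-like volume: a bright vessel along z in a noisy background.
      z, y, x = np.mgrid[:64, :64, :64]
      volume = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2.0 * 2.0 ** 2))
      volume += 0.05 * np.random.default_rng(0).normal(size=volume.shape)

      # Multi-scale Hessian analysis: the eigenvalues of the local Hessian classify tubular
      # (vessel-like) voxels; `sigmas` sets the range of vessel radii being probed.
      vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)
      print(vesselness.shape, round(float(vesselness.max()), 3))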

  13. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    Performance of denoising based on discrete cosine transform applied to multichannel remote sensing images corrupted by additive white Gaussian noise is analyzed. Images obtained by satellite Earth Observing-1 (EO-1) mission using hyperspectral imager instrument (Hyperion) that have high input SNR are taken as test images. Denoising performance is characterized by improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities to be less than a certain threshold) are used to predict denoising efficiency using curves fitted into scatterplots. It is shown that the obtained curves (approximations) provide prediction of denoising efficiency with high accuracy. Analysis is carried out for different numbers of channels processed jointly. Universality of prediction for different number of channels is proven.
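
    The hard-thresholding 3D DCT denoising whose efficiency is being predicted can be sketched as follows; the block size, the threshold rule (a multiple of the noise standard deviation) and the synthetic data are illustrative assumptions rather than the exact settings analyzed in the paper.

      import numpy as np
      from scipy.fft import dctn, idctn

      def dct3_hard_threshold(block, sigma, beta=2.7):
          """Denoise one 3D block (spatial x spatial x spectral) by zeroing orthonormal DCT
          coefficients whose magnitude is below beta * sigma (hard thresholding)."""
          coeffs = dctn(block, norm='ortho')
          coeffs[np.abs(coeffs) < beta * sigma] = 0.0
          return idctn(coeffs, norm='ortho')

      def psnr(reference, estimate):
          mse = np.mean((reference - estimate) ** 2)
          return 10.0 * np.log10(np.ptp(reference) ** 2 / mse)

      # Synthetic multichannel patch: a smooth signal plus additive white Gaussian noise.
      rng = np.random.default_rng(1)
      z, y, x = np.mgrid[:8, :8, :8]
      clean = np.sin(x / 2.0) + 0.5 * np.cos(y / 3.0) + 0.1 * z
      sigma = 0.1
      noisy = clean + rng.normal(scale=sigma, size=clean.shape)
      denoised = dct3_hard_threshold(noisy, sigma)
      print(f"PSNR noisy: {psnr(clean, noisy):.1f} dB, denoised: {psnr(clean, denoised):.1f} dB")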

  14. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  15. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.

  16. Real Time Gabor-Domain Optical Coherence Microscopy for 3D Imaging.

    PubMed

    Rolland, Jannick P; Canavesi, Cristina; Tankam, Patrice; Cogliati, Andrea; Lanis, Mara; Santhanam, Anand P

    2016-01-01

    Fast, robust, nondestructive 3D imaging is needed for the characterization of microscopic tissue structures across various clinical applications. A custom microelectromechanical system (MEMS)-based 2D scanner was developed to achieve, together with a multi-level GPU architecture, 55 kHz fast-axis A-scan acquisition in a Gabor-domain optical coherence microscopy (GD-OCM) custom instrument. GD-OCM yields high-definition micrometer-class volumetric images. A dynamic depth-of-focus capability, based on a bio-inspired liquid-lens microscope design reminiscent of whales' eyes, was developed to maintain high definition throughout a large 1 mm³ imaging volume. Developing this technology is key to enabling integration within the workflow of clinical environments. Imaging at an invariant resolution of 2 μm has been achieved throughout a volume of 1 × 1 × 0.6 mm³, acquired in less than 2 minutes. Volumetric scans of human skin in vivo and an excised human cornea are presented.

  17. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements <2 mm) and precision (mean point to plane error <2 mm) at an average resolution of at least 390 points per cm². Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p < 0.001). The choice of object color can influence measurement range and precision. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.

  18. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  19. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
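
    The two agreement figures reported, the Dice score and the mean absolute surface distance, are straightforward to compute from a pair of binary masks; the sketch below is a generic implementation applied to a synthetic example, not the study's evaluation code.

      import numpy as np
      from scipy import ndimage

      def dice_score(a, b):
          """Dice overlap between two binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
          """Symmetric mean absolute distance between the surfaces of two binary masks."""
          def surface(mask):
              return mask & ~ndimage.binary_erosion(mask)
          dist_to_a = ndimage.distance_transform_edt(~surface(a), sampling=spacing)
          dist_to_b = ndimage.distance_transform_edt(~surface(b), sampling=spacing)
          return 0.5 * (dist_to_b[surface(a)].mean() + dist_to_a[surface(b)].mean())

      # Synthetic example: two slightly offset spheres standing in for manual and automatic masks.
      z, y, x = np.mgrid[:64, :64, :64]
      manual = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2
      auto = (x - 33) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2
      print(round(float(dice_score(manual, auto)), 3),
            round(float(mean_surface_distance(manual, auto)), 3))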

  20. Semiautomatic segmentation of liver metastases on volumetric CT images

    SciTech Connect

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng

    2015-11-15

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate of the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm received a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation
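
    The marker-controlled watershed at the core of this method can be illustrated with scikit-image; the synthetic image, the way the internal and external markers are placed, and the gradient image used here are simplifications shown only to make the idea concrete, not the authors' dual-threshold marker strategy.

      import numpy as np
      from skimage.filters import sobel
      from skimage.segmentation import watershed

      # Synthetic CT-like slice: a bright round "lesion" on a darker, noisy background.
      y, x = np.mgrid[:128, :128]
      image = 80.0 + 60.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2.0 * 15.0 ** 2))
      image += np.random.default_rng(0).normal(scale=3.0, size=image.shape)

      # Internal marker: a small seed ROI placed inside the lesion (label 2).
      # External marker: everything far from the seed (label 1), a crude stand-in for
      # the automatically derived external marker described in the abstract.
      markers = np.zeros(image.shape, dtype=int)
      markers[(x - 64) ** 2 + (y - 64) ** 2 < 5 ** 2] = 2
      markers[(x - 64) ** 2 + (y - 64) ** 2 > 40 ** 2] = 1

      # Flooding the gradient magnitude from both markers stops at the lesion boundary,
      # where the gradient is highest.
      labels = watershed(sobel(image), markers)
      print(int((labels == 2).sum()), "lesion pixels")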

  1. 3D Imaging of the OH mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Kouahla, M. N.; Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Vidal, E.; Veliz, O.

    2010-01-01

    A new and original stereo imaging method is introduced to measure the altitude of the OH nightglow layer and provide a 3D perspective map of the altitude of the layer centroid. Near-IR photographs of the OH layer are taken at two sites separated by a 645 km distance. Each photograph is processed in order to provide a satellite view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient (NCC). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12°09′08.2″ S, 75°33′49.3″ W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16°33′17.6″ S, 71°39′59.4″ W, altitude 2272 m) close to Arequipa. 3D maps of the layer surface were retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 86.3 km on July 26. Comparable relief wavy features appear in the 3D and intensity maps. It is shown that the vertical amplitude of the wave system varies as exp(Δz/2H) within the altitude range Δz = 83.5-88.0 km, H being the scale height. The oscillatory kinetic energy at the altitude of the OH layer is between 3 × 10⁻⁴ and 5.4 × 10⁻⁴ J/m³, which is 2-3 times smaller than the values derived from partial radio wave at 52°N latitude.
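
    The matched-point search relies on a normalized cross-correlation coefficient between small patches of the two satellite-projected views; a minimal version of that coefficient, applied to made-up patches, might look like the following.

      import numpy as np

      def ncc(patch_a, patch_b):
          """Normalized cross-correlation coefficient of two equal-size patches.
          Returns a value in [-1, 1]; 1 means the patches match up to gain and offset."""
          a = patch_a - patch_a.mean()
          b = patch_b - patch_b.mean()
          denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      # Toy patches: the second is a scaled, offset copy of the first plus a little noise.
      rng = np.random.default_rng(2)
      a = rng.normal(size=(15, 15))
      b = 1.8 * a + 5.0 + 0.1 * rng.normal(size=a.shape)
      print(round(ncc(a, b), 3))                         # close to 1
      print(round(ncc(a, rng.normal(size=a.shape)), 3))  # close to 0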

  2. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable seismic imaging code.

  3. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
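
    The mean-subtraction method reduces to a few lines once the 3D wavelet decomposition has produced a spatially-low-pass subband; the sketch below operates on a stand-in subband array (one spatial plane per spectral index) and is not tied to the paper's wavelet decomposition or encoder.

      import numpy as np

      def subtract_plane_means(lowpass_subband):
          """Subtract the mean of every spatial plane of a spatially-low-pass subband.
          `lowpass_subband` has shape (bands, rows, cols); the per-plane means are returned
          separately so a decoder could add them back after decompression."""
          plane_means = lowpass_subband.mean(axis=(1, 2), keepdims=True)
          return lowpass_subband - plane_means, plane_means.squeeze()

      # Stand-in for a spatially-low-pass subband of a hyperspectral cube: each spectral
      # plane carries a large, band-dependent offset (the source of spectral ringing).
      rng = np.random.default_rng(3)
      subband = rng.normal(size=(16, 32, 32)) + np.linspace(100.0, 400.0, 16)[:, None, None]
      zero_mean, means = subtract_plane_means(subband)
      print(means.round(1))
      print(float(np.abs(zero_mean.mean(axis=(1, 2))).max()))   # ~0 after subtraction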

  4. 3D geometry-based quantification of colocalizations in multichannel 3D microscopy images of human soft tissue tumors.

    PubMed

    Wörz, Stefan; Sander, Petra; Pfannmöller, Martin; Rieker, Ralf J; Joos, Stefan; Mechtersheimer, Gunhild; Boukamp, Petra; Lichter, Peter; Rohr, Karl

    2010-08-01

    We introduce a new model-based approach for automatic quantification of colocalizations in multichannel 3D microscopy images. The approach uses different 3D parametric intensity models in conjunction with a model fitting scheme to localize and quantify subcellular structures with high accuracy. The central idea is to determine colocalizations between different channels based on the estimated geometry of the subcellular structures as well as to differentiate between different types of colocalizations. A statistical analysis was performed to assess the significance of the determined colocalizations. This approach was used to successfully analyze about 500 three-channel 3D microscopy images of human soft tissue tumors and controls.

  5. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three dimensional nature of these infiltrations given a stack of two dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue level intermixing for both wildtype and Rb⁻ specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb⁻ specimens that are not obvious prior to registration.
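
    A mutual-information-driven slice-to-slice registration of the kind the pipeline is built on (the authors used ITK) can be sketched with the SimpleITK wrapper; the rigid transform, optimizer settings and file names below are illustrative assumptions, not the enhanced semi-automated procedure developed in the paper.

      import SimpleITK as sitk

      def register_mutual_information(fixed, moving):
          """Rigid 2D registration of `moving` onto `fixed` using Mattes mutual information."""
          reg = sitk.ImageRegistrationMethod()
          reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
          reg.SetMetricSamplingStrategy(reg.RANDOM)
          reg.SetMetricSamplingPercentage(0.1)
          reg.SetInterpolator(sitk.sitkLinear)
          reg.SetOptimizerAsRegularStepGradientDescent(
              learningRate=1.0, minStep=1e-4, numberOfIterations=300)
          reg.SetInitialTransform(sitk.CenteredTransformInitializer(
              fixed, moving, sitk.Euler2DTransform(),
              sitk.CenteredTransformInitializerFilter.GEOMETRY))
          return reg.Execute(fixed, moving)

      # Hypothetical usage on two adjacent section images (file names are placeholders).
      fixed = sitk.ReadImage("section_0100.png", sitk.sitkFloat32)
      moving = sitk.ReadImage("section_0101.png", sitk.sitkFloat32)
      transform = register_mutual_information(fixed, moving)
      aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
      sitk.WriteImage(sitk.Cast(aligned, sitk.sitkUInt8), "section_0101_aligned.png")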

  6. Anatomical delineation of congenital heart disease using 3D magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Adams Bornemeier, Renee; Fellows, Kenneth E.; Fogel, Mark A.; Weinberg, Paul M.

    1994-05-01

    Anatomic delineation of the heart and great vessels is a necessity when managing children with congenital heart disease. Spatial orientation of the vessels and chambers in the heart and the heart itself may be quite abnormal. Though magnetic resonance imaging provides a noninvasive means for determining the anatomy, the intricate interrelationships between many structures are difficult to conceptualize from a 2-D format. Taking the 2-D images and using a volumetric analysis package allows for a 3-D replica of the heart to be created. This model can then be used to view the anatomy and spatial arrangement of the cardiac structures. This information may be utilized by the physicians to assist in the clinical management of these children.

  7. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task with various applications in computer vision; for example, stone-breaking machines can be controlled automatically, and they perform better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection for almost round stones with high or low texture. Although our experiments are focused on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, switched on one at a time, to take four images. Then we compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.
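
    The edge-extraction idea, differencing images taken under the four light positions and binarizing the result, is easy to illustrate; the synthetic "stone" images and the threshold below are stand-ins for demonstration, not the paper's data or tuned parameters.

      import numpy as np

      def changed_pixels(images, threshold=30.0):
          """images: grayscale views of the same scene under different light positions.
          Pairwise absolute differences mark pixels whose brightness depends on the light
          position (the object and its shadows); binarizing and OR-ing them gives the mask
          from which object edges are subsequently extracted."""
          mask = np.zeros(images[0].shape, dtype=bool)
          for i in range(len(images)):
              for j in range(i + 1, len(images)):
                  diff = np.abs(images[i].astype(float) - images[j].astype(float))
                  mask |= diff > threshold
          return mask

      # Toy scene: a round "stone" whose shading flips when the light moves from left to right.
      y, x = np.mgrid[:100, :100]
      stone = ((x - 50) ** 2 + (y - 50) ** 2 < 30 ** 2).astype(float)
      lit_left = 100.0 + stone * np.where(x < 50, 80.0, 30.0)
      lit_right = 100.0 + stone * np.where(x < 50, 30.0, 80.0)
      mask = changed_pixels([lit_left, lit_right], threshold=40.0)
      print(int(mask.sum()), "changed pixels")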

  8. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.

  9. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Barnowski, Ross; Haefner, Andrew; Mihailescu, Lucian; Vetter, Kai

    2015-11-01

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  10. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions, and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and a Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains in a 5 dB range for a wide selection of steering angles.
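
    The element placement itself is compact to write down: elements follow Fermat's spiral with the golden angle, and the radial coordinate is remapped so that the element density follows a tapering window; the sketch below is a generic construction with illustrative parameters (aperture, element count, Blackman taper), not the exact design procedure of the paper.

      import numpy as np

      def density_tapered_spiral(n_elements=256, aperture_radius=30.0):
          """Element (x, y) positions, in wavelengths, on a density-tapered Fermat spiral.
          Angles advance by the golden angle; radii are chosen by inverse-CDF mapping so the
          element density across the aperture follows a Blackman window."""
          golden_angle = np.pi * (3.0 - np.sqrt(5.0))            # about 137.5 degrees
          r_grid = np.linspace(0.0, aperture_radius, 2048)
          # Radial Blackman taper (1 at the centre, 0 at the rim), weighted by ring area ~ r.
          taper = np.clip(0.42 + 0.5 * np.cos(np.pi * r_grid / aperture_radius)
                               + 0.08 * np.cos(2.0 * np.pi * r_grid / aperture_radius), 0.0, None)
          cdf = np.cumsum(taper * r_grid)
          cdf /= cdf[-1]
          quantiles = (np.arange(n_elements) + 0.5) / n_elements
          radii = np.interp(quantiles, cdf, r_grid)              # inverse-CDF radius mapping
          angles = np.arange(n_elements) * golden_angle
          return radii * np.cos(angles), radii * np.sin(angles)

      x, y = density_tapered_spiral()
      print(x.shape, round(float(np.hypot(x, y).max()), 2))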

  11. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and building of a low cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact and contactless scanners, where the latter are the most commonly used but tend to be expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a mobile horizontal laser light, which is deformed depending on the 3-dimensional surface of the solid. Using digital image processing, an analysis of the deformation detected by the camera is performed; it allows the 3D coordinates to be determined using triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool, which can be used for many applications, mainly by engineering students.
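
    The triangulation step follows directly from the camera-laser geometry; the sketch below assumes a simplified, hypothetical setup (pinhole camera at the origin, principal point at zero, laser plane through a known baseline at a known tilt) rather than the authors' calibration.

      import numpy as np

      def triangulate(u, v, focal_px=800.0, baseline_mm=100.0, tilt_deg=20.0):
          """Recover (X, Y, Z) in mm for an image point (u, v) lying on the laser stripe.
          Assumes a pinhole camera at the origin (optical axis along Z, focal length in pixels,
          principal point at u = v = 0) and a laser plane x = baseline - z * tan(tilt)."""
          tan_t = np.tan(np.radians(tilt_deg))
          Z = focal_px * baseline_mm / (u + focal_px * tan_t)   # from u = f*(baseline - Z*tan_t)/Z
          return u * Z / focal_px, v * Z / focal_px, Z

      # The laser stripe detected at two image columns: a nearer and a farther surface point.
      for u in (120.0, 60.0):
          X, Y, Z = triangulate(u, v=0.0)
          print(round(X, 1), round(Y, 1), round(Z, 1))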

  12. 3-D imaging and illustration of mouse intestinal neurovascular complex.

    PubMed

    Fu, Ya-Yuan; Peng, Shih-Jung; Lin, Hsin-Yao; Pasricha, Pankaj J; Tang, Shiue-Cheng

    2013-01-01

    Because of the dispersed nature of nerves and blood vessels, standard histology cannot provide a global and associated observation of the enteric nervous system (ENS) and vascular network. We prepared transparent mouse intestine and combined vessel painting and three-dimensional (3-D) neurohistology for joint visualization of the ENS and vasculature. Cardiac perfusion of the fluorescent wheat germ agglutinin (vessel painting) was used to label the ileal blood vessels. The pan-neuronal marker PGP9.5, sympathetic neuronal marker tyrosine hydroxylase (TH), serotonin, and glial markers S100B and GFAP were used as the immunostaining targets of neural tissues. The fluorescently labeled specimens were immersed in the optical clearing solution to improve photon penetration for 3-D confocal microscopy. Notably, we simultaneously revealed the ileal microstructure, vasculature, and innervation with micrometer-level resolution. Four examples are given: 1) the morphology of the TH-labeled sympathetic nerves: sparse in epithelium, perivascular at the submucosa, and intraganglionic at myenteric plexus; 2) distinct patterns of the extrinsic perivascular and intrinsic pericryptic innervation at the submucosal-mucosal interface; 3) different associations of serotonin cells with the mucosal neurovascular elements in the villi and crypts; and 4) the periganglionic capillary network at the myenteric plexus and its contact with glial fibers. Our 3-D imaging approach provides a useful tool to simultaneously reveal the nerves and blood vessels in a space continuum for panoramic illustration and analysis of the neurovascular complex to better understand the intestinal physiology and diseases.

  13. Effective classification of 3D image data using partitioning methods

    NASA Astrophysics Data System (ADS)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.

  14. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  15. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury to the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim to better define the relationship between third molars and the mandibular canal using Dental CT Scan, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide valuable assistance in the most complicated cases. PMID:23386934

  16. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (micro-) axial tomography is a challenging technique in microscopy which improves quantitative imaging especially in cytogenetic applications by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement of the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition depending on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
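
    A rough sketch of the matching-and-alignment step described above: the Hungarian algorithm (scipy's linear_sum_assignment) stands in for the weighted bipartite matching of object centres of gravity, and a Kabsch-style least-squares fit recovers the transformation. In practice the nominal tilt would be applied before matching; all names here are illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_and_align(ref_pts, tilt_pts):
        """Match object centroids between two tilted views and fit a rigid transform.

        ref_pts, tilt_pts : (N, 3) and (M, 3) arrays of centres of gravity.
        Returns (R, t) such that R @ p + t maps tilted-view points onto the reference view.
        """
        # weighted bipartite matching: cost = pairwise Euclidean distance
        cost = np.linalg.norm(ref_pts[:, None, :] - tilt_pts[None, :, :], axis=2)
        ref_idx, tilt_idx = linear_sum_assignment(cost)
        A, B = tilt_pts[tilt_idx], ref_pts[ref_idx]
        # least-squares rigid transform (Kabsch algorithm) on the matched pairs
        cA, cB = A.mean(axis=0), B.mean(axis=0)
        H = (A - cA).T @ (B - cB)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cB - R @ cA
        return R, t
    ```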

  17. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  18. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
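
    For orientation, a minimal single-layer Monte Carlo photon random walk in the MCML style is sketched below (uniform optical properties, Henyey-Greenstein scattering, no layer boundaries or vessel geometry); the parameter values and function name are illustrative assumptions rather than the model described in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def propagate_photon(mu_a, mu_s, g, max_steps=1000):
        """Minimal single-layer Monte Carlo photon random walk (MCML-style).

        mu_a, mu_s : absorption / scattering coefficients [1/mm]
        g          : Henyey-Greenstein anisotropy factor
        Returns the photon path as an (n, 3) array of positions [mm].
        """
        mu_t = mu_a + mu_s
        pos = np.zeros(3)
        d = np.array([0.0, 0.0, 1.0])               # launched along +z
        weight, path = 1.0, [pos.copy()]
        for _ in range(max_steps):
            s = -np.log(rng.random()) / mu_t        # free path length
            pos = pos + s * d
            path.append(pos.copy())
            weight *= mu_s / mu_t                   # photon survival (albedo)
            if weight < 1e-4:
                break
            # sample the Henyey-Greenstein scattering angle
            if g != 0.0:
                tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
                cos_t = (1 + g * g - tmp * tmp) / (2 * g)
            else:
                cos_t = 2 * rng.random() - 1
            sin_t = np.sqrt(max(0.0, 1 - cos_t * cos_t))
            phi = 2 * np.pi * rng.random()
            # rotate the propagation direction (standard MCML update)
            if abs(d[2]) > 0.999:
                d = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi),
                              np.sign(d[2]) * cos_t])
            else:
                den = np.sqrt(1 - d[2] * d[2])
                d = np.array([
                    sin_t * (d[0] * d[2] * np.cos(phi) - d[1] * np.sin(phi)) / den + d[0] * cos_t,
                    sin_t * (d[1] * d[2] * np.cos(phi) + d[0] * np.sin(phi)) / den + d[1] * cos_t,
                    -sin_t * np.cos(phi) * den + d[2] * cos_t,
                ])
        return np.array(path)
    ```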

  19. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post-etch, the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  20. 3-D Imaging of Partly Concealed Targets by Laser Radar

    DTIC Science & Technology

    2005-10-01

    A laser in the green wavelength region was used for illumination.

  1. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount & fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of

  2. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy

    PubMed Central

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-01-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows for the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination. PMID:27231617

  3. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. In this study, we have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of coordination number in 3-D pore networks. Then, the formulas are applied for five independent test samples to evaluate the reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the porous spaces with determination coefficient of about 0.85 that seems to be acceptable considering the variety of the studied samples.
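
    A simplified sketch of the 2-D part of this workflow: scikit-image's watershed splits the pore space of a binary cross-sectional image, and touching neighbours are counted as a 2-D coordination number. The marker spacing is an illustrative assumption, and the correlation formulas that map such 2-D statistics to the 3-D coordination-number distribution are not reproduced here.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    def coordination_numbers_2d(pore_mask):
        """Watershed-segment a binary 2-D pore image and count, for each pore,
        how many distinct neighbouring pores touch it (a 2-D coordination number)."""
        dist = ndi.distance_transform_edt(pore_mask)
        peaks = peak_local_max(dist, min_distance=3, labels=pore_mask.astype(int))
        markers = np.zeros(pore_mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = watershed(-dist, markers, mask=pore_mask)

        coordination = {}
        for lab in range(1, labels.max() + 1):
            grown = ndi.binary_dilation(labels == lab)       # grow by one pixel
            neighbours = np.unique(labels[grown & (labels != lab) & (labels > 0)])
            coordination[lab] = len(neighbours)
        return coordination
    ```

    The mean and standard deviation of these per-pore values are the kind of 2-D statistics that the abstract's correlations relate to the 3-D coordination-number distribution.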

  4. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    NASA Astrophysics Data System (ADS)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify the deformations and damage.

  5. Size-based emphysema cluster analysis on low attenuation area in 3D volumetric CT: comparison with pulmonary functional test

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Kim, Namkug; Lee, Sang Min; Seo, Joon Beom; Oh, Sang Young

    2015-03-01

    The aim was to quantify the low attenuation area (LAA) of emphysematous regions according to cluster size in 3D volumetric CT data of chronic obstructive pulmonary disease (COPD) patients, and to compare these indices with pulmonary function test (PFT) results. Sixty patients with COPD were scanned on multi-detector row CT scanners with 16 or more detector rows (Siemens Sensation 16 and 64) with 0.75 mm collimation. Based on the LAA masks, a length-scale analysis was performed to estimate the size of each emphysematous LAA cluster as follows. Gaussian low-pass filters with kernel sizes from 30 mm down to 1 mm, in 1 mm steps, were applied to the mask iteratively from large to small. The centroid voxels that survived each filter were selected and dilated by the kernel size, and the result was regarded as the emphysema mask for that specific size. The slopes D of the area and number of size-based LAA clusters (slopes of the semi-log plots) were analyzed and compared with PFT. PFT parameters including DLco, FEV1, and FEV1/FVC were significantly (all p-values < 0.002) correlated with the slopes (r-values: -0.73, 0.54, 0.69, respectively) and with the emphysema index (EI) (r-values: -0.84, -0.60, -0.68, respectively). In addition, D contributed independently to the regression for FEV1 and FEV1/FVC (adjusted R-squared of the regression: EI only, 0.70 and 0.45; EI and D, 0.71 and 0.51, respectively). Through this size-based LAA segmentation and analysis, we evaluated the slopes D of the area, number, and distribution of size-based LAA clusters, which may serve as independent predictors of PFT parameters.
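
    A much-simplified sketch of a size-based LAA analysis, substituting connected-component cluster sizes for the iterative Gaussian scale filtering described above; the -950 HU threshold, the binning, and the semi-log fit are illustrative assumptions rather than the paper's method.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def laa_size_slope(ct_hu, lung_mask, threshold=-950.0):
        """Simplified size-based LAA analysis: label LAA clusters, histogram their
        sizes, and fit the slope D of the semi-log size distribution.

        ct_hu     : 3-D CT volume in Hounsfield units
        lung_mask : 3-D boolean lung mask
        Returns (emphysema index, D).
        """
        laa = (ct_hu < threshold) & lung_mask
        ei = laa.sum() / lung_mask.sum()                          # emphysema index (EI)
        labels, n = ndi.label(laa)
        if n == 0:
            return ei, np.nan
        sizes = ndi.sum(laa, labels, index=np.arange(1, n + 1))   # voxels per cluster
        bins = np.logspace(0, np.log10(sizes.max() + 1), 20)
        counts, edges = np.histogram(sizes, bins=bins)
        centers = np.sqrt(edges[:-1] * edges[1:])                 # geometric bin centres
        keep = counts > 0
        D = np.polyfit(centers[keep], np.log(counts[keep]), 1)[0] # semi-log slope
        return ei, D
    ```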

  6. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration of the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging, as well as its convergence properties, was studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction, yielding a 34-54 times speed-up compared with the C++ implementation.

  7. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald Schön, Tobias Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption and risks to security personnel associated with a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is the high cost of numerical processing. But as computational power is steadily getting cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  8. Imaging Shallow Salt With 3D Refraction Migration

    NASA Astrophysics Data System (ADS)

    Vanschuyver, C. J.; Hilterman, F. J.

    2005-05-01

    In offshore West Africa, numerous salt walls are within 200 m of sea level. Because of the shallowness of these salt walls, reflections from the salt top can be difficult to map, making it impossible to build an accurate velocity model for subsequent pre-stack depth migration. An accurate definition of salt boundaries is critical to any depth model where salt is present. Unfortunately, when a salt body is very shallow, the reflection from the upper interface can be obscured due to large offsets between the source and near receivers and also due to interference from multiples and other near-surface noise events. A new method is described using 3D migration of the refraction waveforms, which is simplified because of several constraints in the model definition. The azimuth and dip of the refractor are found by imaging with Kirchhoff theory. A Kirchhoff migration is performed in which the traveltime values are adjusted to use the CMP refraction traveltime equation. The sediment and salt velocities are assumed to be known, so that once the image time is specified, the dip and azimuth of the refraction path can be found. The resulting 3D refraction migrations are in excellent depth agreement with available well control. In addition, the refraction migration time picks of deeper salt events are in agreement with time picks of the same events on the reflection migration.

  9. 3-D visualization and animation technologies in anatomical imaging.

    PubMed

    McGhee, John

    2010-02-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.

  10. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  11. 3-D Imaging and Simulation for Nephron Sparing Surgical Training.

    PubMed

    Ahmadi, Hamed; Liu, Jen-Jane

    2016-08-01

    Minimally invasive partial nephrectomy (MIPN) is now considered the procedure of choice for small renal masses, largely based on functional advantages over traditional open surgery. Lack of haptic feedback, the need for spatial understanding of tumor borders, and advanced operative techniques to minimize ischemia time or achieve zero-ischemia PN are among the factors that make MIPN a technically demanding operation with a steep learning curve for inexperienced surgeons. Surgical simulation has emerged as a useful training adjunct in residency programs to facilitate the acquisition of these complex operative skills in the setting of restricted work hours and limited operating room time and autonomy. However, the majority of available surgical simulators focus on basic surgical skills, and procedure-specific simulation is needed for optimal surgical training. Advances in 3-dimensional (3-D) imaging have also enhanced the surgeon's ability to localize tumors intraoperatively. This article focuses on recent procedure-specific simulation models for laparoscopic and robotic-assisted PN and advanced 3-D imaging techniques as part of pre-operative and, in some cases, intraoperative surgical planning.

  12. Experiments on terahertz 3D scanning microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies of THz coaxial transmission confocal microscopy, but few reports on THz dual-axis reflective confocal microscopy. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with a THz coaxial transmission confocal microscope, the microscope adopted in this paper can attain higher axial resolution at the expense of reduced lateral resolution, providing better 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper in pencil and a combined sheet-metal target with three layers were scanned. The experimental results indicate that the system can resolve the two Chinese characters "Zhong" and "Hua," as well as the three layers of the combined sheet metal. It can be expected that the microscope will be applicable to biology, medicine and other fields in the future due to its favorable 3D imaging capability.

  13. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
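
    A bare-bones sketch of the model-based estimation, assuming the respiratory-correlated deformation vector fields (DVFs) from the 4D-MRI have already been computed and flattened: PCA yields the motion modes, and the 2-D cine measurement is fitted by least squares to recover a full 3-D DVF. Array layouts and names are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def build_motion_model(dvfs, n_components=2):
        """PCA motion model from respiratory-correlated 4D-MRI deformation fields.

        dvfs : (n_phases, n_voxels * 3) array, each row a flattened 3-D DVF.
        Returns (mean, components) with components of shape (n_components, n_voxels * 3).
        """
        mean = dvfs.mean(axis=0)
        centered = dvfs - mean
        # thin SVD: rows of vt are the principal motion modes
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:n_components]

    def fit_3d_dvf(mean, components, slice_idx, slice_dvf):
        """Estimate the full 3-D DVF from a single 2-D cine measurement.

        slice_idx : indices (into the flattened DVF) that lie on the imaged slice
        slice_dvf : measured displacement values at those indices
        """
        A = components[:, slice_idx].T               # (n_slice_vox, n_components)
        b = slice_dvf - mean[slice_idx]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares mode weights
        return mean + w @ components                 # full field-of-view 3-D DVF
    ```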

  14. Quantification of cerebral ventricle volume change of preterm neonates using 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chen, Yimin; Kishimoto, Jessica; Qiu, Wu; de Ribaupierre, Sandrine; Fenster, Aaron; Chiu, Bernard

    2015-03-01

    Intraventricular hemorrhage (IVH) is a major cause of brain injury in preterm neonates. Quantitative measurement of ventricular dilation or shrinkage is important for monitoring patients and in evaluation of treatment options. 3D ultrasound (US) has been used to monitor the ventricle volume as a biomarker for ventricular dilation. However, volumetric quantification does not provide information as to where dilation occurs. The location where dilation occurs may be related to specific neurological problems later in life. For example, posterior horn enlargement, with thinning of the corpus callosum and parietal white matter fibres, could be linked to poor visuo-spatial abilities seen in hydrocephalic children. In this work, we report on the development and application of a method used to analyze local surface change of the ventricles of preterm neonates with IVH from 3D US images. The technique is evaluated using manual segmentations from 3D US images acquired in two imaging sessions. The surfaces from baseline and follow-up were registered and then matched on a point-by-point basis. The distance between each pair of corresponding points served as an estimate of local surface change of the brain ventricle at each vertex. The measurements of local surface change were then superimposed on the ventricle surface to produce the 3D local surface change map that provide information on the spatio-temporal dilation pattern of brain ventricles following IVH. This tool can be used to monitor responses to different treatment options, and may provide important information for elucidating the deficiencies a patient will have later in life.
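
    A minimal sketch of the local surface-change measurement, using nearest-neighbour distances (via a k-d tree) as a stand-in for the paper's point-by-point surface correspondence; the baseline and follow-up surfaces are assumed to be registered already, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def local_surface_change(baseline_pts, followup_pts):
        """Point-by-point surface distance between registered baseline and follow-up
        ventricle surfaces.

        baseline_pts, followup_pts : (N, 3) and (M, 3) arrays of surface vertices in mm.
        Returns, for every baseline vertex, the distance to its matched follow-up point.
        """
        tree = cKDTree(followup_pts)
        dist, _ = tree.query(baseline_pts)   # nearest-neighbour correspondence
        return dist                          # map these values onto the surface mesh
    ```

    Colour-coding these per-vertex distances on the baseline mesh gives the kind of local surface-change map described in the abstract.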

  15. Evaluation of 3D pre-treatment verification for volumetric modulated arc therapy plan in head region

    NASA Astrophysics Data System (ADS)

    Ruangchan, S.; Oonsiri, S.; Suriyapee, S.

    2016-03-01

    The development of pre-treatment QA tools has enabled three-dimensional (3D) dose verification that combines calculation software with measured planar dose distributions. This research aims to evaluate the Sun Nuclear 3DVH software against thermoluminescent dosimeter (TLD) measurements. Two VMAT patient plans (2.5 arcs) using 6 MV photons, with different PTV locations, were transferred to Rando phantom images. The PTV of the first plan was located in a homogeneous area, whereas that of the second plan was not. For treatment planning, the Rando phantom images were used for optimization and calculation, with the PTV, brain stem, lens, and TLD positions contoured. Verification plans were created and transferred to the ArcCHECK for measurement, and the 3D dose was calculated using the 3DVH software. The percent dose differences between TLD and the 3DVH software, for both the PTV and organs at risk (OAR), ranged from -2.09 to 3.87% for the first plan and from -1.39 to 6.88% for the second plan. The mean percent dose differences for the PTV were 1.62% and 3.93% for the first and second plans, respectively. In conclusion, the 3DVH software results show good agreement with TLD when the tumor is located in a homogeneous area.

  16. Hyperspectral image classification based on volumetric texture and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui

    2015-06-01

    A novel approach using volumetric texture and reduced-spectral features is presented for hyperspectral image classification. In this approach, volumetric textural features are extracted by volumetric gray-level co-occurrence matrices (VGLCM). The spectral features are extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) clustering method with deletion of the worst cluster (SKMd) for band clustering. Moreover, four feature combination schemes were designed for hyperspectral image classification using spectral and textural features. The proposed method using VGLCM is shown to outperform the 2D gray-level co-occurrence matrix (GLCM) method, and the experimental results indicate that combining spectral information with volumetric textural features leads to improved classification performance in hyperspectral imagery.

  17. Abdominal aortic aneurysm imaging with 3-D ultrasound: 3-D-based maximum diameter measurement and volume quantification.

    PubMed

    Long, A; Rouet, L; Debreuve, A; Ardon, R; Barbe, C; Becquemin, J P; Allaire, E

    2013-08-01

    The clinical reliability of 3-D ultrasound imaging (3-DUS) in quantification of abdominal aortic aneurysm (AAA) was evaluated. B-mode and 3-DUS images of AAAs were acquired for 42 patients. AAAs were segmented. A 3-D-based maximum diameter (Max3-D) and partial volume (Vol30) were defined and quantified. Comparisons between 2-D (Max2-D) and 3-D diameters and between orthogonal acquisitions were performed. Intra- and inter-observer reproducibility was evaluated. Intra- and inter-observer coefficients of repeatability (CRs) were less than 5.18 mm for Max3-D. Intra-observer and inter-observer CRs were respectively less than 6.16 and 8.71 mL for Vol30. The mean of normalized errors of Vol30 was around 7%. Correlation between Max2-D and Max3-D was 0.988 (p < 0.0001). Max3-D and Vol30 were not influenced by a probe rotation of 90°. Use of 3-DUS to quantify AAA is a new approach in clinical practice. The present study proposed and evaluated dedicated parameters. Their reproducibility makes the technique clinically reliable.

  18. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  19. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  20. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  1. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. However, running such methods on mobile devices, which would provide users with freedom of movement and instantaneous reconstruction feedback, remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
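
    A toy illustration of the voxel-block-hashing idea: 8x8x8 voxel blocks are allocated on demand in a hash map and a truncated signed distance is fused per voxel with a running weighted average. This only sketches the data structure; it is not the authors' optimised GPU/tablet implementation, and the constants are illustrative.

    ```python
    import numpy as np

    VOXELS_PER_BLOCK = 8       # 8x8x8 voxels per hashed block
    VOXEL_SIZE = 0.005         # metres
    TRUNCATION = 0.02          # TSDF truncation band, metres

    blocks = {}                # hash map: block coordinate -> (tsdf, weight) arrays

    def integrate_voxel(voxel_world, voxel_depth, surface_depth):
        """Toy TSDF update for one voxel: allocate its block on demand (the
        'hashing' step) and fuse a truncated signed distance.

        voxel_world   : (3,) voxel position in metres (world frame)
        voxel_depth   : depth of this voxel along the camera ray, metres
        surface_depth : measured depth of the surface along the same ray, metres
        """
        voxel = np.floor(np.asarray(voxel_world) / VOXEL_SIZE).astype(int)
        block_key = tuple(voxel // VOXELS_PER_BLOCK)
        if block_key not in blocks:                       # allocate block on demand
            shape = (VOXELS_PER_BLOCK,) * 3
            blocks[block_key] = (np.ones(shape), np.zeros(shape))
        tsdf, weight = blocks[block_key]
        local = tuple(voxel % VOXELS_PER_BLOCK)
        sdf = np.clip(surface_depth - voxel_depth, -TRUNCATION, TRUNCATION) / TRUNCATION
        w = weight[local]
        tsdf[local] = (tsdf[local] * w + sdf) / (w + 1.0)  # weighted running average
        weight[local] = w + 1.0
    ```

    A full system would visit the voxels along every camera ray when integrating a depth frame, and raycast the same hashed structure for rendering and camera tracking.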

  2. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lags due to a number of technical problems. Potentially, the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light before it reaches the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, the confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply any commonly used segmentation technique for a quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the

  3. Complex Resistivity 3D Imaging for Ground Reinforcement Site

    NASA Astrophysics Data System (ADS)

    Son, J.; Kim, J.; Park, S.

    2012-12-01

    The induced polarization (IP) method is used for mineral exploration and is generally classified into two categories: time-domain and frequency-domain methods. The frequency-domain IP method measures amplitude and absolute phase relative to the transmitted current, and is often called spectral induced polarization (SIP) when measurements are made over a wide band of frequencies. Our research group has been studying modeling and inversion algorithms for the complex resistivity method for several years and has recently started to apply the method to various field applications. We have completed the development of 2D/3D modeling and inversion programs and are developing another algorithm that uses the wide-band data jointly. Until now, the complex resistivity (CR) method has mainly been used for surface or tomographic surveys in mineral exploration. From this experience, we find that the resistivity section from the CR method is very similar to that of the conventional resistivity method, and interpretation of the phase section generally matches well with the geological information of the survey area. However, because most survey areas have very tough and complex terrain, 2D surveys and interpretation are generally used. In this study, we present a case study of a 3D CR survey conducted at a site where ground reinforcement had been performed to prevent subsidence. Data were acquired with the Zeta system, a complex resistivity measurement system produced by Zonge Co., using 8 frequencies from 0.125 to 16 Hz. The 2D survey was conducted along a total of 6 lines with 5 m dipole spacing and 20 electrodes; each line was 95 m long. Of the 8 frequencies, only data below 1 Hz were used, considering data quality. 3D inversion was then conducted with the 6-line data. First, 2D interpretation was performed on the acquired data and the results were compared with those of the resistivity survey. The resulting resistivity image sections of the CR and resistivity methods were very similar. Anomalies in the phase image section showed good agreement

  4. High Time Resolution Photon Counting 3D Imaging Sensors

    NASA Astrophysics Data System (ADS)

    Siegmund, O.; Ertley, C.; Vallerga, J.

    2016-09-01

    Novel sealed tube microchannel plate (MCP) detectors using next generation cross strip (XS) anode readouts and high performance electronics have been developed to provide photon counting imaging sensors for Astronomy and high time resolution 3D remote sensing. 18 mm aperture sealed tubes with MCPs and high efficiency Super-GenII or GaAs photocathodes have been implemented to access the visible/NIR regimes for ground based research, astronomical and space sensing applications. The cross strip anode readouts in combination with PXS-II high speed event processing electronics can process high single photon counting event rates at >5 MHz (~80 ns dead-time per event), and time stamp events to better than 25 ps. Furthermore, we are developing a high speed ASIC version of the electronics for low power/low mass spaceflight applications. For a GaAs tube the peak quantum efficiency has degraded from 30% (at 560 - 850 nm) to 25% over 4 years, but for Super-GenII tubes the peak quantum efficiency of 17% (peak at 550 nm) has remained unchanged for over 7 years. The Super-GenII tubes have a uniform spatial resolution of <30 μm FWHM (~1 × 10⁶ gain) and single event timing resolution of 100 ps (FWHM). The relatively low MCP gain photon counting operation also permits longer overall sensor lifetimes and high local counting rates. Using the high timing resolution, we have demonstrated 3D object imaging with laser pulse (630 nm, 45 ps jitter Pilas laser) reflections in single photon counting mode with spatial and depth sensitivity of the order of a few millimeters. A 50 mm Planacon sealed tube was also constructed, using atomic layer deposited microchannel plates which potentially offer better overall sealed tube lifetime, quantum efficiency and gain stability. This tube achieves standard bialkali quantum efficiency levels, is stable, and has been coupled to the PXS-II electronics and used to detect and image fast laser pulse signals.

  5. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as a part of the comprehensive approach of the European FP7 project TeraSCREEN, using multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution up to 5 mm is obtained. With the 16×16 MIMO system 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles where the angular resolution is obtained by a focusing elliptical mirror. With this system a high resolution 3D image can be generated with 4 frames per second, each containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system underlining the feasibility of the approach.

  6. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into three-dimensional imaging methods and systems in order to meet requirements for speed and high accuracy. In this article, we realize a fast and high quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras have the same spatial resolution, letting us use the depth maps taken by the TOF camera to compute the initial disparity. Using the depth map as a constraint on the stereo pair during stereo matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using FPGA (Altera Cyclone IV series) concurrent computing, we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach can speed up the process of stereo matching, increase matching reliability and stability, realize embedded calculation, and expand the application range.
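
    A small sketch of how a registered TOF depth map can seed the disparity search for block matching, assuming a rectified stereo pair with known focal length and baseline; the +/-4 pixel search margin is an illustrative assumption, not the paper's setting.

    ```python
    import numpy as np

    def disparity_search_range(tof_depth, focal_px, baseline_m, margin_px=4):
        """Convert a registered TOF depth map into an initial disparity map plus a
        narrow per-pixel search window for the subsequent block matching.

        tof_depth  : (H, W) depth in metres (0 where invalid)
        focal_px   : stereo focal length in pixels
        baseline_m : stereo baseline in metres
        """
        disparity = np.zeros_like(tof_depth, dtype=float)
        valid = tof_depth > 0
        disparity[valid] = focal_px * baseline_m / tof_depth[valid]   # d = f * B / Z
        d_min = np.clip(disparity - margin_px, 0.0, None)
        d_max = disparity + margin_px
        return disparity, d_min, d_max
    ```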

  7. Fast 3-d tomographic microwave imaging for breast cancer detection.

    PubMed

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

  8. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still preserving valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
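
    A generic sketch of the sparsity-regularised linearised imaging step, written as plain iterative soft-thresholding (ISTA); the forward/adjoint callables stand in for the NUFFT-based operators described above, and the step size and regularisation weight are assumptions rather than the paper's settings.

    ```python
    import numpy as np

    def ista_sparse_image(forward, adjoint, data, step, lam, n_iter=100):
        """Sparsity-regularised linearised imaging: minimise
        ||A x - y||^2 + lam * ||x||_1 by iterative soft-thresholding (ISTA).

        forward / adjoint : callables implementing A and its adjoint (e.g. NUFFT-based)
        data              : measured stepped-frequency samples y
        step              : gradient step size (<= 1 / Lipschitz constant of A^H A)
        """
        x = adjoint(data) * 0.0                   # start from zeros with the right shape
        for _ in range(n_iter):
            grad = adjoint(forward(x) - data)     # gradient of the data-fit term
            z = x - step * grad
            mag = np.abs(z)
            shrink = np.maximum(mag - step * lam, 0.0)
            # complex soft-thresholding enforces the sparsity prior
            x = np.where(mag > 0, z / np.maximum(mag, 1e-12) * shrink, 0.0)
        return x
    ```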

  9. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  10. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  11. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 × 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system 'Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  12. Volumetric texture analysis of breast lesions on contrast-enhanced magnetic resonance images.

    PubMed

    Chen, Weijie; Giger, Maryellen L; Li, Hui; Bick, Ulrich; Newstead, Gillian M

    2007-09-01

    Automated image analysis aims to extract relevant information from contrast-enhanced magnetic resonance images (CE-MRI) of the breast and improve the accuracy and consistency of image interpretation. In this work, we extend the traditional 2D gray-level co-occurrence matrix (GLCM) method to investigate a volumetric texture analysis approach and apply it for the characterization of breast MR lesions. Our database of breast MR images was obtained using a T1-weighted 3D spoiled gradient echo sequence and consists of 121 biopsy-proven lesions (77 malignant and 44 benign). A fuzzy c-means clustering (FCM) based method is employed to automatically segment 3D breast lesions on CE-MR images. For each 3D lesion, a nondirectional GLCM is then computed on the first postcontrast frame by summing 13 directional GLCMs. Texture features are extracted from the nondirectional GLCMs and the performance of each texture feature in the task of distinguishing between malignant and benign breast lesions is assessed by receiver operating characteristic (ROC) analysis. Our results show that the classification performance of volumetric texture features is significantly better than that based on 2D analysis. Our investigation of the effects of various parameters on the diagnostic accuracy provides guidance for the optimal use of the approach.
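
    The volumetric extension described above reduces to accumulating co-occurrence counts over the 13 unique nearest-neighbour directions in 3D and pooling them into a single nondirectional matrix. The following numpy sketch illustrates that construction together with a few Haralick-style features; the quantization scheme, the 16 gray levels, and the choice of features are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from itertools import product

# The 13 unique nearest-neighbour directions in 3D (one per antipodal pair).
OFFSETS_3D = [d for d in product((-1, 0, 1), repeat=3) if d > (0, 0, 0)]

def glcm_3d(volume, mask, levels=16):
    """Nondirectional GLCM of a quantized 3D lesion: sum of 13 directional GLCMs.

    Assumes nonnegative intensities and a boolean lesion mask of the same shape.
    """
    q = np.floor(volume / volume.max() * (levels - 1)).astype(int)  # crude quantization
    glcm = np.zeros((levels, levels), dtype=np.float64)
    idx = np.argwhere(mask)                      # voxel coordinates inside the lesion
    for offset in OFFSETS_3D:
        nbr = idx + offset
        inside = np.all((nbr >= 0) & (nbr < volume.shape), axis=1)
        src, dst = idx[inside], nbr[inside]
        valid = mask[tuple(dst.T)]               # count pairs only within the lesion
        a = q[tuple(src[valid].T)]
        b = q[tuple(dst[valid].T)]
        np.add.at(glcm, (a, b), 1.0)
        np.add.at(glcm, (b, a), 1.0)             # symmetric counts
    return glcm / glcm.sum()

def haralick_features(p):
    """A few classic GLCM features from a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return {"energy": float(np.sum(p ** 2)),
            "contrast": float(np.sum(p * (i - j) ** 2)),
            "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j))))}
```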

  13. Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments

    PubMed Central

    Welf, Erik S.; Driscoll, Meghan K.; Dean, Kevin M.; Schäfer, Claudia; Chu, Jun; Davidson, Michael W.; Lin, Michael Z.; Danuser, Gaudenz; Fiolka, Reto

    2016-01-01

    The microenvironment determines cell behavior, but the underlying molecular mechanisms are poorly understood because quantitative studies of cell signaling and behavior have been challenging due to insufficient spatial and/or temporal resolution and limitations on microenvironmental control. Here we introduce microenvironmental selective plane illumination microscopy (meSPIM) for imaging and quantification of intracellular signaling and submicrometer cellular structures as well as large-scale cell morphological and environmental features. We demonstrate the utility of this approach by showing that the mechanical properties of the microenvironment regulate the transition of melanoma cells from actin-driven protrusion to blebbing, and we present tools to quantify how cells manipulate individual collagen fibers. We leverage the nearly isotropic resolution of meSPIM to quantify the local concentration of actin and phosphatidylinositol 3-kinase signaling on the surfaces of cells deep within 3D collagen matrices and track the many small membrane protrusions that appear in these more physiologically relevant environments. PMID:26906741

  14. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
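
    As a point of reference for the clustering step, the standard fuzzy c-means updates alternate between recomputing cluster centers from membership-weighted voxel features and recomputing memberships from distances to those centers. The sketch below is a generic numpy implementation under assumed settings (random initialization, Euclidean distance, fixed fuzzifier m); it is not any particular one of the initialization strategies compared in the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM on feature vectors X of shape (n_voxels, n_features)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                  # memberships sum to 1 per voxel
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk) ** (2 / (m - 1))
        U_new = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1)), axis=1)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return U, centers

# A hard segmentation can then be read off as labels = U.argmax(axis=0).
```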

  15. 3D x-ray reconstruction using lightfield imaging

    NASA Astrophysics Data System (ADS)

    Saha, Sajib; Tahtali, Murat; Lambert, Andrew; Pickering, Mark R.

    2014-09-01

    Existing Computed Tomography (CT) systems require projections over a full 360° rotation. Using the principles of lightfield imaging, only 4 projections can be sufficient under ideal conditions when the object is illuminated with multiple-point X-ray sources. The concept was presented in a previous work with synthetically sampled data from a synthetic phantom. Application to real data requires precise calibration of the physical setup. The current work presents the calibration procedures along with experimental findings for the reconstruction of a physical 3D phantom consisting of simple geometric shapes. The crucial part of this process is to determine the effective distances of the X-ray paths, which are difficult or impossible to obtain by direct measurement. Instead, they are calculated by tracking the positions of fiducial markers under prescribed source and object movements. Iterative algorithms are used for the reconstruction. A customized backprojection is used to provide a better initial guess for the iterative algorithms to start with.

  16. 3D imaging of semiconductor components by discrete laminography

    NASA Astrophysics Data System (ADS)

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  17. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  18. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-06

    The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast for physiological differences such as skin tone and/or the presence of hair on the arm or wrist surface are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to our multispectral system in order to provide 3D information on the patient's arm orientation. Images of each patient's arm are captured under every possible combination of illuminants, and the optimal combination of wavelengths to maximize vein contrast for a given subject is determined using linear discriminant analysis.
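
    A minimal sketch of how linear discriminant analysis can rank wavelength combinations by vein/background separability on labeled pixels is given below. The scikit-learn estimator, the pairwise search over bands, and scoring by cross-validated accuracy are assumptions made for illustration; the study's actual feature set and scoring procedure are not reproduced here.

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def rank_wavelength_pairs(spectra, labels, wavelengths):
    """Score every pair of wavelengths by how well LDA separates vein from background pixels.

    spectra     : (n_pixels, n_wavelengths) reflectance per pixel per illuminant band
    labels      : (n_pixels,) 1 for vein pixels, 0 for surrounding tissue
    wavelengths : list of band center wavelengths, one per column of spectra
    """
    scores = {}
    for i, j in combinations(range(len(wavelengths)), 2):
        acc = cross_val_score(LinearDiscriminantAnalysis(),
                              spectra[:, [i, j]], labels, cv=5).mean()
        scores[(wavelengths[i], wavelengths[j])] = acc
    # Highest cross-validated accuracy first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```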

  19. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. For comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections roughly tripled that accuracy. This novel technique exhibited a needle-guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.

  20. Multi Length Scale Imaging of Flocculated Estuarine Sediments; Insights into their Complex 3D Structure

    NASA Astrophysics Data System (ADS)

    Wheatland, Jonathan; Bushby, Andy; Droppo, Ian; Carr, Simon; Spencer, Kate

    2015-04-01

    Suspended estuarine sediments form flocs that are compositionally complex, fragile and irregularly shaped. The fate and transport of suspended particulate matter (SPM) is determined by the size, shape, density, porosity and stability of these flocs, and prediction of SPM transport requires accurate measurements of these three-dimensional (3D) physical properties. However, the multi-scaled nature of flocs in addition to their fragility makes their characterisation in 3D problematic. Correlative microscopy is a strategy involving the spatial registration of information collected at different scales using several imaging modalities. Previously, conventional optical microscopy (COM) and transmission electron microscopy (TEM) have enabled 2-dimensional (2D) floc characterisation at the gross (> 1 µm) and sub-micron scales respectively. Whilst this has proven insightful, there remains a critical spatial and dimensional gap preventing the accurate measurement of geometric properties and an understanding of how structures at different scales are related. Within the life sciences, volumetric imaging techniques such as 3D micro-computed tomography (3D µCT) and focused ion beam scanning electron microscopy [FIB-SEM (or FIB-tomography)] have been combined to characterise materials at the centimetre to micron scale. Combining these techniques with TEM enables an advanced correlative study, allowing material properties across multiple spatial and dimensional scales to be visualised. The aims of this study are: 1) to formulate an advanced correlative imaging strategy combining 3D µCT, FIB-tomography and TEM; 2) to acquire 3D datasets; 3) to produce a model allowing their co-visualisation; 4) to interpret 3D floc structure. To reduce the chance of structural alterations during analysis, samples were first 'fixed' in 2.5% glutaraldehyde/2% formaldehyde before being embedded in Durcupan resin. Intermediate steps were implemented to improve contrast and remove pore water, achieved by the

  1. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  2. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.
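
    The registration described in the two records above optimizes mesh-vertex DVFs against an objective combining a DRR-versus-projection data term with a regularity term. The sketch below shows the general shape of such an objective under strong simplifications: a parallel-ray sum stands in for the DRR projector, the volume-warping function is left as a caller-supplied placeholder, and a Laplacian-style smoothness term stands in for the regularizer; none of these are the authors' implementation.

```python
import numpy as np

def drr(volume, axis=0):
    """Toy digitally reconstructed radiograph: a parallel-ray sum along one axis."""
    return volume.sum(axis=axis)

def objective(dvf, volume, projections, deform, neighbors, lam=0.1):
    """Data term (toy DRRs vs. measured 2D projections) + DVF regularity at mesh vertices.

    dvf         : (n_vertices, 3) displacement vectors defined at mesh vertices
    projections : up to three measured 2D projections, one per toy projection axis
    deform      : callable warping the CT volume with the vertex displacements (placeholder)
    neighbors   : list of index arrays giving each vertex's mesh neighbours
    """
    warped = deform(volume, dvf)
    data = sum(np.sum((drr(warped, axis=k) - p) ** 2)
               for k, p in enumerate(projections))
    # Laplacian-style regularizer: each displacement stays close to its neighbours' mean.
    reg = sum(np.sum((dvf[i] - dvf[nb].mean(axis=0)) ** 2)
              for i, nb in enumerate(neighbors))
    return data + lam * reg
```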

  3. In vivo volumetric fluorescence sectioning microscopy with mechanical-scan-free hybrid illumination imaging

    PubMed Central

    Lin, Chen-Yen; Lin, Wei-Hsin; Chien, Ju-Hsuan; Tsai, Jui-Chang; Luo, Yuan

    2016-01-01

    Optical sectioning microscopy in wide-field fashion has been widely used to obtain three-dimensional images of biological samples; however, it requires scanning in depth and considerable time to acquire multiple depth information of a volumetric sample. In this paper, in vivo optical sectioning microscopy with volumetric hybrid illumination, with no mechanical moving parts, is presented. The proposed system is configured such that the optical sectioning is provided by hybrid illumination using a digital micro-mirror device (DMD) for uniform and non-uniform pattern projection, while the depth of imaging planes is varied by using an electrically tunable-focus lens with invariant magnification and resolution. We present and characterize the design and implementation, and experimentally demonstrate the proposed system's capability through 3D imaging of growth cones in living Caenorhabditis elegans. PMID:27867708

  4. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
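
    For reference, the bilateral filter tuned in this study weights each neighbour by the product of a spatial Gaussian (controlled by the stencil size and spatial sigma) and a range Gaussian over intensity differences (the scaling parameter). Below is a brute-force CPU sketch in numpy for a 3D volume; the GPU memory tiling and the parameter values studied are not reproduced, and the defaults shown are placeholders.

```python
import numpy as np

def bilateral_3d(vol, radius=2, sigma_s=1.5, sigma_r=20.0):
    """Brute-force 3D bilateral filter (reference implementation, not GPU-optimized)."""
    vol = np.asarray(vol, dtype=np.float64)
    pad = np.pad(vol, radius, mode="edge")
    out = np.zeros_like(vol)
    wsum = np.zeros_like(vol)
    offsets = range(-radius, radius + 1)
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                shifted = pad[radius + dz: radius + dz + vol.shape[0],
                              radius + dy: radius + dy + vol.shape[1],
                              radius + dx: radius + dx + vol.shape[2]]
                # Spatial weight depends only on the offset, range weight on intensity.
                w_spatial = np.exp(-(dz * dz + dy * dy + dx * dx) / (2 * sigma_s ** 2))
                w_range = np.exp(-((shifted - vol) ** 2) / (2 * sigma_r ** 2))
                w = w_spatial * w_range
                out += w * shifted
                wsum += w
    return out / wsum
```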

  5. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  6. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize the 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin color, tone, texture, shape properties, and ambient lighting are crucial. To date, however, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture and color.

  7. 3D optical coherence tomography image registration for guiding cochlear implant insertion

    NASA Astrophysics Data System (ADS)

    Cheon, Gyeong-Woo; Jeong, Hyun-Woo; Chalasani, Preetham; Chien, Wade W.; Iordachita, Iulian; Taylor, Russell; Niparko, John; Kang, Jin U.

    2014-03-01

    In cochlear implant surgery, an electrode array is inserted into the cochlear canal to restore hearing to a person who is profoundly deaf or significantly hearing impaired. One critical part of the procedure is the insertion of the electrode array, which looks like a thin wire, into the cochlear canal. Although X-ray or computed tomography (CT) could be used as a reference to evaluate the pathway of the whole electrode array, there is no way to depict the intra-cochlear canal and basal turn intra-operatively to help guide insertion of the electrode array. Optical coherence tomography (OCT) is a highly effective way of visualizing the internal structures of the cochlea. A swept-source OCT (SSOCT) system with a center wavelength of 1.3 µm and 2D galvanometer mirrors was used to achieve 3-D imaging over a 7-mm depth. A graphics processing unit (GPU), OpenGL, C++, and C# were integrated for simultaneous real-time volumetric rendering. The 3D volume images taken by the OCT system were assembled and registered so that they could be used to guide a cochlear implant. We performed a feasibility study using both dry and wet temporal bones, and the results are presented.

  8. Ring array transducers for real-time 3-D imaging of an atrial septal occluder.

    PubMed

    Light, Edward D; Lindsey, Brooks D; Upchurch, Joseph A; Smith, Stephen W

    2012-08-01

    We developed new miniature ring array transducers integrated into interventional device catheters, such as those used to deploy atrial septal occluders. Each ring array consisted of 55 elements operating near 5 MHz with an interelement spacing of 0.20 mm. It was constructed on a flat piece of copper-clad polyimide and then wrapped around an 11 French O.D. catheter. We used a braided cabling technology from Tyco Electronics Corporation to connect the elements to the Volumetric Medical Imaging (VMI) real-time 3-D ultrasound scanner. Transducer performance yielded a -6 dB fractional bandwidth of 20% centered at 4.7 MHz without a matching layer vs. an average bandwidth of 60% centered at 4.4 MHz with a matching layer. Real-time 3-D rendered images of an en face view of a Gore Helex septal occluder in a water tank showed a finer texture of the device surface from the ring array with the matching layer.

  9. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along three horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency alignment with subjective assessment. PMID:25133265
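
    The pooling step reduces to a pointwise similarity of gradient magnitudes averaged over the constructed volume. A minimal numpy sketch is shown below; the stability constant and the use of central differences via np.gradient are assumptions for illustration rather than the exact implementation.

```python
import numpy as np

def gms_3d(vol_ref, vol_dist, c=170.0):
    """Average 3D gradient magnitude similarity between two stacked disparity volumes."""
    def grad_mag(v):
        gz, gy, gx = np.gradient(np.asarray(v, dtype=np.float64))
        return np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
    g1, g2 = grad_mag(vol_ref), grad_mag(vol_dist)
    gms = (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)   # pointwise similarity in (0, 1]
    return float(gms.mean())                              # quality score: mean over the volume
```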

  10. Continuous table acquisition MRI for radiotherapy treatment planning: Distortion assessment with a new extended 3D volumetric phantom

    SciTech Connect

    Walker, Amy Metcalfe, Peter; Liney, Gary; Holloway, Lois; Dowling, Jason; Rivest-Henault, David

    2015-04-15

    Purpose: Accurate geometry is required for radiotherapy treatment planning (RTP). When considering the use of magnetic resonance imaging (MRI) for RTP, geometric distortions observed in the acquired images should be considered. While scanner technology and vendor supplied correction algorithms provide some correction, large distortions are still present in images, even when considering considerably smaller scan lengths than those typically acquired with CT in conventional RTP. This study investigates MRI acquisition with a moving table compared with static scans for potential geometric benefits for RTP. Methods: A full field of view (FOV) phantom (diameter 500 mm; length 513 mm) was developed for measuring geometric distortions in MR images over volumes pertinent to RTP. The phantom consisted of layers of refined plastic within which vitamin E capsules were inserted. The phantom was scanned on CT to provide the geometric gold standard and on MRI, with differences in capsule location determining the distortion. MRI images were acquired with two techniques. For the first method, standard static table acquisitions were considered. Both 2D and 3D acquisition techniques were investigated. With the second technique, images were acquired with a moving table. The same sequence was acquired with a static table and then with table speeds of 1.1 mm/s and 2 mm/s. All of the MR images acquired were registered to the CT dataset using a deformable B-spline registration with the resulting deformation fields providing the distortion information for each acquisition. Results: MR images acquired with the moving table enabled imaging of the whole phantom length while images acquired with a static table were only able to image 50%–70% of the phantom length of 513 mm. Maximum distortion values were reduced across a larger volume when imaging with a moving table. Increased table speed resulted in a larger contribution of distortion from gradient nonlinearities in the through

  11. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  12. Hybrid Multiphoton Volumetric Functional Imaging of Large Scale Bioengineered Neuronal Networks

    PubMed Central

    Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-01-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bio-engineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes/sec of structures with mm-scale dimensions containing a network of over 1000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances. PMID:24898000

  13. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  14. 3D Seismic Imaging over a Potential Collapse Structure

    NASA Astrophysics Data System (ADS)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle East has seen a recent boom in construction, including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly for the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate the effectiveness of a suite of 3D seismic techniques in interrogating the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m2 and consisted of 550 source- and 192 receiver locations. The seismic source was an accelerated weight drop while the geophones consisted of 3-component 10 Hz velocity sensors. To date, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  15. Venus Topography in 3D: Imaging of Coronae and Chasmata

    NASA Astrophysics Data System (ADS)

    Jurdy, D. M.; Stefanick, M.; Stoddard, P. R.

    2006-12-01

    Venus' surface hosts hundreds of circular to elongate features, ranging from 60-2600 km, and averaging somewhat over 200 km, in diameter. These enigmatic structures have been classified as "coronae" and attributed to either tectono-volcanic or impact-related mechanisms. A linear to arcuate system of chasmata - rugged zones containing some of Venus' deepest troughs - extends thousands of kilometers. They have extreme relief, with elevations changing as much as 7 km over just 30 km of distance. The 54,464 km-long Venus chasmata system defined in great detail by Magellan can be fit by great circle arcs at the 89.6% level, and when corrected for the smaller size of the planet, the total length of the chasmata system measures within 2.7% of the length of Earth's spreading ridges. The relatively young Beta-Atla-Themis region (BAT), within 30° of the equator from 180-300° longitude, has the planet's strongest geoid highs and profuse volcanism. This BAT region, the intersection of three rift zones, also has a high concentration of coronae, with individual coronae closely associated with the chasmata system. The chasmata with the greatest relief on Venus show linear rifting that prevailed in the latest stage of tectonic deformation. For a three-dimensional view of Venus' surface, we spread out the Magellan topography on a flat surface using a Mercator projection to preserve shape. Next we illuminate the surface with beams at an angle of 45° from the left (or right) to simulate mid-afternoon (or mid-morning). Finally, we observe the surface with two eyes looking through orange and azure colored filters, respectively. This gives a 3D view of tectonic features in the BAT area. The 3D images clearly show coronae sharing boundaries with the chasmata. This suggests that the processes of rifting and corona-formation occur together. It seems unlikely that impact craters would create this pattern.
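
    The projection-and-filter procedure amounts to building an anaglyph from two shaded reliefs of the same topography, one lit from the left and one from the right. The sketch below uses matplotlib's LightSource hillshading to illustrate the idea; the azimuth values and the assignment of reliefs to colour channels are illustrative assumptions, not the authors' exact processing.

```python
import numpy as np
from matplotlib.colors import LightSource

def relief_anaglyph(elevation):
    """Pack two shaded reliefs (sun 45 deg above the horizon, from the left and from the
    right) into complementary colour channels for viewing through coloured filters."""
    left = LightSource(azdeg=270, altdeg=45).hillshade(elevation)    # lit from the west
    right = LightSource(azdeg=90, altdeg=45).hillshade(elevation)    # lit from the east
    rgb = np.zeros(elevation.shape + (3,))
    rgb[..., 0] = left                   # the "orange" filter eye sees the left-lit relief
    rgb[..., 1] = rgb[..., 2] = right    # the "azure" filter eye sees the right-lit relief
    return rgb
```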

  16. In vivo real-time volumetric synthetic aperture ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Bouzari, Hamed; Rasmussen, Morten F.; Brandt, Andreas H.; Stuart, Matthias B.; Nikolov, Svetoslav; Jensen, Jørgen A.

    2015-03-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological. This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° × 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 × 32 element 2-D phased array transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak-temporal-average intensity for parallel beam-forming (PB) are 0.83 and 377.5 mW/cm2, and for SA are 0.48 and 329.5 mW/cm2. A human kidney was volumetrically imaged with SA and PB techniques simultaneously. Two radiologists were consulted to evaluate the volumetric SA by means of a questionnaire on the level of detail perceivable in the beam-formed images; the comparison against PB was based on the in vivo data. The feedback from the domain experts indicates that volumetric SA images internal body structures with a better contrast resolution compared to PB at all positions in the entire imaged volume. Furthermore, the autocovariance of a homogeneous area in the in vivo SA data had a 23.5% smaller width at half of its maximum value compared to PB.

  17. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  18. [3D virtual imaging of the upper airways].

    PubMed

    Ferretti, G; Coulomb, M

    2000-04-01

    The different three-dimensional reconstructions of the upper airways that can be obtained with spiral computed tomography (CT) are presented here. The acquisition parameters needed to obtain spiral CT images that are as realistic as possible are reviewed, together with the advantages and disadvantages of the different techniques. Multislice reconstruction (MSR) produces slices in different planes of space with the high contrast of CT slices. They provide information similar to that obtained for the rare indications for thoracic MRI. Thick slice reconstructions with maximum intensity projection (MIP) or minimum intensity projection (minIP) give projection views in which the contrast can be modified by selecting the more dense (MIP) or less dense (minIP) voxels. They find their application in the exploration of the upper airways. Surface and volume external 3D reconstructions can be obtained. They give an overall view of the upper airways, similar to a bronchogram. Virtual endoscopy reproduces real endoscopic images but cannot provide information on the appearance of the mucosa or biopsy specimens. It offers possible applications for preparing, guiding and controlling interventional fibroscopy procedures.

  19. Multiframe image point matching and 3-d surface reconstruction.

    PubMed

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.

  20. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    NASA Astrophysics Data System (ADS)

    Ranjan Gartia, Manas; Hsiao, Austin; Sivaguru, Mayandi; Chen, Yi; Logan Liu, G.

    2011-09-01

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  1. Advanced 3D polarimetric flash ladar imaging through foliage

    NASA Astrophysics Data System (ADS)

    Murray, James T.; Moran, Steven E.; Roddier, Nicolas; Vercillo, Richard; Bridges, Robert; Austin, William

    2003-08-01

    High-resolution three-dimensional flash ladar system technologies are under development that enable remote identification of vehicles and armament hidden by heavy tree canopies. We have developed a sensor architecture and design that employs a 3D flash ladar receiver to address this mission. The receiver captures 128×128×>30 three-dimensional images for each laser pulse fired. The voxel size of the image is 3"×3"×4" at the target location. A novel signal-processing algorithm has been developed that achieves sub-voxel (sub-inch) range precision estimates of target locations within each pixel. Polarization discrimination is implemented to augment the target-to-foliage contrast. When employed, this method improves the range resolution of the system beyond the classical limit (based on pulsewidth and detection bandwidth). Experiments were performed with a 6 ns long transmitter pulsewidth that demonstrate 1-inch range resolution of a tank-like target that is occluded by foliage and a range precision of 0.3" for unoccluded targets.

  2. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  3. Development and Calibration of New 3-D Vector VSP Imaging Technology: Vinton Salt Dome, LA

    SciTech Connect

    Kurt J. Marfurt; Hua-Wei Zhou; E. Charlotte Sullivan

    2004-09-01

    dense well control available at Vinton Dome. To more accurately estimate velocities, we developed a 3-D turning wave tomography algorithm adapted to the VSP geometry employed at Vinton Dome. We were able to determine that there is about 10% anisotropy at Vinton Dome, with the axis of transverse isotropy perpendicular to the geologic formations deformed by the diapirism. At the time of this final report, we have not yet integrated traveltimes from the surface data into the tomographic inversion to better constrain the velocity model, nor developed an anisotropic migration algorithm to image the 3-D 3-C VSP (objectives well beyond the original scope of the project). As a secondary objective, we developed a suite of new 3-D volumetric attribute algorithms and image enhancement algorithms. We estimate volumetric dip and azimuth using a multiwindow Kuwahara approach that avoids smoothing amplitude and dip across faults.

  4. 3D-3D registration of partial capitate bones using spin-images

    NASA Astrophysics Data System (ADS)

    Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

    2013-03-01

    It is often necessary to register partial objects in medical imaging. Due to limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These `spin-images,' are then compared for two different poses of the same object to establish correspondences and subsequently determine relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage under the 4DCT technique (38.4mm, [2]), results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and one phase of 4DCT in which the whole bone was available. Spin-image registration was performed between the static and 4DCT. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degree, sub-resolution fitting < 98.4%).
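
    For context, a spin-image encodes, for one oriented vertex, a 2D histogram of the other surface points in cylindrical coordinates about the vertex normal (radial distance alpha, signed height beta), which is what makes the description pose-invariant. The sketch below computes such a descriptor for a single vertex in the spirit of the algorithm cited as [1]; the bin size and image width are placeholder values rather than those used in the study.

```python
import numpy as np

def spin_image(p, n, points, bin_size=1.0, image_w=16):
    """Spin-image for vertex p with unit normal n, accumulated over surface points.

    alpha = radial distance from the vertex's normal axis
    beta  = signed distance along the normal
    """
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    img = np.zeros((image_w, image_w))
    i = np.floor(image_w / 2.0 - beta / bin_size).astype(int)   # rows: beta (signed)
    j = np.floor(alpha / bin_size).astype(int)                  # cols: alpha (>= 0)
    keep = (i >= 0) & (i < image_w) & (j >= 0) & (j < image_w)
    np.add.at(img, (i[keep], j[keep]), 1.0)
    return img

# Correspondences between two partial surfaces can then be proposed by comparing
# spin-images (e.g., by normalized correlation) before estimating the rigid transform.
```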

  5. Frames-Based Denoising in 3D Confocal Microscopy Imaging.

    PubMed

    Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis

    2005-01-01

    In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it to denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.

  6. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information of the 3D and 4D MRA image sequences. Initially, in the 3D MRA dataset the vessel system is segmented and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets the extracted hemodynamic information is transferred to the surface model where the time points of inflow can be visualized color coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary the procedure suggested allows a dynamic visualization of the individual hemodynamic situation and better understanding during the visual evaluation of cerebral vascular diseases.
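
    A simple way to realize the curve-fitting step is to fit a time-shifted (and amplitude-scaled) copy of the patient-specific reference curve to each voxel's temporal intensity curve and take the best-fitting shift as that voxel's bolus arrival time. The sketch below does this by grid search over candidate shifts; the shift grid and the linear amplitude scaling are illustrative assumptions rather than the authors' exact fitting procedure.

```python
import numpy as np

def bolus_arrival_time(curve, reference, t, shifts):
    """Estimate a voxel's bolus arrival time by least-squares fitting of a shifted,
    amplitude-scaled copy of a patient-specific reference curve to its intensity curve."""
    best_shift, best_err = shifts[0], np.inf
    for s in shifts:
        ref_s = np.interp(t, t + s, reference)                        # reference delayed by s
        a = np.dot(ref_s, curve) / (np.dot(ref_s, ref_s) + 1e-12)     # optimal amplitude
        err = np.sum((curve - a * ref_s) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```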

  7. Membrane-mirror-based display for viewing 2D and 3D images

    NASA Astrophysics Data System (ADS)

    McKay, Stuart; Mason, Steven; Mair, Leslie S.; Waddell, Peter; Fraser, Simon M.

    1999-05-01

    Stretchable Membrane Mirrors (SMMs) have been developed at the University of Strathclyde as a cheap, lightweight and variable focal length alternative to conventional fixed- curvature glass based optics. A SMM uses a thin sheet of aluminized polyester film which is stretched over a specially shaped frame, forming an airtight cavity behind the membrane. Removal of air from that cavity causes the resulting air pressure difference to force the membrane back into a concave shape. Controlling the pressure difference acting over the membrane now controls the curvature or f/No. of the mirror. Mirrors from 0.15-m to 1.2-m in diameter have been constructed at the University of Strathclyde. The use of lenses and mirrors to project real images in space is perhaps one of the simplest forms of 3D display. When using conventional optics however, there are severe financial restrictions on what size of image forming element may be used, hence the appeal of a SMM. The mirrors have been used both as image forming elements and directional screens in volumetric, stereoscopic and large format simulator displays. It was found that the use of these specular reflecting surfaces greatly enhances the perceived image quality of the resulting magnified display.

  8. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.

  9. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large spread of Hounsfield units in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
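
    The volume-based and size-based measures reported here reduce to a few set operations on the binary masks, as in the following sketch (the mask layout and function naming are illustrative, and non-empty masks are assumed):

```python
import numpy as np

def overlap_measures(auto_mask, gold_mask):
    """Volume-based and size-based agreement measures between two binary 3D masks."""
    a, g = auto_mask.astype(bool), gold_mask.astype(bool)
    inter = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    return {
        "dice": 2.0 * inter / (a.sum() + g.sum()),
        "jaccard": inter / union,
        "true_positive_volume_fraction": inter / g.sum(),
        "relative_volume_difference": (a.sum() - g.sum()) / g.sum(),
    }
```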

  10. Comparison of bootstrap resampling methods for 3-D PET imaging.

    PubMed

    Lartizien, C; Aubin, J-B; Buvat, I

    2010-07-01

    Two groups of bootstrap methods have been proposed to estimate the statistical properties of positron emission tomography (PET) images by generating multiple statistically equivalent data sets from few data samples. The first group generates resampled data based on a parametric approach, assuming that the data from which resampling is performed follow a Poisson distribution, while the second group consists of nonparametric approaches. These methods either require a unique original sample or a series of statistically equivalent data that can be list-mode files or sinograms. Previous reports regarding these bootstrap approaches suggest differing results. This work compares the accuracy of three of these bootstrap methods for 3-D PET imaging based on simulated data. Two methods are based on a unique file, namely a list-mode based nonparametric (LMNP) method and a sinogram based parametric (SP) method. The third method is a sinogram-based nonparametric (SNP) method. Another original method (extended LMNP) was also investigated, which is an extension of the LMNP method based on deriving a resampled list-mode file by drawing events from multiple original list-mode files. Our comparison is based on the analysis of the statistical moments estimated on the repeated and resampled data. This includes the probability density function and the moments of order 1 and 2. Results show that the two methods based on multiple original data (SNP and extended LMNP) are the only methods that correctly estimate the statistical parameters. Performances of the LMNP and SP methods are variable. The simulated data used in this study were characterized by a high noise level. Differences among the tested strategies might be reduced with clinical data sets with lower noise.
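
    A generic sketch of event-level bootstrap resampling from list-mode data is shown below; the array layout and the pooling of several files are assumptions for illustration, not the exact LMNP or extended-LMNP implementation.

        import numpy as np

        def bootstrap_listmode(event_arrays, n_events=None, rng=None):
            """Draw a resampled list-mode data set by sampling events with
            replacement. Passing a single array mimics a single-file nonparametric
            bootstrap; passing several arrays pools events from multiple
            statistically equivalent acquisitions before resampling."""
            rng = np.random.default_rng(rng)
            pool = np.concatenate(event_arrays, axis=0)
            if n_events is None:
                n_events = len(event_arrays[0])          # keep the original event count
            idx = rng.integers(0, len(pool), size=n_events)
            return pool[idx]

        # Example with two synthetic list-mode files of (detector_a, detector_b) pairs
        lm1 = np.random.randint(0, 288, size=(100000, 2))
        lm2 = np.random.randint(0, 288, size=(100000, 2))
        resampled = bootstrap_listmode([lm1, lm2])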

  11. Two-Photon Laser Scanning Stereomicroscopy for Fast Volumetric Imaging

    PubMed Central

    Yang, Yanlong; Yao, Baoli; Lei, Ming; Dan, Dan; Li, Runze; Horn, Mark Van; Chen, Xun; Li, Yang; Ye, Tong

    2016-01-01

    Bessel beams have been successfully used in two-photon laser scanning fluorescence microscopy to extend the depth of field (EDF), which makes it possible to observe fast events volumetrically. However, the depth information is lost due to integration of fluorescence signals along the propagation direction. We describe the design and implementation of two-photon laser scanning stereomicroscopy, which allows viewing dynamic processes in three-dimensional (3D) space stereoscopically, in real time with shutter glasses, at a speed of 1.4 volumes per second. The depth information can be appreciated by the human visual system or recovered with correspondence algorithms in some cases. PMID:27997624

  12. Sub-Nyquist Sampling and Fourier Domain Beamforming in Volumetric Ultrasound Imaging.

    PubMed

    Burshtein, Amir; Birk, Michael; Chernyakova, Tanya; Eilam, Alon; Kempinski, Arcady; Eldar, Yonina C

    2016-05-01

    A key step in ultrasound image formation is digital beamforming of signals sampled by several transducer elements placed upon an array. High-resolution digital beamforming introduces the demand for sampling rates significantly higher than the signals' Nyquist rate, which greatly increases the volume of data that must be transmitted from the system's front end. In 3-D ultrasound imaging, 2-D transducer arrays rather than 1-D arrays are used, and more scan lines are needed. This implies that the amount of sampled data is vastly increased with respect to 2-D imaging. In this work, we show that a considerable reduction in data rate can be achieved by applying the ideas of Xampling and frequency domain beamforming (FDBF), leading to a sub-Nyquist sampling rate, which uses only a portion of the bandwidth of the ultrasound signals to reconstruct the image. We extend previous work on FDBF for 2-D ultrasound imaging to accommodate the geometry imposed by volumetric scanning and a 2-D grid of transducer elements. High image quality from low-rate samples is demonstrated by simulation of a phantom image composed of several small reflectors. Our technique is then applied to raw data of a heart ventricle phantom obtained by a commercial 3-D ultrasound system. We show that by performing 3-D beamforming in the frequency domain, sub-Nyquist sampling and low processing rate are achievable, while maintaining adequate image quality.
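
    The core idea of beamforming in the frequency domain with only part of the band can be sketched for a single scan line as follows; constant focusing delays per element are assumed, and this is only the underlying delay-as-phase-ramp principle, not the authors' Xampling/FDBF pipeline.

        import numpy as np

        def fd_beamform_line(channel_data, delays_s, fs, band=(2e6, 6e6)):
            """Form one beamformed line in the frequency domain.
            channel_data: (n_elements, n_samples) received signals.
            delays_s:     per-element focusing delays in seconds.
            Only Fourier coefficients inside `band` (Hz) are used, illustrating how
            beamforming can proceed from a reduced set of frequency samples."""
            n_el, n_samp = channel_data.shape
            freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)
            spectra = np.fft.rfft(channel_data, axis=1)
            keep = (freqs >= band[0]) & (freqs <= band[1])
            # shifting a signal by tau corresponds to multiplying its spectrum
            # by exp(-2j*pi*f*tau), so the delays become per-element phase ramps
            phase = np.exp(-2j * np.pi * np.outer(delays_s, freqs[keep]))
            beam_spectrum = np.zeros_like(freqs, dtype=complex)
            beam_spectrum[keep] = (spectra[:, keep] * phase).sum(axis=0)
            return np.fft.irfft(beam_spectrum, n=n_samp)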

  13. A method for generating unfolded views of the stomach based on volumetric image deformation

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Oka, Hiroki; Kitasaka, Takayuki; Suenaga, Yasuhito

    2005-04-01

    This paper presents a method for virtually generating unfolded views of the stomach using volumetric image deformation. When we observe an organ with a large cavity in it, such as the stomach or the colon, by using a virtual endoscopy system, many changes of viewpoint and view direction are required. If virtually unfolded views of a target organ could be generated, doctors could easily diagnose the organ's inner walls from only one or a few views. In the proposed method, we extract a stomach wall region from a 3-D abdominal CT image, and the obtained region is shrunk. For every voxel of the shrunken image, we allocate a hexahedron. In the deformation process, nodes and springs are allocated on the vertices, edges, and diagonals of each hexahedron. Neighboring hexahedrons share nodes and springs, except for the hexahedrons on the cutting line that a user specifies. The hexahedrons are deformed by adding, to the nodes on the cutting line, forces that direct those nodes toward the stretching plane. The deformation is computed iteratively. By using the geometrical relations between the hexahedrons before and after deformation, a volumetric image in which the stomach region is unfolded can be constructed. Finally, the unfolded views are obtained by visualizing the reconstructed volume. We applied the proposed method to eleven cases of 3-D abdominal CT images. The results show that the proposed method can accurately reproduce folds and lesions on the stomach.
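
    A simplified sketch of one relaxation step for such a node-spring system is given below; the explicit update, the stiffness values and the pull toward a plane z = plane_z are assumptions standing in for the paper's hexahedral deformation scheme, not its actual implementation.

        import numpy as np

        def relax_step(positions, springs, rest_lengths, cut_nodes, plane_z,
                       k_spring=1.0, k_plane=0.5, dt=0.1):
            """One explicit relaxation step of a node-spring mesh.
            positions:    (n_nodes, 3) current node coordinates.
            springs:      (n_springs, 2) node index pairs.
            rest_lengths: (n_springs,) natural spring lengths.
            cut_nodes:    indices of nodes on the user-specified cutting line, which
                          are attracted toward the unfolding plane z = plane_z."""
            forces = np.zeros_like(positions)
            vec = positions[springs[:, 1]] - positions[springs[:, 0]]
            length = np.linalg.norm(vec, axis=1, keepdims=True)
            # Hooke spring force along each edge (pulls stretched springs together)
            f = k_spring * (length - rest_lengths[:, None]) * vec / np.maximum(length, 1e-9)
            np.add.at(forces, springs[:, 0], f)
            np.add.at(forces, springs[:, 1], -f)
            # pull cutting-line nodes toward the stretching plane
            forces[cut_nodes, 2] += k_plane * (plane_z - positions[cut_nodes, 2])
            return positions + dt * forces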

  14. Statistical Inverse Ray Tracing for Image-Based 3D Modeling.

    PubMed

    Liu, Shubao; Cooper, David B

    2014-10-01

    This paper proposes a new formulation and solution to image-based 3D modeling (aka "multi-view stereo") based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately through optimizing a physically sound image generation model based on volumetric ray tracing. Together with geometric priors, these are combined into a Bayesian formulation known as a Markov random field (MRF) model. This MRF model is different from typical MRFs used in image analysis in the sense that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with the large clique size, an algorithm with linear computational complexity is developed by exploiting, via dynamic programming, the recursive chain structure of the ray clique. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets.
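
    One common way to write a ray-wise image generation model under per-voxel occupancy probabilities is shown below; the variable names are illustrative, and the sketch omits the MRF priors and the dynamic-programming inference described in the record.

        import numpy as np

        def expected_ray_intensity(occupancy, color, background=0.0):
            """Expected intensity of a pixel whose ray passes through voxels ordered
            front to back. occupancy[i] is the probability that voxel i is occupied,
            color[i] its appearance if occupied. The first occupied voxel along the
            ray determines the pixel, so each voxel contributes weighted by the
            probability that all voxels in front of it are empty (its visibility)."""
            occupancy = np.asarray(occupancy, float)
            color = np.asarray(color, float)
            visibility = np.concatenate(([1.0], np.cumprod(1.0 - occupancy)[:-1]))
            p_hit = visibility * occupancy
            p_background = np.prod(1.0 - occupancy)
            return float(np.sum(p_hit * color) + p_background * background)

        # Example: three voxels along a ray, front to back
        print(expected_ray_intensity([0.2, 0.9, 0.5], [10.0, 50.0, 80.0]))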

  15. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy, with a distance error of 0.53+/-0.30 mm.

  16. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  17. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  18. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  19. Single-view volumetric PIV via high-resolution scanning, isotropic voxel restructuring and 3D least-squares matching (3D-LSM)

    NASA Astrophysics Data System (ADS)

    Brücker, C.; Hess, D.; Kitzhofer, J.

    2013-02-01

    Scanning PIV as introduced by Brücker (1995 Exp. Fluids 19 255-63, 1996a Appl. Sci. Res. 56 157-79) has been successfully applied in the last 20 years to different flow problems where the frame rate was sufficient to ensure a 'frozen' field condition. However, the limited number of parallel planes typically leads to under-sampling in the scan (depth) direction; the spatial resolution in depth is therefore typically considerably lower than the spatial resolution in the plane of the laser sheet (depth resolution = scan shift Δz ≫ pixel unit in object space). In addition, a partial volume averaging effect due to the thickness of the light sheet must be taken into account. Herein, the method is further developed using high-resolution scanning in combination with a Gaussian regression technique to achieve an isotropic representation of the tracer particles in a voxel-based volume reconstruction with cuboidal voxels. This eliminates the partial volume averaging effect due to light sheet thickness and leads to comparable spatial resolution of the particle field reconstructions in the x-, y- and z-axes. In addition, advantage is taken of voxel-based processing, with estimation of translation, rotation and shear/strain, by using a 3D least-squares matching method well suited for reconstruction of grey-level pattern fields. The method is discussed in this paper and used to investigate the ring vortex instability at Re = 2500 within a measurement volume of roughly 75 × 75 × 50 mm3 with a spatial resolution of 100 µm/voxel (750 × 750 × 500 voxel elements). The volume has been scanned with 100 light sheets at scan rates of 10 kHz. The results show the growth of the Tsai-Widnall azimuthal instabilities accompanied by a precession of the axis of the vortex ring. Prior to breakdown, secondary instabilities evolve along the core with streamwise-oriented striations. The front stagnation point's streamwise distance to the core starts to decrease while
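
    The Gaussian-regression idea for recovering sub-scan-shift particle depth can be illustrated with a per-particle 1D fit along the scan direction; the parameter names and the example spacing below are assumptions, not the authors' full voxel-restructuring pipeline.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(z, amp, z0, sigma, offset):
            return amp * np.exp(-0.5 * ((z - z0) / sigma) ** 2) + offset

        def fit_depth_profile(z_positions, intensities):
            """Fit a Gaussian to the intensity of one particle image across the
            stack of scanned light-sheet positions, returning its sub-sheet depth
            z0 and apparent width sigma."""
            p0 = [intensities.max() - intensities.min(),
                  z_positions[np.argmax(intensities)],
                  (z_positions[1] - z_positions[0]) * 2.0,
                  intensities.min()]
            popt, _ = curve_fit(gaussian, z_positions, intensities, p0=p0)
            amp, z0, sigma, offset = popt
            return z0, abs(sigma)

        # Example: a particle sampled by 100 sheets spaced 0.5 mm apart
        z = np.arange(100) * 0.5
        profile = gaussian(z, 200.0, 23.7, 1.2, 5.0) + np.random.normal(0, 2, z.size)
        print(fit_depth_profile(z, profile))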

  20. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying the pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by a surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was developed. This educational tool, which includes horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

  1. Imaging the Juan de Fuca subduction plate using 3D Kirchhoff Prestack Depth Migration

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Bodin, T.; Allen, R. M.; Tauzin, B.

    2014-12-01

    We propose a new receiver function migration method to image the subducting plate in the western US that utilizes the USArray and regional network data. While the well-developed CCP (common conversion point) poststack migration is commonly used for such imaging, our method applies a 3D prestack depth migration approach. The traditional CCP and post-stack depth mapping approaches implement the ray tracing and moveout correction for the incoming teleseismic plane wave based on a 1D earth reference model and the assumption of horizontal discontinuities. Although this works well in mapping the reflection position of relatively flat discontinuities (such as the Moho or the LAB), CCP is known to give poor results in the presence of lateral volumetric velocity variations and dipping layers. Instead of making the flat-layer assumption and a 1D moveout correction, seismic rays are traced in a 3D tomographic model with the Fast Marching Method. With the travel time information stored, our Kirchhoff migration is done by distributing the amplitude of the receiver function at a given time over all possible conversion points (i.e. along a semi-ellipse) on the output migrated depth section. The migrated reflectors will appear where the semi-ellipses constructively interfere, whereas destructive interference will cancel out noise. Synthetic tests show that in the case of a horizontal discontinuity, the prestack Kirchhoff migration gives similar results to CCP, but without spurious multiples, as this energy is stacked destructively and cancels out. For 45-degree and 60-degree dipping discontinuities, it also performs better in terms of imaging at the right boundary and dip angle. This is especially useful in the western US case, beneath which the Juan de Fuca plate has subducted to ~450 km with a dipping angle that may exceed 50 degrees. While the traditional CCP method will underestimate the dipping angle, our proposed imaging method will provide an accurate 3D subducting plate image without
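
    A heavily simplified, constant-velocity sketch of the diffraction-stack (Kirchhoff-style) spreading step is given below; the actual method traces rays through a 3D tomographic model with the Fast Marching Method, so the straight-ray travel-time line here is only a placeholder assumption.

        import numpy as np

        def kirchhoff_migrate(rf_traces, trace_times, station_x, image_x, image_z,
                              vp=6.0, vs=3.5):
            """Diffraction-stack migration of receiver functions, assuming a
            constant-velocity medium for clarity. Each trace's amplitude at the
            predicted P-to-S delay time of a candidate image point is spread onto
            that point; stacking over stations lets true conversion points
            interfere constructively while noise cancels."""
            image_z = np.asarray(image_z, float)
            image = np.zeros((len(image_z), len(image_x)))
            dt = trace_times[1] - trace_times[0]          # assumes time axis starts at zero delay
            for trace, sx in zip(rf_traces, station_x):
                for ix, x in enumerate(image_x):
                    # straight-ray distance from each candidate depth point to the station
                    dist = np.sqrt((x - sx) ** 2 + image_z ** 2)
                    delay = dist / vs - image_z / vp      # simplified Ps-minus-P delay
                    idx = np.clip(np.round(delay / dt).astype(int), 0, len(trace) - 1)
                    image[:, ix] += trace[idx]
            return image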

  2. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  3. Rapidly-steered single-element ultrasound for real-time volumetric imaging and guidance

    NASA Astrophysics Data System (ADS)

    Stauber, Mark; Western, Craig; Solek, Roman; Salisbury, Kenneth; Hristov, Dmitre; Schlosser, Jeffrey

    2016-03-01

    Volumetric ultrasound (US) imaging has the potential to provide real-time anatomical imaging with high soft-tissue contrast in a variety of diagnostic and therapeutic guidance applications. However, existing volumetric US machines utilize "wobbling" linear phased array or matrix phased array transducers which are costly to manufacture and necessitate bulky external processing units. To drastically reduce cost, improve portability, and reduce footprint, we propose a rapidly-steered single-element volumetric US imaging system. In this paper we explore the feasibility of this system with a proof-of-concept single-element volumetric US imaging device. The device uses a multi-directional raster-scan technique to generate a series of two-dimensional (2D) slices that were reconstructed into three-dimensional (3D) volumes. At 15 cm depth, 90° lateral field of view (FOV), and 20° elevation FOV, the device produced 20-slice volumes at a rate of 0.8 Hz. Imaging performance was evaluated using an US phantom. Spatial resolution was 2.0 mm, 4.7 mm, and 5.0 mm in the axial, lateral, and elevational directions at 7.5 cm. Relative motion of phantom targets was automatically tracked within US volumes with a mean error of -0.3+/-0.3 mm, -0.3+/-0.3 mm, and -0.1+/-0.5 mm in the axial, lateral, and elevational directions, respectively. The device exhibited a mean spatial distortion error of 0.3+/-0.9 mm, 0.4+/-0.7 mm, and -0.3+/-1.9 mm in the axial, lateral, and elevational directions, respectively. With a production cost near $1000, the performance characteristics of the proposed system make it an ideal candidate for diagnostic and image-guided therapy applications where form factor and low cost are paramount.

  4. Reducing uncertainties in volumetric image based deformable organ registration.

    PubMed

    Liang, J; Yan, D

    2003-08-01

    Applying volumetric image feedback in radiotherapy requires image-based deformable organ registration. The foundation of this registration is the ability to track subvolume displacement in organs of interest. Subvolume displacement can be calculated by applying a biomechanics model and the finite element method to human organs manifested on multiple volumetric images. The calculation accuracy, however, is highly dependent on the determination of the corresponding organ boundary points. Lacking sufficient information for such determination, uncertainties are inevitable, thus diminishing the registration accuracy. In this paper, a method of consumed-energy minimization was developed to reduce these uncertainties. Starting from an initial selection of organ boundary point correspondence on volumetric image sets, the subvolume displacement and stress distribution of the whole organ are calculated and the energy consumed due to the subvolume displacements is computed accordingly. The corresponding positions of the initially selected boundary points are then iteratively optimized to minimize the consumed energy under geometry and stress constraints. In this study, a rectal wall delineated from a patient CT image was artificially deformed using a computer simulation and utilized to test the optimization. Subvolume displacements calculated based on the optimized boundary point correspondence were compared to the true displacements, and the calculation accuracy was thereby evaluated. Results demonstrate that a significant improvement in the accuracy of the deformable organ registration can be achieved by applying the consumed-energy minimization in the organ deformation calculation.

  5. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated for validation of its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended for surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  6. Extended Kalman filtering for continuous volumetric MR-temperature imaging.

    PubMed

    Denis de Senneville, Baudouin; Roujol, Sébastien; Hey, Silke; Moonen, Chrit; Ries, Mario

    2013-04-01

    Real-time magnetic resonance (MR) thermometry has evolved into the method of choice for the guidance of high-intensity focused ultrasound (HIFU) interventions. For this role, MR-thermometry should preferably have a high temporal and spatial resolution and allow observing the temperature over the entire targeted area and its vicinity with a high accuracy. In addition, the precision of real-time MR-thermometry for therapy guidance is generally limited by the available signal-to-noise ratio (SNR) and the influence of physiological noise. MR-guided HIFU would benefit from large-coverage volumetric temperature maps, including characterization of volumetric heating trajectories as well as near- and far-field heating. In this paper, continuous volumetric MR-temperature monitoring was obtained as follows. The targeted area was continuously scanned during the heating process by a multi-slice sequence. Measured data and a priori knowledge of the 3-D data, derived from a forecast based on a physical model, were combined using an extended Kalman filter (EKF). The proposed reconstruction improved the temperature measurement resolution and precision while maintaining guaranteed output accuracy. The method was evaluated experimentally ex vivo on a phantom, and in vivo on a porcine kidney, using HIFU heating. In the in vivo experiment, it allowed reconstruction, from a spatio-temporally under-sampled data set (with an update rate of 1.143 s for each voxel), of a 3-D dataset covering a field of view of 142.5×285×54 mm(3) with a voxel size of 3×3×6 mm(3) and a temporal resolution of 0.127 s. The method also provided noise reduction, while having a minimal impact on accuracy and latency.
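
    The predict/update cycle behind such filtering can be sketched per voxel as a scalar Kalman step; the EKF in the record operates on the full 3D field with a physical heating model and an interleaved multi-slice acquisition, so all symbols here are illustrative.

        def kalman_temperature_update(T_prev, P_prev, T_forecast_delta, Q, T_measured, R):
            """One scalar Kalman step for a single voxel's temperature.
            T_prev, P_prev:   previous estimate and its variance.
            T_forecast_delta: temperature change predicted by the physical model
                              (e.g., a heat-diffusion forecast) over one time step.
            Q, R:             model and measurement noise variances.
            T_measured:       MR-thermometry reading, or None when this voxel was
                              not covered by the current slice."""
            # predict
            T_pred = T_prev + T_forecast_delta
            P_pred = P_prev + Q
            if T_measured is None:
                return T_pred, P_pred          # propagate the model between acquisitions
            # update
            K = P_pred / (P_pred + R)          # Kalman gain
            T_new = T_pred + K * (T_measured - T_pred)
            P_new = (1.0 - K) * P_pred
            return T_new, P_new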

  7. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

    Integral Imaging is a technique that has the capability of providing not only the spatial, but also the angular information of three-dimensional (3D) scenes. Some important applications are 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes into account the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure that depends directly on the axial position of the object. Considering different periods of the IRF, we recover the depth information of the 3D scene by deconvolution. An advantage of our method is that it is possible to obtain nonconventional reconstructions by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
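
    A minimal sketch of depth-selective reconstruction by 2D deconvolution with a synthetic, periodic impulse response is shown below; the Wiener regularization and the toy grid-of-Gaussians IRF are assumptions for illustration, not the authors' exact IRF model.

        import numpy as np

        def wiener_deconvolve(integral_image, synthetic_irf, noise_to_signal=1e-2):
            """2D Wiener deconvolution of an integral image with a synthetic impulse
            response whose period encodes one candidate object depth. Repeating this
            with IRFs of different periods yields a focal stack of reconstructions."""
            H = np.fft.fft2(synthetic_irf, s=integral_image.shape)
            G = np.fft.fft2(integral_image)
            W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
            return np.real(np.fft.ifft2(W * G))

        def periodic_irf(shape, period, spot_sigma=1.0):
            """Toy synthetic IRF: a 2D grid of Gaussian spots with the given period
            (in pixels), standing in for the depth-dependent periodic IRF of an
            integral-imaging system."""
            y, x = np.indices(shape)
            yy = (y % period) - period / 2.0
            xx = (x % period) - period / 2.0
            return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * spot_sigma ** 2))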

  8. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    DTIC Science & Technology

    2008-01-01

    [Abstract not available; the record's description field contains only fragments of the report's publication list: O. Dandekar, C. Castro-Pareja, and R. Shekhar, "FPGA-based real-time 3D image..." (Circuits and Systems, vol. 1 (2), 2007, pp. 116-127); "How low can we go?," presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505; C. R. Castro-Pareja, O. Dandekar, and R...; Venugopal, C. R. Castro-Pareja, and O. Dandekar, "An FPGA-based 3D image processor with median and convolution filters for real-time applications."]

  9. Robust Reconstruction and Generalized Dual Hahn Moments Invariants Extraction for 3D Images

    NASA Astrophysics Data System (ADS)

    Mesbah, Abderrahim; Zouhri, Amal; El Mallahi, Mostafa; Zenkouar, Khalid; Qjidaa, Hassan

    2017-03-01

    In this paper, we introduce a new set of 3D weighted dual Hahn moments which are orthogonal on a non-uniform lattice and whose polynomials are numerically stable under scaling, consequently producing a set of weighted orthonormal polynomials. The dual Hahn is the general case of Tchebichef and Krawtchouk, and the orthogonality of dual Hahn moments eliminates the numerical approximations. The computational aspects and symmetry property of 3D weighted dual Hahn moments are discussed in detail. To address their lack of invariance for large 3D images, which causes overflow issues, a generalized version of these moments, denoted 3D generalized weighted dual Hahn moment invariants, is presented, expressed as linear combinations of regular geometric moments. For 3D pattern recognition, a generalized expression of 3D weighted dual Hahn moment invariants under translation, scaling and rotation transformations has been proposed, providing a new set of 3D-GWDHMIs. In experimental studies, the local and global capability of the 3D-WDHMs for reconstructing noise-free and noisy 3D images has been compared with that of other orthogonal moments, such as 3D Tchebichef and 3D Krawtchouk moments, using the Princeton Shape Benchmark database. On pattern recognition using the 3D-GWDHMIs as 3D object descriptors, the experimental results confirm that the proposed algorithm is more robust than other orthogonal moments for pattern classification of 3D images with and without noise.

  10. Potential Cost Savings for Use of 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    DTIC Science & Technology

    2013-12-04

    [Abstract not available; the record's description field contains only garbled cover-page text repeating the report title, "Potential Cost Savings for Use of 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization," and the dates covered (00-00-2013 to 00-00-2013).]

  11. Swept confocally-aligned planar excitation (SCAPE) microscopy for high speed volumetric imaging of behaving organisms

    PubMed Central

    Bouchard, Matthew B.; Voleti, Venkatakaushik; Mendes, César S.; Lacefield, Clay; Grueber, Wesley B.; Mann, Richard S.; Bruno, Randy M.; Hillman, Elizabeth M. C.

    2014-01-01

    We report a new 3D microscopy technique that allows volumetric imaging of living samples at ultra-high speeds: Swept, confocally-aligned planar excitation (SCAPE) microscopy. While confocal and two-photon microscopy have revolutionized biomedical research, current implementations are costly, complex and limited in their ability to image 3D volumes at high speeds. Light-sheet microscopy techniques using two-objective, orthogonal illumination and detection require a highly constrained sample geometry, and either physical sample translation or complex synchronization of illumination and detection planes. In contrast, SCAPE microscopy acquires images using an angled, swept light-sheet in a single-objective, en-face geometry. Unique confocal descanning and image rotation optics map this moving plane onto a stationary high-speed camera, permitting completely translationless 3D imaging of intact samples at rates exceeding 20 volumes per second. We demonstrate SCAPE microscopy by imaging spontaneous neuronal firing in the intact brain of awake behaving mice, as well as freely moving transgenic Drosophila larvae. PMID:25663846

  12. Optimization of element length for imaging small volumetric reflectors with linear ultrasonic arrays

    NASA Astrophysics Data System (ADS)

    Barber, T. S.; Wilcox, P. D.; Nixon, A. D.

    2016-02-01

    A 3D ultrasonic simulation study is presented, aimed at understanding the effect of element length for imaging small volumetric flaws with linear arrays in ultrasonically noisy materials. The geometry of a linear array can be described by the width, pitch and total number of the elements, along with the element length perpendicular to the imaging plane. This paper is concerned with the latter parameter, which tends to be ignored in array optimization studies and is often chosen arbitrarily for industrial array inspections. A 3D analytical model based on imaging a point target is described, validated and used to calculate relative Signal-to-Noise Ratio (SNR) as a function of element length. SNR is found to be highly sensitive to element length, with a 12 dB variation observed over the length range investigated. It is then demonstrated that the optimal length can be predicted directly from the Point Spread Function (PSF) of the imaging system, as well as from the natural focal point of the array element obtained from 2D beam profiles perpendicular to the imaging plane. This result suggests that the optimal length for any imaging position can be predicted without the need for a full 3D model and is independent of element pitch and the number of elements. Array element design guidelines are then described with respect to wavelength, and extensions of these results are discussed for application to realistically sized defects and coarse-grained materials.

  13. Swept confocally-aligned planar excitation (SCAPE) microscopy for high speed volumetric imaging of behaving organisms.

    PubMed

    Bouchard, Matthew B; Voleti, Venkatakaushik; Mendes, César S; Lacefield, Clay; Grueber, Wesley B; Mann, Richard S; Bruno, Randy M; Hillman, Elizabeth M C

    2015-02-01

    We report a new 3D microscopy technique that allows volumetric imaging of living samples at ultra-high speeds: Swept, confocally-aligned planar excitation (SCAPE) microscopy. While confocal and two-photon microscopy have revolutionized biomedical research, current implementations are costly, complex and limited in their ability to image 3D volumes at high speeds. Light-sheet microscopy techniques using two-objective, orthogonal illumination and detection require a highly constrained sample geometry, and either physical sample translation or complex synchronization of illumination and detection planes. In contrast, SCAPE microscopy acquires images using an angled, swept light-sheet in a single-objective, en-face geometry. Unique confocal descanning and image rotation optics map this moving plane onto a stationary high-speed camera, permitting completely translationless 3D imaging of intact samples at rates exceeding 20 volumes per second. We demonstrate SCAPE microscopy by imaging spontaneous neuronal firing in the intact brain of awake behaving mice, as well as freely moving transgenic Drosophila larvae.

  14. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right halves of the display panel are captured from two different 3D scenes, respectively. The light emitted from the two kinds of EIs is modulated by the left and right halves of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  15. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  16. Imaging 3D strain field monitoring during hydraulic fracturing processes

    NASA Astrophysics Data System (ADS)

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in the concrete were used to monitor the 3D strain field build-up under external hydraulic pressures. High-spatial-resolution strain fields were interrogated via in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optic sensor scheme presented in this paper provides scientists and engineers a unique laboratory tool to understand hydraulic fracturing processes in various rock formations and their impact on the environment.

  17. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice.

  18. Obscuring surface anatomy in volumetric imaging data.

    PubMed

    Milchenko, Mikhail; Marcus, Daniel

    2013-01-01

    Identifying or sensitive anatomical features in MR and CT images used in research raise patient privacy concerns when such data are shared. In order to protect human subject privacy, we developed a method of anatomical surface modification and investigated the effects of such modification on image statistics and common neuroimaging processing tools. Common approaches to obscuring facial features typically remove large portions of the voxels. The approach described here focuses on blurring the anatomical surface instead, to avoid impinging on areas of interest and hard edges that can confuse processing tools. The algorithm proceeds by extracting a thin boundary layer containing surface anatomy from a region of interest. This layer is then "stretched" and "flattened" to fit into a thin "box" volume. After smoothing along a plane roughly parallel to the anatomical surface, this volume is transformed back onto the boundary layer of the original data. The above method, named normalized anterior filtering, was coded in MATLAB and applied to a number of high-resolution MR and CT scans. To test its effect on automated tools, we compared the output of selected common skull stripping and MR gain field correction methods used on unmodified and obscured data. With this paper, we hope to improve the understanding of the effect of surface deformation approaches on the quality of de-identified data and to provide a useful de-identification tool for MR and CT acquisitions.
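
    A simplified stand-in for surface-only smoothing is sketched below; unlike the normalized anterior filtering described above, it does not flatten the boundary layer but simply applies an isotropic blur restricted to a thin outer shell of the head mask (mask, shell thickness and sigma are assumptions).

        import numpy as np
        from scipy import ndimage

        def blur_surface_shell(volume, head_mask, shell_voxels=5, sigma=3.0):
            """Smooth only a thin shell of voxels at the outer boundary of the head
            mask, leaving interior anatomy untouched."""
            head_mask = np.asarray(head_mask, bool)
            eroded = ndimage.binary_erosion(head_mask, iterations=shell_voxels)
            shell = head_mask & ~eroded
            smoothed = ndimage.gaussian_filter(volume.astype(float), sigma=sigma)
            out = volume.astype(float).copy()
            out[shell] = smoothed[shell]
            return out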

  19. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.

  20. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on a fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
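
    The generic phase-recovery step behind fringe projection profilometry can be sketched with the standard N-step phase-shifting formula; equal shifts of 2*pi/N with N >= 3 are assumed, and the record's optimum three-fringe-number unwrapping is not shown.

        import numpy as np

        def phase_from_fringes(images):
            """Wrapped phase from N phase-shifted sinusoidal fringe images with
            equal shifts of 2*pi/N (standard N-step phase-shifting formula,
            requires N >= 3)."""
            images = np.asarray(images, dtype=float)          # shape (N, H, W)
            n = images.shape[0]
            k = np.arange(n).reshape(-1, 1, 1)
            num = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)
            den = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)
            return np.arctan2(-num, den)       # wrapped phase in (-pi, pi]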

  1. Feasibility of Multi-Plane-Transmit Beamforming for Real-time Volumetric Cardiac Imaging: A Simulation Study.

    PubMed

    Chen, Yinran; Tong, Ling; Ortega, Alejandra; Luo, Jianwen; D'hooge, Jan

    2017-01-10

    Today's three-dimensional (3-D) cardiac ultrasound imaging systems suffer from relatively low spatial and temporal resolution, limiting their applicability in daily clinical practice. To address this problem, 3-D diverging wave imaging with spatial coherent compounding (DWC) as well as 3-D multi-line-transmit (MLT) imaging have recently been proposed. Currently, the former improves the temporal resolution significantly at the expense of image quality and the risk of introducing motion artifacts whereas the latter only provides a moderate gain in volume rate but mostly preserves quality. In this study, a new technique for real-time volumetric cardiac imaging is proposed by combining the strengths of both approaches. Hereto, multiple planar (i.e., 2-D) diverging waves are simultaneously transmitted in order to scan the 3-D volume, i.e., multi-plane-transmit (MPT) beamforming. The performance of a 3MPT imaging system was contrasted to that of a 3-D DWC system and that of a 3-D MLT system by computer simulations during both static and moving conditions of the target structures while operating at similar volume rate. It was demonstrated that for stationary targets, the 3MPT imaging system was competitive with both the 3-D DWC and 3-D MLT systems in terms of spatial resolution and side lobe levels (i.e., image quality). However, for moving targets, the image quality quickly deteriorated for the 3-D DWC systems while it remained stable for the 3MPT system while operating at twice the volume rate of the 3D-MLT system. The proposed MPT beamforming approach was thus demonstrated to be feasible and competitive to state-of-the-art methodologies.

  2. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding treatment strategies without under- or over-treatment. Our scheme involves analyzing CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting the patient's recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize time interval changes of pulmonary nodules.
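
    As an illustration of the histogram-modeling step, a variational Bayesian Gaussian mixture can be fitted to the CT values inside a nodule with scikit-learn; the feature set and the mapping to a recurrence-risk score in the record are not reproduced here, and all names are illustrative.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        def fit_ct_value_mixture(nodule_ct_values, max_components=5, random_state=0):
            """Fit a variational Bayesian Gaussian mixture to the CT values inside a
            segmented nodule; the fitted weights, means and variances summarize the
            volumetric distribution of CT values as image-derived features."""
            x = np.asarray(nodule_ct_values, dtype=float).reshape(-1, 1)
            model = BayesianGaussianMixture(
                n_components=max_components,
                weight_concentration_prior_type="dirichlet_process",
                random_state=random_state).fit(x)
            return model.weights_, model.means_.ravel(), model.covariances_.ravel()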

  3. Lensfree diffractive tomography for the imaging of 3D cell cultures

    PubMed Central

    Momey, F.; Berdeu, A.; Bordy, T.; Dinten, J.-M.; Marcel, F. Kermarrec; Picollet-D’hahan, N.; Gidrol, X.; Allier, C.

    2016-01-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600

  4. High sensitive volumetric imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography

    NASA Astrophysics Data System (ADS)

    Zhi, Zhongwei; Jung, Yeongri; Jia, Yali; An, Lin; Wang, Ruikang K.

    2011-03-01

    We present a non-invasive, label-free imaging technique called ultrahigh-sensitive optical microangiography (UHS-OMAG) for highly sensitive volumetric imaging of the renal microcirculation. The UHS-OMAG imaging system is based on spectral domain optical coherence tomography (SD-OCT) and uses a CCD camera with a 47,000 A-lines/s scan rate to achieve an imaging speed of 150 frames per second, so that only ~7 seconds are needed to acquire a 3D image. The technique, capable of measuring slow blood flow down to 4 um/s, is sensitive enough to image capillary networks, such as the peritubular capillaries and glomeruli within the renal cortex. We show the superior performance of UHS-OMAG in providing depth-resolved volumetric images of the rich renal microcirculation. We monitored the dynamics of the renal microvasculature during renal ischemia and reperfusion. An obvious reduction of renal microvascular density due to renal ischemia was visualized and quantitatively analyzed. This technique can be helpful for the assessment of chronic kidney disease (CKD), which is related to abnormal microvasculature.

  5. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization.

    PubMed

    Sato, Y; Nakamoto, M; Tamaki, Y; Sasama, T; Sakita, I; Nakajima, Y; Monden, M; Tamura, S

    1998-10-01

    This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. By combining an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.

  6. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  7. Volumetric sub-surface imaging using spectrally encoded endoscopy.

    PubMed

    Yelin, D; Bouma, B E; Tearney, G J

    2008-02-04

    Endoscopic imaging below tissue surfaces and through turbid media may provide improved diagnostic capabilities and visibility in surgical settings. Spectrally encoded endoscopy (SEE) is a recently developed method that utilizes a single optical fiber, miniature optics and a diffractive grating for high-speed imaging through small diameter, flexible endoscopic probes. SEE has also been shown to provide three-dimensional topological imaging capabilities. In this paper, we have configured SEE to additionally image beneath tissue surfaces, by increasing the system's sensitivity and acquiring the complex spectral density for each spectrally resolved point on the sample. In order to demonstrate the capability of SEE to obtain subsurface information, we have utilized the system to image a resolution target through intralipid solution, and conduct volumetric imaging of a mouse embryo and excised human middle-ear ossicles. Our results demonstrate that real-time subsurface imaging is possible with this miniature endoscopy technique.

  8. The effect of CT scanner parameters and 3D volume rendering techniques on the accuracy of linear, angular, and volumetric measurements of the mandible

    PubMed Central

    Whyms, B.J.; Vorperian, H.K.; Gentry, L.R.; Schimek, E.M.; Bersu, E.T.; Chung, M.K.

    2013-01-01

    Objectives: This study investigates the effect of scanning parameters on the accuracy of measurements from three-dimensional multi-detector computed tomography (3D-CT) mandible renderings. A broader range of acceptable parameters can increase the availability of CT studies for retrospective analysis. Study Design: Three human mandibles and a phantom object were scanned using 18 combinations of slice thickness, field of view, and reconstruction algorithm and three different threshold-based segmentations. Measurements of 3D-CT models and specimens were compared. Results: Linear and angular measurements were accurate, irrespective of scanner parameters or rendering technique. Volume measurements were accurate with a slice thickness of 1.25 mm, but not 2.5 mm. Surface area measurements were consistently inflated. Conclusions: Linear, angular and volumetric measurements of mandible 3D-CT models can be confidently obtained from a range of parameters and rendering techniques. Slice thickness is the primary factor affecting volume measurements. These findings should also apply to 3D rendering using cone-beam CT. PMID:23601224

  9. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of the replacement of CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. A CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, versus a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  10. 3D imaging of telomeres and nuclear architecture: An emerging tool of 3D nano-morphology-based diagnosis.

    PubMed

    Knecht, Hans; Mai, Sabine

    2011-04-01

    Patient samples are evaluated by experienced pathologists whose diagnosis guides treating physicians. Pathological diagnoses are complex and often assisted by the application of specific tissue markers. However, cases still exist where pathologists cannot distinguish between closely related entities or determine the aggressiveness of the disease they identify under the microscope. This is due to the absence of reliable markers that define diagnostic subgroups in several cancers. Three-dimensional (3D) imaging of nuclear telomere signatures is emerging as a new tool that may change this situation, offering new opportunities to patients. This article will review current and future avenues in the assessment of diagnostic patient samples.

  11. Dual-mode intracranial catheter integrating 3D ultrasound imaging and hyperthermia for neuro-oncology: feasibility study.

    PubMed

    Herickhoff, Carl D; Light, Edward D; Bing, Kristin F; Mukundan, Srinivasan; Grant, Gerald A; Wolf, Patrick D; Smith, Stephen W

    2009-04-01

    In this study, we investigated the feasibility of an intracranial catheter transducer with dual-mode capability of real-time 3D (RT3D) imaging and ultrasound hyperthermia, for application in the visualization and treatment of tumors in the brain. Feasibility is demonstrated in two ways: first by using a 50-element linear array transducer (17 mm x 3.1 mm aperture) operating at 4.4 MHz with our Volumetrics diagnostic scanner and custom, electrical impedance-matching circuits to achieve a temperature rise over 4 degrees C in excised pork muscle, and second, by designing and constructing a 12 Fr, integrated matrix and linear-array catheter transducer prototype for combined RT3D imaging and heating capability. This dual-mode catheter incorporated 153 matrix array elements and 11 linear array elements diced on a 0.2 mm pitch, with a total aperture size of 8.4 mm x 2.3 mm. This 3.64 MHz array achieved a 3.5 degrees C in vitro temperature rise at a 2 cm focal distance in tissue-mimicking material. The dual-mode catheter prototype was compared with a Siemens 10 Fr AcuNav catheter as a gold standard in experiments assessing image quality and therapeutic potential and both probes were used in an in vivo canine brain model to image anatomical structures and color Doppler blood flow and to attempt in vivo heating.

  12. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    NASA Astrophysics Data System (ADS)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative Expectation Maximization technique is used to register the vertebral body of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  13. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using the automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.
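
    Both automated approaches above are scored against manual segmentation with Dice's similarity coefficient (DSC). As a minimal illustration (not the study's own code), a DSC between two binary 3D masks can be computed with NumPy as follows; the array names are hypothetical placeholders.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary volumes.

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 indicates perfect overlap.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Example usage with hypothetical masks of identical shape:
# dsc = dice_coefficient(automatic_femur_mask, manual_femur_mask)
```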

  14. 3D MR image denoising using rough set and kernel PCA method.

    PubMed

    Phophalia, Ashish; Mitra, Suman K

    2017-02-01

    In this paper, we present a two-stage method, using kernel principal component analysis (KPCA) and rough set theory (RST), for denoising volumetric MRI data. An RST-based clustering technique is used for voxel-based processing: the method groups similar voxels (3D cubes) using class and edge information derived from the noisy input. Each cluster thus formed is represented via a basis vector. These vectors are then projected into kernel space, and PCA is performed in the feature space. The work is motivated by the idea that under Rician noise MRI data may be non-linear, and that kernel mapping helps to define a linear separator between these clusters/basis vectors, which is then used for image denoising. We have further investigated various kernels for Rician noise at different noise levels. The best kernel is then selected on the basis of performance over PSNR and structural similarity (SSIM) measures. The work has been compared with state-of-the-art methods under various measures for synthetic and real databases.
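
    To make the projection step concrete, the sketch below denoises flattened 3D patch vectors by projecting each cluster onto a low-dimensional kernel-PCA subspace and mapping back. It is only an illustration of the general idea under stated assumptions: the paper's rough-set clustering is replaced here by ordinary k-means, and all function and parameter names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA

def kpca_denoise_patches(patches, n_clusters=8, n_components=10, gamma=1e-3):
    """Denoise flattened 3D patches (float array, n_patches x patch_voxels).

    Each cluster of similar patches is projected onto a kernel-PCA
    subspace and reconstructed, attenuating the noise component.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(patches)
    denoised = np.empty_like(patches)
    for k in range(n_clusters):
        idx = np.flatnonzero(labels == k)
        if len(idx) == 0:
            continue
        n_comp = min(n_components, len(idx))   # cannot exceed cluster size
        kpca = KernelPCA(n_components=n_comp, kernel="rbf", gamma=gamma,
                         fit_inverse_transform=True)
        denoised[idx] = kpca.inverse_transform(kpca.fit_transform(patches[idx]))
    return denoised
```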

  15. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  16. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator which can project up to four different view images to each eye is introduced. Using images that carry both disparity and perspective, this simulator shows that the depth of field (DOF) is extended beyond the default DOF values as the number of different view images projected simultaneously but separately to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments for the image with both disparity and perspective are not prominent compared with the image with disparity only.

  17. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and quality of reconstruction have been studied by varying influencing parameters such as the concentration of dichromate and electron donor, and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region without any post-thermal or chemical processing.

  18. 3-D Velocity Measurement of Natural Convection Using Image Processing

    NASA Astrophysics Data System (ADS)

    Shinoki, Masatoshi; Ozawa, Mamoru; Okada, Toshifumi; Kimura, Ichiro

    This paper describes a quantitative three-dimensional measurement method for the flow field of a rotating Rayleigh-Benard convection in a cylindrical cell heated below and cooled above. A correlation method for two-dimensional measurement was extended to a spatio-temporal correlation method. Erroneous vectors, which often appear in the correlation method, were successfully removed using a Hopfield neural network. As a result, the calculated 3-D velocity vector distribution corresponded well to the observed temperature distribution. Consequently, a simultaneous three-dimensional measurement system for temperature and flow field was developed.

  19. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  20. In-vivo Optical Tomography of Small Scattering Specimens: time-lapse 3D imaging of the head eversion process in Drosophila melanogaster

    PubMed Central

    Arranz, Alicia; Dong, Di; Zhu, Shouping; Savakis, Charalambos; Tian, Jie; Ripoll, Jorge

    2014-01-01

    Even though in vivo imaging approaches have witnessed several new and important developments, specimens that exhibit high light scattering properties, such as Drosophila melanogaster pupae, are still not easily accessible with current optical imaging techniques, which obtain images only from subsurface features. This means that in order to obtain 3D volumetric information these specimens need to be studied either after fixation and a chemical clearing process, through an imaging window (thus perturbing physiological development), or during early stages of development when the scattering contribution is negligible. In this paper we showcase how Optical Projection Tomography may be used to obtain volumetric images of the head eversion process in vivo in Drosophila melanogaster pupae, both in control and headless mutant specimens. Additionally, we demonstrate the use of Helical Optical Projection Tomography (hOPT) as a tool for high-throughput 4D imaging of several specimens simultaneously. PMID:25471694

  1. In-vivo optical tomography of small scattering specimens: time-lapse 3D imaging of the head eversion process in Drosophila melanogaster.

    PubMed

    Arranz, Alicia; Dong, Di; Zhu, Shouping; Savakis, Charalambos; Tian, Jie; Ripoll, Jorge

    2014-12-04

    Even though in vivo imaging approaches have witnessed several new and important developments, specimens that exhibit high light scattering properties, such as Drosophila melanogaster pupae, are still not easily accessible with current optical imaging techniques, which obtain images only from subsurface features. This means that in order to obtain 3D volumetric information these specimens need to be studied either after fixation and a chemical clearing process, through an imaging window (thus perturbing physiological development), or during early stages of development when the scattering contribution is negligible. In this paper we showcase how Optical Projection Tomography may be used to obtain volumetric images of the head eversion process in vivo in Drosophila melanogaster pupae, both in control and headless mutant specimens. Additionally, we demonstrate the use of Helical Optical Projection Tomography (hOPT) as a tool for high-throughput 4D imaging of several specimens simultaneously.

  2. Dual-Color 3D Superresolution Microscopy by Combined Spectral-Demixing and Biplane Imaging

    PubMed Central

    Winterflood, Christian M.; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-01-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. PMID:26153696

  3. 3D fluorescence anisotropy imaging using selective plane illumination microscopy

    PubMed Central

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-01-01

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202

  4. Reconstruction of 3D Digital Image of Weeping Forsythia Pollen

    NASA Astrophysics Data System (ADS)

    Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

    Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized deeply, and three-dimensional images created. Compared with conventional microscopes, the confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, the confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is straightforward to analyze the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO).

  5. In vivo volumetric imaging of subcutaneous microvasculature by photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Hao F.; Maslov, Konstantin; Li, Meng-Lin; Stoica, George; Wang, Lihong V.

    2006-10-01

    Photoacoustic microscopy was developed to achieve volumetric imaging of the anatomy and functions of the subcutaneous microvasculature in both small animals and humans in vivo with high spatial resolution and high signal-to-background ratio. By following the skin contour in raster scanning, the ultrasonic transducer maintains focusing in the region of interest. Furthermore, off-focus lateral resolution is improved by using a synthetic-aperture focusing technique based on the virtual point detector concept. Structural images are acquired in both rats and humans, whereas functional images representing hemoglobin oxygen saturation are acquired in rats. After multiscale vesselness filtering, arterioles and venules in the image are separated based on the imaged oxygen saturation levels. Detailed structural information, such as vessel depth and spatial orientation, is revealed by volume rendering.

  6. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasive nature, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures based on the detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  7. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  8. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second printer used is a "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  9. Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine.

    PubMed

    Lee, S; Lee, J; Lee, A; Park, N; Lee, S; Song, S; Seo, A; Lee, H; Kim, J-I; Eom, K

    2013-05-01

    Augmented reality (AR) is a technology which enables users to see the real world, with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were acquired using a 64-channel multidetector scanner. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary students and n = 12 veterinary students in each group) and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire which was administered after using the simulator. The group that was trained using the AR simulator was more proficient at IV injection technique using real dogs than the control group (P ≤ 0.01). The students agreed that they learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education.

  10. An attempt at 3-D imaging of a small domain (Almeria, southern Spain)

    NASA Astrophysics Data System (ADS)

    Badal, J.; Sabadell, F. J.; Serón, F. J.

    2000-11-01

    When applying a methodology for obtaining the 3-D shear-wave velocity structure of a medium from surface wave dispersion data, the problem must be considered with caution since one inverts path-averaged velocities and the use of any inversion method entails some drawbacks such as lack of uniqueness, unwarranted stability and constraints affecting the data. Several imaging techniques aimed at volumetric modeling and the visualization of data can be used to overcome these drawbacks. In particular, some spatial prediction techniques are especially useful for analyzing short-range variability between scattered points. We use here a pathwise reconstruction by means of an algorithm that, from a mathematical viewpoint, can be understood through the application of the orthogonal projection theorem onto convex sets (POCS). In particular, we are interested in exploring the possibilities of a POCS algorithm operating on a very unfavorable case constrained by a lack of available data. In this paper, we have tackled a small-sized problem and we present results based on ray-path seismic velocities obtained, in the case of a sparsely sampled study area such as Almeria (southeastern Spain), from tomographic images produced by the application of such an algorithm. The main goal of this procedure is the reconstruction of the very shallow Rg-wave velocity structure of a small domain strongly constrained by the data. The method has allowed us to examine the sharply contrasting geology between neighboring geological formations. Although the relationship between lateral changes in Rg-wave dispersion and geologic structure may not be straightforward, we have observed a correlation between the velocity structure of very shallow soils and the local geology at the surface. The good agreement between our results and the field observations proves the versatility of the method and the reliability of the imaging.
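
    The reconstruction above relies on a POCS (projection onto convex sets) iteration. As a generic, minimal sketch of how such an iteration alternates projections, and not the authors' specific constraint sets or velocity data, the following fills missing samples of a 2D field by alternating a band-limiting projection, a non-negativity projection, and a data-consistency step; all names and parameters are illustrative.

```python
import numpy as np

def pocs_fill(measured, known_mask, keep_fraction=0.25, n_iter=200):
    """Alternating projections onto three convex sets:
    (1) band-limited images (low-pass support in the Fourier domain),
    (2) non-negative images,
    (3) images that agree with the measured samples where known_mask is True.
    """
    ny, nx = measured.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    lowpass = (np.abs(fy) <= keep_fraction / 2) & (np.abs(fx) <= keep_fraction / 2)

    x = np.where(known_mask, measured, 0.0)
    for _ in range(n_iter):
        spec = np.fft.fft2(x) * lowpass            # project onto band-limited subspace
        x = np.fft.ifft2(spec).real
        x = np.clip(x, 0.0, None)                  # project onto the non-negative set
        x = np.where(known_mask, measured, x)      # restore the known samples
    return x
```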

  11. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images in order to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like volume slicing by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images were sufficient to make multi-layer images for displaying a 3D image. When the number of viewpoint images is limited, the viewing area that allows stereoscopic viewing becomes narrow. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a ±20 degree area. In addition, we render pseudo multiple viewpoint images using the depth map, so motion parallax can be generated at the same time.

  12. Ellipsoid Segmentation Model for Analyzing Light-Attenuated 3D Confocal Image Stacks of Fluorescent Multi-Cellular Spheroids

    PubMed Central

    Barbier, Michaël; Jaensch, Steffen; Cornelissen, Frans; Vidic, Suzana; Gjerde, Kjersti; de Hoogt, Ronald; Graeser, Ralph; Gustin, Emmanuel; Chong, Yolanda T.

    2016-01-01

    In oncology, two-dimensional in-vitro culture models are the standard test beds for the discovery and development of cancer treatments, but in the last decades evidence emerged that such models have low predictive value for clinical efficacy. Therefore they are increasingly complemented by more physiologically relevant 3D models, such as spheroid micro-tumor cultures. If suitable fluorescent labels are applied, confocal 3D image stacks can characterize the structure of such volumetric cultures and, for example, cell proliferation. However, several issues hamper accurate analysis. In particular, signal attenuation within the tissue of the spheroids prevents the acquisition of a complete image for spheroids over 100 micrometers in diameter, and quantitative analysis of large 3D image data sets is challenging, creating a need for methods which can be applied to large-scale experiments and account for impeding factors. We present a robust, computationally inexpensive 2.5D method for the segmentation of spheroid cultures and for counting proliferating cells within them. The spheroids are assumed to be approximately ellipsoid in shape. They are identified from information present in the Maximum Intensity Projection (MIP) and the corresponding height view, also known as the Z-buffer. The method alerts the user when potential bias-introducing factors cannot be compensated for, and it includes a compensation for signal attenuation. PMID:27303813
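
    The 2.5D representation above is built from the Maximum Intensity Projection and the matching height view (Z-buffer). A minimal sketch of deriving these two views from a confocal z-stack is shown below, assuming the stack is stored as a NumPy array with the z axis first; the function name is illustrative.

```python
import numpy as np

def mip_and_zbuffer(stack):
    """Collapse a confocal z-stack of shape (z, y, x) into
    (1) the Maximum Intensity Projection and
    (2) the Z-buffer: the z-index at which that maximum occurs per pixel.
    """
    mip = stack.max(axis=0)
    zbuffer = stack.argmax(axis=0)
    return mip, zbuffer
```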

  13. Scanning laser optical tomography: a highly efficient volumetric imaging technique for mesoscopic specimens

    NASA Astrophysics Data System (ADS)

    Meyer, H.; Antonopoulos, G.; Heidrich, M.; Lorbeer, R.-A.; Kellner, M.; Winkel, A.; Stiesch, M.; Kühnel, M. P.; Ochs, M.; Ripken, T.

    2013-06-01

    Imaging of biological samples places high demands on multimodal 3D imaging techniques. Lately, the range of application fields has extended from transparent biological samples to biological compartments on opaque objects. We introduce SLOT as an innovative and highly efficient tool for multimodal visualization, using intrinsic and extrinsic contrast mechanisms, in biological model organisms with sizes up to several millimeters. One aim is the exploration of SLOT's capability to image organs of biological model organisms. To this end, intrinsic contrast mechanisms were assessed regarding their ability to visualize and quantify structural details within the murine lung. Additionally, we present SLOT as a valuable tool for the in vitro structural and volumetric large-scale investigation of biofilm formation on implants with sizes up to several millimeters.

  14. Fully automatic segmentation of the mitral leaflets in 3D transesophageal echocardiographic images using multi-atlas joint label fusion and deformable medial modeling.

    PubMed

    Pouch, A M; Wang, H; Takabe, M; Jackson, B M; Gorman, J H; Gorman, R C; Yushkevich, P A; Sehgal, C M

    2014-01-01

    Comprehensive visual and quantitative analysis of in vivo human mitral valve morphology is central to the diagnosis and surgical treatment of mitral valve disease. Real-time 3D transesophageal echocardiography (3D TEE) is a practical, highly informative imaging modality for examining the mitral valve in a clinical setting. To facilitate visual and quantitative 3D TEE image analysis, we describe a fully automated method for segmenting the mitral leaflets in 3D TEE image data. The algorithm integrates complementary probabilistic segmentation and shape modeling techniques (multi-atlas joint label fusion and deformable modeling with continuous medial representation) to automatically generate 3D geometric models of the mitral leaflets from 3D TEE image data. These models are unique in that they establish a shape-based coordinate system on the valves of different subjects and represent the leaflets volumetrically, as structures with locally varying thickness. In this work, expert image analysis is the gold standard for evaluating automatic segmentation. Without any user interaction, we demonstrate that the automatic segmentation method accurately captures patient-specific leaflet geometry at both systole and diastole in 3D TEE data acquired from a mixed population of subjects with normal valve morphology and mitral valve disease.

  15. Retrospective evaluation of dosimetric quality for prostate carcinomas treated with 3D conformal, intensity modulated and volumetric modulated arc radiotherapy

    SciTech Connect

    Crowe, Scott B; Kairn, Tanya; Middlebrook, Nigel; Hill, Brendan; Christie, David R H; Knight, Richard T; Kenny, John; Langton, Christian M; Trapp, Jamie V

    2013-12-15

    This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated across five centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity modulated radiotherapy (IMRT) and 47 treated with volumetric modulated arc therapy (VMAT). Treatment plan quality was evaluated in terms of target dose homogeneity and dose to organs at risk (OAR), through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values that were relevant to each OAR. Statistical significance was evaluated using two-tailed Welch's T-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the OAR, with increased compliance with recommended OAR dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. This study indicates that IMRT and VMAT have provided similar dosimetric quality, which is superior to the dosimetric quality achieved with 3DCRT.

  16. Retrospective evaluation of dosimetric quality for prostate carcinomas treated with 3D conformal, intensity modulated and volumetric modulated arc radiotherapy

    PubMed Central

    Crowe, Scott B; Kairn, Tanya; Middlebrook, Nigel; Hill, Brendan; Christie, David R H; Knight, Richard T; Kenny, John; Langton, Christian M; Trapp, Jamie V

    2013-01-01

    Introduction This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated across five centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity modulated radiotherapy (IMRT) and 47 treated with volumetric modulated arc therapy (VMAT). Methods Treatment plan quality was evaluated in terms of target dose homogeneity and dose to organs at risk (OAR), through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values that were relevant to each OAR. Statistical significance was evaluated using two-tailed Welch's T-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the OAR, with increased compliance with recommended OAR dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions This study indicates that IMRT and VMAT have provided similar dosimetric quality, which is superior to the dosimetric quality achieved with 3DCRT. PMID:26229621

  17. Research in Image Understanding as Applied to 3-D Microwave Tomographic Imaging with Near Optical Resolution.

    DTIC Science & Technology

    1986-03-10

    [Recovered fragments from a garbled table of contents: "Severe Clutter"; "Optical Implementation of the Hopfield Model".] ...can be employed in future broad-band imaging radar networks capable of providing 3-D projective or tomographic images of remote aerospace targets. We expect the results of this effort to tell us how to achieve centimeter resolution on remote aerospace objects cost-effectively using microwave

  18. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

  19. Fast non local means denoising for 3D MR images.

    PubMed

    Coupé, Pierrick; Yger, Pierre; Barillot, Christian

    2006-01-01

    One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non-Local (NL) Means algorithm. This approach uses the natural redundancy of information in the image to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as the Anisotropic Diffusion Filter and Total Variation.
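
    For readers who want to experiment with the idea, a present-day reference implementation of non-local means is available in scikit-image; the short sketch below applies it to a 3D MR volume. This illustrates the general filter, not the authors' optimized variant, and the parameter values are only reasonable starting points.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def nlm_denoise_volume(volume):
    """Non-local means on a 3D MR volume (floating-point array).

    Voxels are averaged with other voxels whose surrounding patches look
    similar, exploiting the redundancy of structures in the image.
    """
    sigma = float(np.mean(estimate_sigma(volume)))
    return denoise_nl_means(volume,
                            patch_size=3,        # edge length of comparison patches
                            patch_distance=5,    # radius of the search window
                            h=1.15 * sigma,      # filtering strength
                            fast_mode=True)
```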

  20. 3D integral imaging using diffractive Fresnel lens arrays.

    PubMed

    Hain, Mathias; von Spiegel, Wolff; Schmiedchen, Marc; Tschudi, Theo; Javidi, Bahram

    2005-01-10

    We present experimental results with binary amplitude Fresnel lens arrays and binary phase Fresnel lens arrays used to implement integral imaging systems. Their optical performance is compared with that of high-quality refractive microlens arrays and pinhole arrays in terms of image quality, color distortion and contrast. Additionally, we show the first experimental results of lens arrays with different focal lengths in integral imaging, and discuss their ability to simultaneously increase both the depth of focus and the field of view.

  1. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  2. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  3. A review of 3D/2D registration methods for image-guided interventions.

    PubMed

    Markelj, P; Tomaževič, D; Likar, B; Pernuš, F

    2012-04-01

    Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computer tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration.

  4. Volumetric elasticity imaging with a 2-D CMUT array.

    PubMed

    Fisher, Ted G; Hall, Timothy J; Panda, Satchi; Richards, Michael S; Barbone, Paul E; Jiang, Jingfeng; Resnick, Jeff; Barnes, Steve

    2010-06-01

    This article reports the use of a two-dimensional (2-D) capacitive micro-machined ultrasound transducer (CMUT) to acquire radio-frequency (RF) echo data from relatively large volumes of a simple ultrasound phantom to compare three-dimensional (3-D) elasticity imaging methods. Typical 2-D motion tracking for elasticity image formation was compared with three different methods of 3-D motion tracking, with sum-squared difference (SSD) used as the similarity measure. Differences among the algorithms were the degree to which they tracked elevational motion: not at all (2-D search), planar search, combination of multiple planes and plane independent guided search. The cross-correlation between the predeformation and motion-compensated postdeformation RF echo fields was used to quantify motion tracking accuracy. The lesion contrast-to-noise ratio was used to quantify image quality. Tracking accuracy and strain image quality generally improved with increased tracking sophistication. When used as input for a 3-D modulus reconstruction, high quality 3-D displacement estimates yielded accurate and low noise modulus reconstruction.

  5. Volumetric Elasticity Imaging with a 2D CMUT Array

    PubMed Central

    Fisher, Ted G.; Hall, Timothy J.; Panda, Satchi; Richards, Michael S.; Barbone, Paul E.; Jiang, Jingfeng; Resnick, Jeff; Barnes, Steve

    2010-01-01

    This paper reports the use of a two-dimensional (2D) capacitive micro-machined ultrasound transducer (CMUT) to acquire radio frequency (RF) echo data from relatively large volumes of a simple ultrasound phantom to compare 3D elasticity imaging methods. Typical 2D motion tracking for elasticity image formation was compared to three different methods of 3D motion tracking, with sum-squared difference (SSD) used as the similarity measure. Differences among the algorithms were the degree to which they tracked elevational motion: not at all (2D search), planar search, combination of multiple planes, and plane independent guided search. The cross correlation between the pre-deformation and motion-compensated post-deformation RF echo fields was used to quantify motion tracking accuracy. The lesion contrast-to-noise ratio was used to quantify image quality. Tracking accuracy and strain image quality generally improved with increased tracking sophistication. When used as input for a 3D modulus reconstruction, high quality 3D displacement estimates yielded accurate and low noise modulus reconstruction. PMID:20510188
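
    The motion-tracking comparison above uses the sum-squared difference (SSD) as the similarity measure. A minimal, illustrative 3D block-matching sketch with an exhaustive SSD search is shown below; it is not the guided search described in the paper, and all names, the block size, and the search radius are assumptions.

```python
import numpy as np

def ssd_track_3d(pre, post, center, block=5, search=3):
    """Estimate the 3D displacement of a small block around `center` by
    minimizing the sum-squared difference between the block in the
    pre-deformation volume and candidate blocks in the post-deformation volume.
    Assumes `center` lies far enough from the volume boundaries.
    """
    z, y, x = center
    b = block // 2
    ref = pre[z-b:z+b+1, y-b:y+b+1, x-b:x+b+1].astype(float)
    best, best_ssd = (0, 0, 0), np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = post[z+dz-b:z+dz+b+1,
                            y+dy-b:y+dy+b+1,
                            x+dx-b:x+dx+b+1].astype(float)
                ssd = np.sum((ref - cand) ** 2)
                if ssd < best_ssd:
                    best_ssd, best = ssd, (dz, dy, dx)
    return best  # (dz, dy, dx) displacement in voxels
```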

  6. Bi-sided integral imaging with 2D/3D convertibility using scattering polarizer.

    PubMed

    Yeom, Jiwoon; Hong, Keehoon; Park, Soon-gi; Hong, Jisoo; Min, Sung-Wook; Lee, Byoungho

    2013-12-16

    We propose a two-dimensional (2D) and three-dimensional (3D) convertible bi-sided integral imaging system. The proposed system uses the polarization state of the projected light to switch its operation mode between 2D and 3D modes. By using an optical module composed of two scattering polarizers and one linear polarizer, the proposed integral imaging system simultaneously provides 3D images with 2D background images for observers who are located at the front and rear sides of the system. The occlusion effect between 2D images and 3D images is realized by using a compensation mask for the 2D images and the elemental images. The principle of the proposed system is experimentally verified.

  7. Edge structure preserving 3D image denoising by local surface approximation.

    PubMed

    Qiu, Peihua; Mukherjee, Partha Sarathi

    2012-08-01

    In various applications, including magnetic resonance imaging (MRI) and functional MRI (fMRI), 3D images are becoming increasingly popular. To improve the reliability of subsequent image analyses, 3D image denoising is often a necessary preprocessing step, which is the focus of the current paper. In the literature, most existing image denoising procedures are for 2D images. Their direct extensions to 3D cases generally cannot handle 3D images efficiently because the structure of a typical 3D image is substantially more complicated than that of a typical 2D image. For instance, edge locations are surfaces in 3D cases, which are much more challenging to handle than edge curves in 2D cases. We propose a novel 3D image denoising procedure in this paper, based on local approximation of the edge surfaces using a set of surface templates. An important property of this method is that it can preserve edges and major edge structures (e.g., intersections of two edge surfaces and pointed corners). Numerical studies show that it works well in various applications.

  8. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second, based on 3-D plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to image blood vessels in humans using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first performed by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions was also applied in the carotid artery and the jugular vein of one healthy volunteer.

  9. Effect of anatomical backgrounds on detectability in volumetric cone beam CT images

    NASA Astrophysics Data System (ADS)

    Han, Minah; Park, Subok; Baek, Jongduk

    2016-03-01

    As anatomical noise is often a dominating factor affecting signal detection in medical imaging, we investigate the effects of anatomical backgrounds on signal detection in volumetric cone beam CT images. Signal detection performances are compared between transverse and longitudinal planes with either uniform or anatomical backgrounds. Sphere objects with diameters of 1 mm, 5 mm, 8 mm, and 11 mm are used as the signals. Three-dimensional (3D) anatomical backgrounds are generated using an anatomical noise power spectrum, 1/f^β, with β = 3, equivalent to mammographic background [1]. The mean voxel value of the 3D anatomical backgrounds is used as an attenuation coefficient of the uniform background. Noisy projection data are acquired by the forward projection of the uniform and anatomical 3D backgrounds with/without sphere lesions and by the addition of quantum noise. Then, images are reconstructed by an FDK algorithm [2]. For each signal size, signal detection performances in transverse and longitudinal planes are measured by calculating the task SNR of a channelized Hotelling observer with Laguerre-Gauss channels. In the uniform background case, transverse planes yield higher task SNR values for all sphere diameters but 1 mm. In the anatomical background case, longitudinal planes yield higher task SNR values for all signal diameters. The results indicate that it is beneficial to use longitudinal planes to detect spherical signals in anatomical backgrounds.
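
    Detectability here is summarized by the task SNR of a channelized Hotelling observer. Leaving aside the Laguerre-Gauss channel generation, the task SNR can be computed from channelized image data as in the hedged sketch below; the inputs are assumed to be precomputed channel outputs, and all names are illustrative.

```python
import numpy as np

def cho_task_snr(v_signal, v_absent):
    """Task SNR of a channelized Hotelling observer.

    v_signal, v_absent : arrays of shape (n_images, n_channels) holding the
    channel outputs for signal-present and signal-absent images.
    SNR = sqrt(dv.T @ S^-1 @ dv), where dv is the mean channel-output
    difference and S is the average of the two class covariance matrices.
    """
    dv = v_signal.mean(axis=0) - v_absent.mean(axis=0)
    S = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_absent, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```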

  10. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    Orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and the existing methods have respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with single or multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The tooth axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, while the estimation of tooth axes of missing teeth is modeled as an interpolation problem of quaternions along a 3D curve. The proposed methods can either avoid the difficult tooth segmentation problem or improve upon the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different 3D CT images from the clinic.
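
    The principal direction of a segmented pulp cavity can be obtained from the covariance of its voxel coordinates. The sketch below shows this standard PCA step under the assumption that a binary mask and the voxel spacing are available; it is illustrative only and not the paper's implementation.

```python
import numpy as np

def principal_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """Principal direction of a segmented structure (e.g., a pulp cavity).

    Gathers the physical coordinates of all voxels in the binary mask and
    returns the unit eigenvector of their covariance matrix with the
    largest eigenvalue, i.e. the direction of greatest elongation.
    """
    coords = np.argwhere(mask) * np.asarray(spacing, dtype=float)
    coords -= coords.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords, rowvar=False))
    return eigvecs[:, np.argmax(eigvals)]
```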

  11. Review of three-dimensional (3D) surface imaging for oncoplastic, reconstructive and aesthetic breast surgery.

    PubMed

    O'Connell, Rachel L; Stevens, Roger J G; Harris, Paul A; Rusby, Jennifer E

    2015-08-01

    Three-dimensional surface imaging (3D-SI) is being marketed as a tool in aesthetic breast surgery. It has recently also been studied in the objective evaluation of the cosmetic outcome of oncological procedures. The aim of this review is to summarise the use of 3D-SI in oncoplastic, reconstructive and aesthetic breast surgery. An extensive literature review was undertaken to identify published studies. Two reviewers independently screened all abstracts and selected relevant articles using specific inclusion criteria. Seventy-two articles relating to 3D-SI for breast surgery were identified. These covered endpoints such as image acquisition, calculations and obtainable data, comparison of 3D and 2D imaging, and clinical research applications of 3D-SI. The literature provides a favourable view of 3D-SI. However, evidence of its superiority over current methods of clinical decision making, surgical planning, communication and evaluation of outcome is required before it can be accepted into mainstream practice.

  12. Volumetric Real-Time Imaging Using a CMUT Ring Array

    PubMed Central

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods—flash, classic phased array (CPA), and synthetic phased array (SPA)—were used in the study. For SPA imaging, two techniques to improve the image quality—Hadamard coding and aperture weighting—were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming. PMID:22718870
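
    One of the techniques credited with improving SPA image quality above is Hadamard coding of the transmit events. As a hedged sketch of the decoding step only (the beamforming itself is omitted, and the data layout is an assumption), multi-element coded firings can be converted back to equivalent single-element transmissions by applying the inverse of the Hadamard coding matrix:

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_decode(coded_rf):
    """Recover single-element transmit data from Hadamard-coded firings.

    coded_rf : array of shape (n_transmits, n_receive_channels, n_samples),
    where transmit event i used row i of an n x n Hadamard matrix as the
    per-element +1/-1 apodization. Because H @ H.T = n * I, multiplying by
    H.T / n inverts the coding (assuming linear superposition).
    """
    n = coded_rf.shape[0]          # must be a power of two
    H = hadamard(n)
    return np.tensordot(H.T / n, coded_rf, axes=([1], [0]))
```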

  13. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    PubMed

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, a better understanding needs to be achieved due to the subtle nature of PSI systems. In this study, a 3D printing technique based on computed tomography (CT) image data has been utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as a PSI group and a control group, with no significant difference in age and sex (p > 0.05). The PSI group was treated with 3D printed cutting guides whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of patients, it can be found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as the hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there is a significant improvement in achieving the desired implant position (p < 0.05). Derived from the morphology of patients' knees, the PSI represents the convergence of congruent designs with current personalized treatment tools. The PSI can achieve improved extremity alignment and greater accuracy of prosthesis implantation compared with the control method, which indicates its potential for achieving optimal HKA, FFC, and FTC angles.

  14. Evaluation of stereoscopic 3D displays for image analysis tasks

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. Stereoscopic display technologies can enhance an image analyst's ability to detect or identify certain objects of interest, resulting in higher performance. The change in image acquisition from analog to digital techniques also required a change in stereoscopic visualisation techniques. Recently, different kinds of affordable digital stereoscopic display techniques have appeared on the market. At Fraunhofer IITB, usability tests were carried out to determine (1) with which of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve high acceptance. First, image analysts were interviewed to define typical image analysis tasks that were expected to be solved with higher performance using stereoscopic display techniques. Next, observer experiments were carried out in which image analysts had to solve the defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the display techniques), two of the examined stereoscopic display technologies were found to perform very well and to be appropriate for the tasks.

  15. Advances in automated 3-D image analyses of cell populations imaged by confocal microscopy.

    PubMed

    Ancin, H; Roysam, B; Dufresne, T E; Chestnut, M M; Ridder, G M; Szarowski, D H; Turner, J N

    1996-11-01

    Automated three-dimensional (3-D) image analysis methods are presented for rapid and effective analysis of populations of fluorescently labeled cells or nuclei in thick tissue sections that have been imaged three dimensionally using a confocal microscope. The methods presented here greatly improve upon our earlier work (Roysam et al.: J Microsc 173:115-126, 1994). The principal advances reported are: algorithms for efficient data pre-processing and adaptive segmentation, effective handling of image anisotropy, and fast 3-D morphological algorithms for separating overlapping or connected clusters utilizing image gradient information whenever available. A particular feature of this method is its ability to separate densely packed and connected clusters of cell nuclei. Some of the challenges overcome in this work include the efficient and effective handling of imaging noise, anisotropy, and large variations in image parameters such as intensity, object size, and shape. The method is able to handle significant inter-cell, intra-cell, inter-image, and intra-image variations. Studies indicate that this method is rapid, robust, and adaptable. Examples are presented to illustrate the applicability of this approach to analyzing images of nuclei from densely packed regions in thick sections of rat liver and brain that were labeled with a fluorescent Schiff reagent.
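
    The cluster-separation step described above (seeded, gradient-guided splitting of touching nuclei) can be sketched with standard tools; the Python example below runs a distance-transform-seeded watershed on the intensity gradient of a synthetic anisotropic stack. The filter sizes, thresholds, and voxel spacing are assumptions for illustration, not the published algorithm.

```python
# Illustrative sketch: separating touching nuclei in a 3-D confocal-like
# stack with a distance-transform seeded, gradient-weighted watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic anisotropic volume with two overlapping "nuclei".
zz, yy, xx = np.mgrid[0:40, 0:80, 0:80]
vol = (np.exp(-(((zz - 20) ** 2) / 60 + ((yy - 35) ** 2 + (xx - 35) ** 2) / 180)) +
       np.exp(-(((zz - 20) ** 2) / 60 + ((yy - 48) ** 2 + (xx - 48) ** 2) / 180)))
vol += 0.05 * np.random.default_rng(0).normal(size=vol.shape)

# 1. Pre-processing and global segmentation: smooth, then Otsu threshold.
smoothed = ndi.gaussian_filter(vol, sigma=(1, 2, 2))   # compensate z anisotropy
mask = smoothed > threshold_otsu(smoothed)

# 2. Seeds from the 3-D Euclidean distance transform (one peak per nucleus).
dist = ndi.distance_transform_edt(mask, sampling=(2.0, 1.0, 1.0))  # anisotropic voxels
peaks = peak_local_max(dist, labels=mask, min_distance=8)
seeds = np.zeros(mask.shape, dtype=int)
seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# 3. Watershed on the image gradient magnitude so cuts follow intensity edges.
gz, gy, gx = np.gradient(smoothed)
gradient = np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
labels = watershed(gradient, markers=seeds, mask=mask)
print("separated objects:", labels.max())
```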

  16. [Change in condylar and mandibular morphology in juvenile idiopathic arthritis: cone beam volumetric imaging].

    PubMed

    Garagiola, Umberto; Mercatali, Lorenzo; Bellintani, Claudio; Fodor, Attila; Farronato, Giampietro; Lőrincz, Adám

    2013-03-01

    The aim of this study is to show the value of cone beam computerized tomography (CBCT) for volumetrically quantifying TMJ damage in patients with juvenile idiopathic arthritis (JIA) by measuring real condylar and mandibular volumes. Thirty-four children with temporomandibular involvement due to JIA were examined with CBCT; four were excluded because of excessive imaging artefacts. The mandible was isolated from the other craniofacial structures, and the whole mandibular volume and the volumes of its components (condyle, ramus, hemibody, and hemisymphysis on the right and left sides) were calculated with a 3D volume rendering technique. The results show a highly statistically significant difference between affected-side and normal-side volumes, above all in the condylar region (P < 0.01), whereas no statistical difference was found between the right and left sides. CBCT represents a major improvement in understanding condylar and mandibular morphological changes, even in the early stages of JIA. In children, JIA can lead to temporomandibular joint damage with alterations in facial development and growth.
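
    The volume measurement itself reduces to counting labelled voxels and multiplying by the voxel volume once the mandibular structures have been segmented. A minimal sketch, assuming an already-labelled CBCT volume and an illustrative voxel spacing:

```python
# Minimal sketch of the volume computation step, assuming the mandibular
# structures have already been segmented into a labelled voxel mask;
# label values and voxel spacing are illustrative, not from the study.
import numpy as np

def structure_volume_mm3(label_volume, label, voxel_spacing_mm):
    """Volume of one labelled structure = voxel count x single-voxel volume."""
    voxel_volume = float(np.prod(voxel_spacing_mm))     # mm^3 per voxel
    return np.count_nonzero(label_volume == label) * voxel_volume

# Toy example: compare two labelled "condyles" in a random labelled volume.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=(120, 120, 120))       # 0 = background, 1/2 = structures
spacing = (0.3, 0.3, 0.3)                                # assumed isotropic 0.3 mm CBCT voxels
right = structure_volume_mm3(labels, 1, spacing)
left = structure_volume_mm3(labels, 2, spacing)
print(f"right: {right:.0f} mm^3, left: {left:.0f} mm^3, "
      f"asymmetry: {100 * abs(right - left) / max(right, left):.1f}%")
```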

  17. In vivo volumetric imaging of the human corneo-scleral limbus with spectral domain OCT

    PubMed Central

    Bizheva, Kostadinka; Hutchings, Natalie; Sorbara, Luigina; Moayed, Alireza A.; Simpson, Trefford

    2011-01-01

    The limbus is the structurally rich transitional region of tissue between the cornea on one side, and the sclera and conjunctiva on the other. This zone, among other things, contains nerves passing to the cornea, blood and lymph vasculature for oxygen and nutrient delivery, for waste and CO2 removal, and for drainage of the aqueous humour. In addition, the limbus contains stem cells responsible for the existence and healing of the corneal epithelium. Here we present 3D images of the healthy human limbus, acquired in vivo with a spectral domain optical coherence tomography system operating at 1060 nm. Cross-sectional and volumetric images were acquired from temporal and nasal locations in the human limbus with ~3 µm x 18 µm (axial x lateral) resolution in biological tissue at the rate of 92,000 A-scans/s. The imaging enabled detailed mapping of the corneo-scleral tissue morphology, and visualization of structural details such as the Vogt palisades, the blood and lymph vasculature including Schlemm's canal and the trabecular meshwork, as well as corneal nerve fiber bundles. Non-invasive, volumetric, high-resolution imaging reveals fine details of the normal human limbal structure, and promises to provide invaluable information about its changes in health and disease as well as during and after corneal surgery. PMID:21750758
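
    For a sense of scale, the quoted A-scan rate can be converted into acquisition times; the lateral sampling grid below is an assumption chosen only to illustrate the arithmetic, not the sampling used in the study.

```python
# Back-of-the-envelope timing from the quoted A-scan rate (illustrative only).
A_SCAN_RATE = 92_000            # A-scans per second (from the abstract)
N_FAST, N_SLOW = 512, 512       # assumed lateral sampling (A-scans x B-scans)

b_scan_time = N_FAST / A_SCAN_RATE
volume_time = N_FAST * N_SLOW / A_SCAN_RATE
print(f"one B-scan: {b_scan_time * 1e3:.1f} ms, "
      f"one {N_FAST} x {N_SLOW} volume: {volume_time:.1f} s")
```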

  18. Image processing and 3D visualization in forensic pathologic examination

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1996-02-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

  19. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date, image reconstruction has been performed in an offline step, and thus no direct feedback is available during the experiment. For potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
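
    The block-averaging idea can be sketched as follows: incoming raw-data frames are accumulated, averaged in blocks, and reconstructed with a regularized least-squares step before being displayed. The system matrix, noise level, and block size in this Python sketch are stand-ins, not the authors' implementation.

```python
# Conceptual sketch of online reconstruction with block averaging:
# averaging raw-data frames trades temporal resolution for signal quality.
# The system matrix and data stream are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
M, N = 2000, 500                 # frequency components x voxels (assumed sizes)
S = rng.normal(size=(M, N))      # stand-in for the measured MPI system matrix
x_true = np.zeros(N); x_true[240:260] = 1.0

def acquire_frame():
    """Simulate one raw-data frame arriving from the scanner."""
    return S @ x_true + 0.5 * rng.normal(size=M)

def reconstruct(u, lam=10.0):
    """Regularized least squares: argmin ||S x - u||^2 + lam ||x||^2."""
    return np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ u)

BLOCK = 8                        # frames averaged per displayed image
buffer = []
for frame_idx in range(40):      # one second of data at 40 frames per second
    buffer.append(acquire_frame())
    if len(buffer) == BLOCK:     # block full: average, reconstruct, "display"
        u_avg = np.mean(buffer, axis=0)
        x_hat = reconstruct(u_avg)
        print(f"frame {frame_idx}: peak at voxel {int(np.argmax(x_hat))}")
        buffer.clear()
```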

  20. IMPROMPTU: a system for automatic 3D medical image-analysis.

    PubMed

    Sundaramoorthy, G; Hoford, J D; Hoffman, E A; Higgins, W E

    1995-01-01

    The utility of three-dimensional (3D) medical imaging is hampered by difficulties in extracting anatomical regions and making measurements in 3D images. Presently, a user is generally forced to use time-consuming, subjective, manual methods, such as slice tracing and region painting, to define regions of interest. Automatic image-analysis methods can ameliorate the difficulties of manual methods. This paper describes a graphical user interface (GUI) system for constructing automatic image-analysis processes for 3D medical-imaging applications. The system, referred to as IMPROMPTU, provides a user-friendly environment for prototyping, testing and executing complex image-analysis processes. IMPROMPTU can stand alone or it can interact with an existing graphics-based 3D medical image-analysis package (VIDA), giving a strong environment for 3D image-analysis, consisting of tools for visualization, manual interaction, and automatic processing. IMPROMPTU links to a large library of 1D, 2D, and 3D image-processing functions, referred to as VIPLIB, but a user can easily link in custom-made functions. 3D applications of the system are given for left-ventricular chamber, myocardial, and upper-airway extractions.
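
    The general idea of composing an analysis process from library functions can be illustrated with a small, hypothetical pipeline runner; the function names and steps below are not IMPROMPTU's or VIPLIB's actual API, just a sketch of the chaining concept.

```python
# Hypothetical sketch of a process built as an ordered chain of library
# functions plus parameters, executed automatically on a 3-D image.
import numpy as np
from scipy import ndimage as ndi

def gaussian_smooth(img, sigma=1.0):
    return ndi.gaussian_filter(img, sigma)

def threshold(img, level=0.5):
    return img > level * img.max()

def largest_component(mask):
    labels, n = ndi.label(mask)
    if n == 0:
        return mask
    sizes = ndi.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

def run_pipeline(image, steps):
    """Apply each (function, keyword-arguments) step in order."""
    result = image
    for func, kwargs in steps:
        result = func(result, **kwargs)
    return result

# Example "extraction" process built from the library functions above.
pipeline = [
    (gaussian_smooth, {"sigma": 2.0}),
    (threshold, {"level": 0.6}),
    (largest_component, {}),
]
volume = np.random.default_rng(0).random((32, 64, 64))
region = run_pipeline(volume, pipeline)
print("extracted voxels:", int(region.sum()))
```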

  1. Segmentation and interpretation of 3D protein images

    SciTech Connect

    Leherte, L.; Baxter, K.; Glasgow, J.; Fortier, S.

    1994-12-31

    The segmentation and interpretation of three-dimensional images of proteins is considered. A topological approach is used to represent a protein structure as a spanning tree of critical points, where each critical point corresponds to a residue or the connectivity between residues. The critical points are subsequently analyzed to recognize secondary structure motifs within the protein. Results of applying the approach to ideal and experimental images of proteins at medium resolution are presented.
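
    A rough sketch of the underlying idea, under the assumption that critical points can be approximated by local density maxima: detect the maxima in a synthetic 3D map and connect them with a minimum spanning tree, yielding a graph that can be traversed for connectivity analysis. This is illustrative only, not the authors' method.

```python
# Illustrative sketch: critical points as local maxima of a 3-D density map,
# connected by a minimum spanning tree. The synthetic map and thresholds are
# assumptions for the example.
import numpy as np
from scipy import ndimage as ndi
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

# Synthetic medium-resolution "density map" with a few blobs along a chain.
rng = np.random.default_rng(0)
density = np.zeros((40, 40, 40))
centers = [(10, 10, 10), (15, 18, 14), (20, 24, 20), (26, 28, 27)]
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
for cz, cy, cx in centers:
    density += np.exp(-((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
density += 0.02 * rng.normal(size=density.shape)

# Critical points approximated by local maxima above a density threshold.
local_max = (density == ndi.maximum_filter(density, size=5)) & (density > 0.5)
points = np.argwhere(local_max)

# Minimum spanning tree over pairwise Euclidean distances between the points.
dist = squareform(pdist(points.astype(float)))
mst = minimum_spanning_tree(dist).toarray()
edges = np.argwhere(mst > 0)
print("critical points:", len(points))
for i, j in edges:
    print(f"edge: {tuple(points[i])} -- {tuple(points[j])}  ({mst[i, j]:.1f} voxels)")
```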

  2. Cytology 3D structure formation based on optical microscopy images

    NASA Astrophysics Data System (ADS)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimizing the imaging parameters for biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations is proposed. Based on the experimental results, the optimum number of layers for scanning the object in depth while preserving a holistic perception of its structure was determined.
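
    One simple way to reason about the number of useful depth layers is to score each optical section with a focus measure; the sketch below uses the variance of the Laplacian on a synthetic z-stack. The stack, blur model, and cut-off are assumptions for illustration, not the model proposed in the article.

```python
# Illustrative sketch: score each optical section of a z-stack with a
# focus measure (variance of the Laplacian) to judge how many depth layers
# carry useful structure for a virtual preparation.
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
n_layers = 15
stack = []
for k in range(n_layers):
    blur = abs(k - 7)                       # layers far from focus are blurrier
    img = rng.random((128, 128))
    stack.append(ndi.gaussian_filter(img, sigma=0.5 + blur))
stack = np.array(stack)

focus = np.array([ndi.laplace(layer).var() for layer in stack])
keep = focus > 0.5 * focus.max()            # illustrative cut-off
print("normalized focus scores:", np.round(focus / focus.max(), 2))
print("layers retained for the virtual preparation:", int(keep.sum()))
```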

  3. 3D ultrasound image segmentation using multiple incomplete feature sets

    NASA Astrophysics Data System (ADS)

    Fan, Liexiang; Herrington, David M.; Santago, Peter, II

    1999-05-01

    We use three features, intensity, texture, and motion, to obtain robust segmentation of intracoronary ultrasound images. Using a parameterized equation to describe the lumen-plaque and media-adventitia boundaries, we formulate segmentation as parameter estimation through a cost functional based on the posterior probability, which handles the incompleteness of features in ultrasound images by employing outlier detection.
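
    The boundary-fitting formulation can be sketched as robust parameter estimation: a parameterized radial curve is fitted to per-angle radius estimates from several feature channels, some of which are missing or corrupted. The synthetic data, curve model, and Huber loss below are illustrative stand-ins for the paper's posterior-based cost functional and outlier handling.

```python
# Schematic sketch (not the authors' formulation): the boundary is a
# parameterized curve r(theta) = r0 + a*cos(theta) + b*sin(theta), whose
# parameters are estimated by minimizing a robust cost over noisy,
# incomplete per-angle radius estimates from several feature channels.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
true = (6.0, 0.8, -0.5)                       # r0, a, b of the "true" boundary

def boundary(params, th):
    r0, a, b = params
    return r0 + a * np.cos(th) + b * np.sin(th)

# Radius estimates from three feature channels (intensity, texture, motion
# stand-ins); each channel misses some angles and contains gross outliers.
observations = []
for _ in range(3):
    r_obs = boundary(true, theta) + 0.3 * rng.normal(size=theta.size)
    r_obs[rng.random(theta.size) < 0.2] = np.nan          # incomplete feature
    r_obs[rng.random(theta.size) < 0.05] += 4.0            # outliers
    observations.append(r_obs)

def residuals(params):
    res = []
    for r_obs in observations:
        valid = ~np.isnan(r_obs)
        res.append(boundary(params, theta[valid]) - r_obs[valid])
    return np.concatenate(res)

# The Huber loss downweights outliers, playing the role of outlier handling.
fit = least_squares(residuals, x0=(5.0, 0.0, 0.0), loss="huber", f_scale=0.5)
print("estimated (r0, a, b):", np.round(fit.x, 2), " true:", true)
```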

  4. Improved 3D cellular imaging by multispectral focus assessment